00:00:00.001 Started by upstream project "autotest-nightly" build number 3885 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3265 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.154 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.154 The recommended git tool is: git 00:00:00.155 using credential 00000000-0000-0000-0000-000000000002 00:00:00.156 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.217 Fetching changes from the remote Git repository 00:00:00.223 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.267 Using shallow fetch with depth 1 00:00:00.267 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.267 > git --version # timeout=10 00:00:00.294 > git --version # 'git version 2.39.2' 00:00:00.294 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.312 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.312 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.773 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.784 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.795 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:08.795 > git config core.sparsecheckout # timeout=10 00:00:08.806 > git read-tree -mu HEAD # timeout=10 00:00:08.823 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:08.842 Commit message: "inventory: add WCP3 to free inventory" 00:00:08.842 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:08.949 [Pipeline] Start of Pipeline 00:00:08.961 [Pipeline] library 00:00:08.962 Loading library shm_lib@master 00:00:08.962 Library shm_lib@master is cached. Copying from home. 00:00:08.974 [Pipeline] node 00:00:08.983 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu20-vg-autotest 00:00:08.984 [Pipeline] { 00:00:08.992 [Pipeline] catchError 00:00:08.993 [Pipeline] { 00:00:09.002 [Pipeline] wrap 00:00:09.009 [Pipeline] { 00:00:09.017 [Pipeline] stage 00:00:09.018 [Pipeline] { (Prologue) 00:00:09.030 [Pipeline] echo 00:00:09.031 Node: VM-host-SM16 00:00:09.035 [Pipeline] cleanWs 00:00:09.043 [WS-CLEANUP] Deleting project workspace... 00:00:09.043 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.048 [WS-CLEANUP] done 00:00:09.210 [Pipeline] setCustomBuildProperty 00:00:09.288 [Pipeline] httpRequest 00:00:09.315 [Pipeline] echo 00:00:09.316 Sorcerer 10.211.164.101 is alive 00:00:09.322 [Pipeline] httpRequest 00:00:09.325 HttpMethod: GET 00:00:09.326 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.326 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.330 Response Code: HTTP/1.1 200 OK 00:00:09.330 Success: Status code 200 is in the accepted range: 200,404 00:00:09.331 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:15.563 [Pipeline] sh 00:00:15.849 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:15.868 [Pipeline] httpRequest 00:00:15.892 [Pipeline] echo 00:00:15.894 Sorcerer 10.211.164.101 is alive 00:00:15.903 [Pipeline] httpRequest 00:00:15.907 HttpMethod: GET 00:00:15.908 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:15.909 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:15.921 Response Code: HTTP/1.1 200 OK 00:00:15.922 Success: Status code 200 is in the accepted range: 200,404 00:00:15.922 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:46.470 [Pipeline] sh 00:00:46.748 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:49.285 [Pipeline] sh 00:00:49.562 + git -C spdk log --oneline -n5 00:00:49.562 719d03c6a sock/uring: only register net impl if supported 00:00:49.562 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:49.562 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:49.562 6c7c1f57e accel: add sequence outstanding stat 00:00:49.562 3bc8e6a26 accel: add utility to put task 00:00:49.580 [Pipeline] writeFile 00:00:49.594 [Pipeline] sh 00:00:49.898 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:49.938 [Pipeline] sh 00:00:50.217 + cat autorun-spdk.conf 00:00:50.217 SPDK_TEST_UNITTEST=1 00:00:50.217 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.217 SPDK_TEST_NVME=1 00:00:50.217 SPDK_TEST_BLOCKDEV=1 00:00:50.217 SPDK_RUN_ASAN=1 00:00:50.217 SPDK_RUN_UBSAN=1 00:00:50.217 SPDK_TEST_RAID5=1 00:00:50.217 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:50.224 RUN_NIGHTLY=1 00:00:50.226 [Pipeline] } 00:00:50.243 [Pipeline] // stage 00:00:50.260 [Pipeline] stage 00:00:50.262 [Pipeline] { (Run VM) 00:00:50.276 [Pipeline] sh 00:00:50.556 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:50.556 + echo 'Start stage prepare_nvme.sh' 00:00:50.556 Start stage prepare_nvme.sh 00:00:50.556 + [[ -n 4 ]] 00:00:50.556 + disk_prefix=ex4 00:00:50.556 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest ]] 00:00:50.556 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf ]] 00:00:50.556 + source /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf 00:00:50.556 ++ SPDK_TEST_UNITTEST=1 00:00:50.556 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.556 ++ SPDK_TEST_NVME=1 00:00:50.556 ++ SPDK_TEST_BLOCKDEV=1 00:00:50.556 ++ SPDK_RUN_ASAN=1 00:00:50.556 ++ SPDK_RUN_UBSAN=1 00:00:50.556 ++ SPDK_TEST_RAID5=1 00:00:50.556 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:50.556 ++ RUN_NIGHTLY=1 00:00:50.556 + cd /var/jenkins/workspace/ubuntu20-vg-autotest 00:00:50.556 + 
nvme_files=() 00:00:50.556 + declare -A nvme_files 00:00:50.556 + backend_dir=/var/lib/libvirt/images/backends 00:00:50.556 + nvme_files['nvme.img']=5G 00:00:50.556 + nvme_files['nvme-cmb.img']=5G 00:00:50.556 + nvme_files['nvme-multi0.img']=4G 00:00:50.556 + nvme_files['nvme-multi1.img']=4G 00:00:50.556 + nvme_files['nvme-multi2.img']=4G 00:00:50.556 + nvme_files['nvme-openstack.img']=8G 00:00:50.556 + nvme_files['nvme-zns.img']=5G 00:00:50.556 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:50.556 + (( SPDK_TEST_FTL == 1 )) 00:00:50.556 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:50.556 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:50.556 + for nvme in "${!nvme_files[@]}" 00:00:50.556 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:00:50.556 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:50.556 + for nvme in "${!nvme_files[@]}" 00:00:50.556 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:00:50.556 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:50.556 + for nvme in "${!nvme_files[@]}" 00:00:50.556 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:00:50.556 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:50.556 + for nvme in "${!nvme_files[@]}" 00:00:50.556 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:00:50.556 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:50.556 + for nvme in "${!nvme_files[@]}" 00:00:50.556 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:00:50.556 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:50.556 + for nvme in "${!nvme_files[@]}" 00:00:50.556 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:00:50.556 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:50.556 + for nvme in "${!nvme_files[@]}" 00:00:50.556 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:00:51.123 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:51.123 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:00:51.123 + echo 'End stage prepare_nvme.sh' 00:00:51.123 End stage prepare_nvme.sh 00:00:51.134 [Pipeline] sh 00:00:51.413 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:51.414 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f ubuntu2004 00:00:51.414 00:00:51.414 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant 00:00:51.414 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk 00:00:51.414 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest 00:00:51.414 HELP=0 00:00:51.414 DRY_RUN=0 00:00:51.414 
NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img, 00:00:51.414 NVME_DISKS_TYPE=nvme, 00:00:51.414 NVME_AUTO_CREATE=0 00:00:51.414 NVME_DISKS_NAMESPACES=, 00:00:51.414 NVME_CMB=, 00:00:51.414 NVME_PMR=, 00:00:51.414 NVME_ZNS=, 00:00:51.414 NVME_MS=, 00:00:51.414 NVME_FDP=, 00:00:51.414 SPDK_VAGRANT_DISTRO=ubuntu2004 00:00:51.414 SPDK_VAGRANT_VMCPU=10 00:00:51.414 SPDK_VAGRANT_VMRAM=12288 00:00:51.414 SPDK_VAGRANT_PROVIDER=libvirt 00:00:51.414 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:51.414 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:51.414 SPDK_OPENSTACK_NETWORK=0 00:00:51.414 VAGRANT_PACKAGE_BOX=0 00:00:51.414 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:51.414 FORCE_DISTRO=true 00:00:51.414 VAGRANT_BOX_VERSION= 00:00:51.414 EXTRA_VAGRANTFILES= 00:00:51.414 NIC_MODEL=e1000 00:00:51.414 00:00:51.414 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt' 00:00:51.414 /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest 00:00:53.946 Bringing machine 'default' up with 'libvirt' provider... 00:00:54.514 ==> default: Creating image (snapshot of base box volume). 00:00:54.773 ==> default: Creating domain with the following settings... 00:00:54.773 ==> default: -- Name: ubuntu2004-20.04-1712646987-2220_default_1720869149_dd64f1334015f9cae529 00:00:54.773 ==> default: -- Domain type: kvm 00:00:54.773 ==> default: -- Cpus: 10 00:00:54.773 ==> default: -- Feature: acpi 00:00:54.773 ==> default: -- Feature: apic 00:00:54.773 ==> default: -- Feature: pae 00:00:54.773 ==> default: -- Memory: 12288M 00:00:54.773 ==> default: -- Memory Backing: hugepages: 00:00:54.773 ==> default: -- Management MAC: 00:00:54.773 ==> default: -- Loader: 00:00:54.773 ==> default: -- Nvram: 00:00:54.773 ==> default: -- Base box: spdk/ubuntu2004 00:00:54.773 ==> default: -- Storage pool: default 00:00:54.773 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1712646987-2220_default_1720869149_dd64f1334015f9cae529.img (20G) 00:00:54.773 ==> default: -- Volume Cache: default 00:00:54.773 ==> default: -- Kernel: 00:00:54.773 ==> default: -- Initrd: 00:00:54.773 ==> default: -- Graphics Type: vnc 00:00:54.773 ==> default: -- Graphics Port: -1 00:00:54.773 ==> default: -- Graphics IP: 127.0.0.1 00:00:54.773 ==> default: -- Graphics Password: Not defined 00:00:54.773 ==> default: -- Video Type: cirrus 00:00:54.773 ==> default: -- Video VRAM: 9216 00:00:54.773 ==> default: -- Sound Type: 00:00:54.773 ==> default: -- Keymap: en-us 00:00:54.773 ==> default: -- TPM Path: 00:00:54.773 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:54.773 ==> default: -- Command line args: 00:00:54.773 ==> default: -> value=-device, 00:00:54.773 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:54.773 ==> default: -> value=-drive, 00:00:54.773 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:00:54.773 ==> default: -> value=-device, 00:00:54.773 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.773 ==> default: Creating shared folders metadata... 00:00:54.773 ==> default: Starting domain. 00:00:56.677 ==> default: Waiting for domain to get an IP address... 00:01:06.646 ==> default: Waiting for SSH to become available... 
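[Editor's note] The "==> default: -> value=" lines above are the extra QEMU arguments that vagrant-libvirt appends for the emulated NVMe disk. As a rough sketch only, they would assemble into an invocation along the following lines; the -machine/-smp/-m flags here are assumptions for illustration and are not taken from this log, while the NVMe device, drive, and namespace arguments are copied verbatim from the log above:

  # Sketch only: -machine/-smp/-m are assumed; NVMe args copied from the log.
  /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -machine accel=kvm -smp 10 -m 12288 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096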
00:01:08.020 ==> default: Configuring and enabling network interfaces... 00:01:09.919 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:16.479 ==> default: Mounting SSHFS shared folder... 00:01:16.479 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output 00:01:16.480 ==> default: Checking Mount.. 00:01:19.013 ==> default: Checking Mount.. 00:01:19.013 ==> default: Folder Successfully Mounted! 00:01:19.013 ==> default: Running provisioner: file... 00:01:19.013 default: ~/.gitconfig => .gitconfig 00:01:19.272 00:01:19.272 SUCCESS! 00:01:19.272 00:01:19.272 cd to /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt and type "vagrant ssh" to use. 00:01:19.272 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:19.272 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt" to destroy all trace of vm. 00:01:19.272 00:01:19.280 [Pipeline] } 00:01:19.295 [Pipeline] // stage 00:01:19.302 [Pipeline] dir 00:01:19.302 Running in /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt 00:01:19.304 [Pipeline] { 00:01:19.314 [Pipeline] catchError 00:01:19.315 [Pipeline] { 00:01:19.328 [Pipeline] sh 00:01:19.636 + vagrant ssh-config --host vagrant 00:01:19.636 + sed -ne /^Host/,$p 00:01:19.636 + tee ssh_conf 00:01:22.931 Host vagrant 00:01:22.931 HostName 192.168.121.10 00:01:22.931 User vagrant 00:01:22.931 Port 22 00:01:22.931 UserKnownHostsFile /dev/null 00:01:22.931 StrictHostKeyChecking no 00:01:22.931 PasswordAuthentication no 00:01:22.931 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1712646987-2220/libvirt/ubuntu2004 00:01:22.931 IdentitiesOnly yes 00:01:22.931 LogLevel FATAL 00:01:22.931 ForwardAgent yes 00:01:22.931 ForwardX11 yes 00:01:22.931 00:01:22.944 [Pipeline] withEnv 00:01:22.947 [Pipeline] { 00:01:22.965 [Pipeline] sh 00:01:23.247 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:23.247 source /etc/os-release 00:01:23.247 [[ -e /image.version ]] && img=$(< /image.version) 00:01:23.247 # Minimal, systemd-like check. 00:01:23.247 if [[ -e /.dockerenv ]]; then 00:01:23.247 # Clear garbage from the node's name: 00:01:23.247 # agt-er_autotest_547-896 -> autotest_547-896 00:01:23.247 # $HOSTNAME is the actual container id 00:01:23.247 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:23.247 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:23.247 # We can assume this is a mount from a host where container is running, 00:01:23.247 # so fetch its hostname to easily identify the target swarm worker. 
00:01:23.247 container="$(< /etc/hostname) ($agent)" 00:01:23.247 else 00:01:23.247 # Fallback 00:01:23.247 container=$agent 00:01:23.247 fi 00:01:23.247 fi 00:01:23.247 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:23.247 00:01:23.825 [Pipeline] } 00:01:23.847 [Pipeline] // withEnv 00:01:23.856 [Pipeline] setCustomBuildProperty 00:01:23.867 [Pipeline] stage 00:01:23.869 [Pipeline] { (Tests) 00:01:23.884 [Pipeline] sh 00:01:24.162 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:24.739 [Pipeline] sh 00:01:25.012 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:25.590 [Pipeline] timeout 00:01:25.590 Timeout set to expire in 1 hr 30 min 00:01:25.592 [Pipeline] { 00:01:25.606 [Pipeline] sh 00:01:25.959 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:26.893 HEAD is now at 719d03c6a sock/uring: only register net impl if supported 00:01:26.907 [Pipeline] sh 00:01:27.187 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:27.753 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:27.768 [Pipeline] sh 00:01:28.049 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:28.628 [Pipeline] sh 00:01:28.904 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu20-vg-autotest ./autoruner.sh spdk_repo 00:01:29.471 ++ readlink -f spdk_repo 00:01:29.471 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:29.471 + [[ -n /home/vagrant/spdk_repo ]] 00:01:29.471 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:29.471 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:29.471 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:29.471 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:29.471 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:29.471 + [[ ubuntu20-vg-autotest == pkgdep-* ]] 00:01:29.471 + cd /home/vagrant/spdk_repo 00:01:29.471 + source /etc/os-release 00:01:29.471 ++ NAME=Ubuntu 00:01:29.471 ++ VERSION='20.04.6 LTS (Focal Fossa)' 00:01:29.471 ++ ID=ubuntu 00:01:29.471 ++ ID_LIKE=debian 00:01:29.471 ++ PRETTY_NAME='Ubuntu 20.04.6 LTS' 00:01:29.471 ++ VERSION_ID=20.04 00:01:29.471 ++ HOME_URL=https://www.ubuntu.com/ 00:01:29.471 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:29.471 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:29.471 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:29.471 ++ VERSION_CODENAME=focal 00:01:29.471 ++ UBUNTU_CODENAME=focal 00:01:29.471 + uname -a 00:01:29.471 Linux ubuntu2004-cloud-1712646987-2220 5.4.0-176-generic #196-Ubuntu SMP Fri Mar 22 16:46:39 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:29.471 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:29.471 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:29.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:01:29.730 Hugepages 00:01:29.730 node hugesize free / total 00:01:29.730 node0 1048576kB 0 / 0 00:01:29.730 node0 2048kB 0 / 0 00:01:29.730 00:01:29.730 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:29.730 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:29.730 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:29.730 + rm -f /tmp/spdk-ld-path 00:01:29.730 + source autorun-spdk.conf 00:01:29.730 ++ SPDK_TEST_UNITTEST=1 00:01:29.730 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.730 ++ SPDK_TEST_NVME=1 00:01:29.730 ++ SPDK_TEST_BLOCKDEV=1 00:01:29.730 ++ SPDK_RUN_ASAN=1 00:01:29.730 ++ SPDK_RUN_UBSAN=1 00:01:29.730 ++ SPDK_TEST_RAID5=1 00:01:29.730 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.730 ++ RUN_NIGHTLY=1 00:01:29.730 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:29.730 + [[ -n '' ]] 00:01:29.730 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:29.730 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:29.730 + for M in /var/spdk/build-*-manifest.txt 00:01:29.730 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:29.730 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.730 + for M in /var/spdk/build-*-manifest.txt 00:01:29.730 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:29.730 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:29.730 ++ uname 00:01:29.730 + [[ Linux == \L\i\n\u\x ]] 00:01:29.730 + sudo dmesg -T 00:01:29.730 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:29.988 + sudo dmesg --clear 00:01:29.988 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:01:29.988 + dmesg_pid=2384 00:01:29.988 + [[ Ubuntu == FreeBSD ]] 00:01:29.988 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.988 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:29.988 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:29.988 + sudo dmesg -Tw 00:01:29.988 + [[ -x /usr/src/fio-static/fio ]] 00:01:29.988 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:29.988 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:29.988 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:29.988 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:29.988 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:29.988 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:29.988 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:29.988 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:29.988 Test configuration: 00:01:29.988 SPDK_TEST_UNITTEST=1 00:01:29.988 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:29.988 SPDK_TEST_NVME=1 00:01:29.988 SPDK_TEST_BLOCKDEV=1 00:01:29.988 SPDK_RUN_ASAN=1 00:01:29.988 SPDK_RUN_UBSAN=1 00:01:29.988 SPDK_TEST_RAID5=1 00:01:29.988 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:29.988 RUN_NIGHTLY=1 11:13:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:29.988 11:13:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:29.988 11:13:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:29.988 11:13:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:29.988 11:13:04 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:29.989 11:13:04 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:29.989 11:13:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:29.989 11:13:04 -- paths/export.sh@5 -- $ export PATH 00:01:29.989 11:13:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:29.989 11:13:04 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:29.989 11:13:04 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:29.989 11:13:04 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720869184.XXXXXX 00:01:29.989 11:13:04 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720869184.jEb6cM 00:01:29.989 11:13:04 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:29.989 11:13:04 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:29.989 11:13:04 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:29.989 11:13:04 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:29.989 11:13:04 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:29.989 11:13:04 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:29.989 11:13:04 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:29.989 11:13:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:29.989 11:13:04 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:01:29.989 11:13:04 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:29.989 11:13:04 -- pm/common@17 -- $ local monitor 00:01:29.989 11:13:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.989 11:13:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:29.989 11:13:04 -- pm/common@25 -- $ sleep 1 00:01:29.989 11:13:04 -- pm/common@21 -- $ date +%s 00:01:29.989 11:13:04 -- pm/common@21 -- $ date +%s 00:01:29.989 11:13:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720869184 00:01:29.989 11:13:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720869184 00:01:29.989 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720869184_collect-vmstat.pm.log 00:01:29.989 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720869184_collect-cpu-load.pm.log 00:01:30.924 11:13:05 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:30.924 11:13:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:30.924 11:13:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:30.924 11:13:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:30.924 11:13:05 -- spdk/autobuild.sh@16 -- $ date -u 00:01:30.924 Sat Jul 13 11:13:05 UTC 2024 00:01:30.924 11:13:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:30.924 v24.09-pre-202-g719d03c6a 00:01:30.924 11:13:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:30.924 11:13:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:30.924 11:13:05 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:30.924 11:13:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:30.924 11:13:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:30.924 ************************************ 00:01:30.924 START TEST asan 00:01:30.924 ************************************ 00:01:30.924 using asan 00:01:30.924 11:13:05 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:30.924 00:01:30.924 real 0m0.000s 00:01:30.924 user 0m0.000s 00:01:30.924 sys 0m0.000s 00:01:30.924 ************************************ 00:01:30.924 END TEST asan 00:01:30.924 11:13:05 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:30.924 11:13:05 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:30.924 ************************************ 00:01:31.189 11:13:05 -- common/autotest_common.sh@1142 -- $ return 0 00:01:31.190 11:13:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:31.190 11:13:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:31.190 11:13:05 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:31.190 11:13:05 
-- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:31.190 11:13:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.190 ************************************ 00:01:31.190 START TEST ubsan 00:01:31.190 ************************************ 00:01:31.190 using ubsan 00:01:31.190 11:13:05 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:31.190 00:01:31.190 real 0m0.000s 00:01:31.190 user 0m0.000s 00:01:31.190 sys 0m0.000s 00:01:31.190 ************************************ 00:01:31.190 END TEST ubsan 00:01:31.190 11:13:05 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:31.190 11:13:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:31.190 ************************************ 00:01:31.190 11:13:05 -- common/autotest_common.sh@1142 -- $ return 0 00:01:31.190 11:13:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:31.190 11:13:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:31.190 11:13:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:31.190 11:13:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:31.190 11:13:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:31.190 11:13:05 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:31.190 11:13:05 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:31.190 11:13:05 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build 00:01:31.190 11:13:05 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:31.190 11:13:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:31.190 11:13:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.190 ************************************ 00:01:31.190 START TEST unittest_build 00:01:31.190 ************************************ 00:01:31.190 11:13:05 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:01:31.190 11:13:05 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared 00:01:31.190 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:31.190 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:31.448 Using 'verbs' RDMA provider 00:01:47.256 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:59.454 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:59.454 Creating mk/config.mk...done. 00:01:59.454 Creating mk/cc.flags.mk...done. 00:01:59.454 Type 'make' to build. 00:01:59.454 11:13:33 unittest_build -- common/autobuild_common.sh@412 -- $ make -j10 00:01:59.454 make[1]: Nothing to be done for 'all'. 
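[Editor's note] For reference, the unittest build configured above can be reproduced by hand from the checked-out SPDK tree; the directory and every flag below are copied verbatim from the configure and make lines in this log:

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan \
      --enable-asan --enable-coverage --with-raid5f --without-shared
  make -j10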
00:02:01.982 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
00:02:04.309 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
[... the identical reg_sizes.asm:208 and reg_sizes.asm:358 warnings repeat for each ISA-L object assembled, through 00:02:11.023 ...]
00:02:11.023 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:11.023 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:11.023 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:11.281 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:11.281 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:11.281 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:11.847 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:11.847 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:11.847 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:11.847 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:12.105 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:12.364 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:12.623 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:12.623 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:12.881 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:13.149 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:13.149 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:13.149 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:13.412 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:13.412 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:13.412 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:13.412 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:13.671 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:13.671 ./include//reg_sizes.asm:358: warning: Unknown section attribute 
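The two reg_sizes.asm warnings kept above are repeated for every ISA-L assembly object in this build; they come from NASM itself, not from SPDK or DPDK sources. Going by the message text, the NASM shipped on this builder does not recognize the 'note' attribute on the .note.gnu.property section declaration, ignores it, and carries on, which is why the build proceeds normally. A minimal reproduction sketch, assuming the warning is triggered purely by that section attribute (the file name repro.asm and the exact attribute list are illustrative, not taken from the log):

    # Declare a .note.gnu.property section with the 'note' attribute, as the message describes.
    printf 'section .note.gnu.property note alloc noexec align=8\n' > repro.asm
    # Older NASM releases warn "Unknown section attribute 'note' ignored" here;
    # newer releases are expected to accept the attribute silently.
    nasm -f elf64 repro.asm -o repro.o
    nasm -v    # show which NASM version this builder is actually using

If the warning is considered noise, upgrading NASM on the build image (or disabling the 'other' warning class, which the bracketed [-w+other] tag refers to, via nasm -w-other) should silence it without changing the build output.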
00:02:14.706 The Meson build system 00:02:14.706 Version: 1.4.0 00:02:14.706 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:14.706 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:14.706 Build type: native build 00:02:14.706 Program cat found: YES (/usr/bin/cat) 00:02:14.706 Project name: DPDK 00:02:14.706 Project version: 24.03.0 00:02:14.706 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:02:14.706 C linker for the host machine: cc ld.bfd 2.34 00:02:14.706 Host machine cpu family: x86_64 00:02:14.706 Host machine cpu: x86_64 00:02:14.706 Message: ## Building in Developer Mode ## 00:02:14.706 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.706 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:14.706 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.706 Program python3 found: YES (/usr/bin/python3) 00:02:14.706 Program cat found: YES
(/usr/bin/cat) 00:02:14.706 Compiler for C supports arguments -march=native: YES 00:02:14.706 Checking for size of "void *" : 8 00:02:14.706 Checking for size of "void *" : 8 (cached) 00:02:14.706 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:14.706 Library m found: YES 00:02:14.706 Library numa found: YES 00:02:14.706 Has header "numaif.h" : YES 00:02:14.706 Library fdt found: NO 00:02:14.706 Library execinfo found: NO 00:02:14.706 Has header "execinfo.h" : YES 00:02:14.706 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:02:14.706 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.706 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.706 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.706 Run-time dependency openssl found: YES 1.1.1f 00:02:14.706 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:14.706 Library pcap found: NO 00:02:14.706 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.706 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.706 Compiler for C supports arguments -Wformat: YES 00:02:14.706 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:14.706 Compiler for C supports arguments -Wformat-security: YES 00:02:14.706 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.706 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.706 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.706 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.706 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.706 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.706 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.706 Compiler for C supports arguments -Wundef: YES 00:02:14.706 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.706 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.706 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.706 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.706 Program objdump found: YES (/usr/bin/objdump) 00:02:14.706 Compiler for C supports arguments -mavx512f: YES 00:02:14.706 Checking if "AVX512 checking" compiles: YES 00:02:14.706 Fetching value of define "__SSE4_2__" : 1 00:02:14.706 Fetching value of define "__AES__" : 1 00:02:14.706 Fetching value of define "__AVX__" : 1 00:02:14.706 Fetching value of define "__AVX2__" : 1 00:02:14.706 Fetching value of define "__AVX512BW__" : (undefined) 00:02:14.706 Fetching value of define "__AVX512CD__" : (undefined) 00:02:14.706 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:14.706 Fetching value of define "__AVX512F__" : (undefined) 00:02:14.706 Fetching value of define "__AVX512VL__" : (undefined) 00:02:14.706 Fetching value of define "__PCLMUL__" : 1 00:02:14.706 Fetching value of define "__RDRND__" : 1 00:02:14.706 Fetching value of define "__RDSEED__" : 1 00:02:14.706 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.706 Fetching value of define "__znver1__" : (undefined) 00:02:14.706 Fetching value of define "__znver2__" : (undefined) 00:02:14.706 Fetching value of define "__znver3__" : (undefined) 00:02:14.706 Fetching value of define "__znver4__" : (undefined) 00:02:14.706 Library asan found: YES 00:02:14.706 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.706 Message: lib/log: Defining dependency "log" 00:02:14.706 Message: lib/kvargs: 
Defining dependency "kvargs" 00:02:14.706 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.706 Library rt found: YES 00:02:14.706 Checking for function "getentropy" : NO 00:02:14.706 Message: lib/eal: Defining dependency "eal" 00:02:14.706 Message: lib/ring: Defining dependency "ring" 00:02:14.706 Message: lib/rcu: Defining dependency "rcu" 00:02:14.706 Message: lib/mempool: Defining dependency "mempool" 00:02:14.706 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.706 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.706 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.706 Compiler for C supports arguments -mpclmul: YES 00:02:14.706 Compiler for C supports arguments -maes: YES 00:02:14.706 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.706 Compiler for C supports arguments -mavx512bw: YES 00:02:14.706 Compiler for C supports arguments -mavx512dq: YES 00:02:14.706 Compiler for C supports arguments -mavx512vl: YES 00:02:14.706 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.706 Compiler for C supports arguments -mavx2: YES 00:02:14.706 Compiler for C supports arguments -mavx: YES 00:02:14.706 Message: lib/net: Defining dependency "net" 00:02:14.706 Message: lib/meter: Defining dependency "meter" 00:02:14.706 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.706 Message: lib/pci: Defining dependency "pci" 00:02:14.706 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.706 Message: lib/hash: Defining dependency "hash" 00:02:14.706 Message: lib/timer: Defining dependency "timer" 00:02:14.706 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.706 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.706 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.706 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.706 Message: lib/power: Defining dependency "power" 00:02:14.706 Message: lib/reorder: Defining dependency "reorder" 00:02:14.706 Message: lib/security: Defining dependency "security" 00:02:14.706 Has header "linux/userfaultfd.h" : YES 00:02:14.706 Has header "linux/vduse.h" : NO 00:02:14.706 Message: lib/vhost: Defining dependency "vhost" 00:02:14.706 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:14.706 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.706 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.706 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.706 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:14.706 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:14.706 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:14.706 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:14.706 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:14.706 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:14.706 Program doxygen found: YES (/usr/bin/doxygen) 00:02:14.706 Configuring doxy-api-html.conf using configuration 00:02:14.706 Configuring doxy-api-man.conf using configuration 00:02:14.706 Program mandb found: YES (/usr/bin/mandb) 00:02:14.706 Program sphinx-build found: NO 00:02:14.706 Configuring rte_build_config.h using configuration 00:02:14.706 Message: 00:02:14.706 ================= 00:02:14.706 Applications Enabled 00:02:14.706 ================= 00:02:14.706 00:02:14.706 apps: 
00:02:14.706 00:02:14.706 00:02:14.706 Message: 00:02:14.706 ================= 00:02:14.706 Libraries Enabled 00:02:14.706 ================= 00:02:14.706 00:02:14.706 libs: 00:02:14.706 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:14.706 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:14.706 cryptodev, dmadev, power, reorder, security, vhost, 00:02:14.706 00:02:14.706 Message: 00:02:14.706 =============== 00:02:14.706 Drivers Enabled 00:02:14.706 =============== 00:02:14.706 00:02:14.706 common: 00:02:14.706 00:02:14.706 bus: 00:02:14.706 pci, vdev, 00:02:14.706 mempool: 00:02:14.706 ring, 00:02:14.706 dma: 00:02:14.706 00:02:14.706 net: 00:02:14.706 00:02:14.706 crypto: 00:02:14.706 00:02:14.706 compress: 00:02:14.706 00:02:14.706 vdpa: 00:02:14.706 00:02:14.706 00:02:14.706 Message: 00:02:14.706 ================= 00:02:14.706 Content Skipped 00:02:14.706 ================= 00:02:14.706 00:02:14.706 apps: 00:02:14.706 dumpcap: explicitly disabled via build config 00:02:14.706 graph: explicitly disabled via build config 00:02:14.706 pdump: explicitly disabled via build config 00:02:14.706 proc-info: explicitly disabled via build config 00:02:14.706 test-acl: explicitly disabled via build config 00:02:14.706 test-bbdev: explicitly disabled via build config 00:02:14.706 test-cmdline: explicitly disabled via build config 00:02:14.706 test-compress-perf: explicitly disabled via build config 00:02:14.706 test-crypto-perf: explicitly disabled via build config 00:02:14.706 test-dma-perf: explicitly disabled via build config 00:02:14.706 test-eventdev: explicitly disabled via build config 00:02:14.706 test-fib: explicitly disabled via build config 00:02:14.707 test-flow-perf: explicitly disabled via build config 00:02:14.707 test-gpudev: explicitly disabled via build config 00:02:14.707 test-mldev: explicitly disabled via build config 00:02:14.707 test-pipeline: explicitly disabled via build config 00:02:14.707 test-pmd: explicitly disabled via build config 00:02:14.707 test-regex: explicitly disabled via build config 00:02:14.707 test-sad: explicitly disabled via build config 00:02:14.707 test-security-perf: explicitly disabled via build config 00:02:14.707 00:02:14.707 libs: 00:02:14.707 argparse: explicitly disabled via build config 00:02:14.707 metrics: explicitly disabled via build config 00:02:14.707 acl: explicitly disabled via build config 00:02:14.707 bbdev: explicitly disabled via build config 00:02:14.707 bitratestats: explicitly disabled via build config 00:02:14.707 bpf: explicitly disabled via build config 00:02:14.707 cfgfile: explicitly disabled via build config 00:02:14.707 distributor: explicitly disabled via build config 00:02:14.707 efd: explicitly disabled via build config 00:02:14.707 eventdev: explicitly disabled via build config 00:02:14.707 dispatcher: explicitly disabled via build config 00:02:14.707 gpudev: explicitly disabled via build config 00:02:14.707 gro: explicitly disabled via build config 00:02:14.707 gso: explicitly disabled via build config 00:02:14.707 ip_frag: explicitly disabled via build config 00:02:14.707 jobstats: explicitly disabled via build config 00:02:14.707 latencystats: explicitly disabled via build config 00:02:14.707 lpm: explicitly disabled via build config 00:02:14.707 member: explicitly disabled via build config 00:02:14.707 pcapng: explicitly disabled via build config 00:02:14.707 rawdev: explicitly disabled via build config 00:02:14.707 regexdev: explicitly disabled via build config 00:02:14.707 mldev: 
explicitly disabled via build config 00:02:14.707 rib: explicitly disabled via build config 00:02:14.707 sched: explicitly disabled via build config 00:02:14.707 stack: explicitly disabled via build config 00:02:14.707 ipsec: explicitly disabled via build config 00:02:14.707 pdcp: explicitly disabled via build config 00:02:14.707 fib: explicitly disabled via build config 00:02:14.707 port: explicitly disabled via build config 00:02:14.707 pdump: explicitly disabled via build config 00:02:14.707 table: explicitly disabled via build config 00:02:14.707 pipeline: explicitly disabled via build config 00:02:14.707 graph: explicitly disabled via build config 00:02:14.707 node: explicitly disabled via build config 00:02:14.707 00:02:14.707 drivers: 00:02:14.707 common/cpt: not in enabled drivers build config 00:02:14.707 common/dpaax: not in enabled drivers build config 00:02:14.707 common/iavf: not in enabled drivers build config 00:02:14.707 common/idpf: not in enabled drivers build config 00:02:14.707 common/ionic: not in enabled drivers build config 00:02:14.707 common/mvep: not in enabled drivers build config 00:02:14.707 common/octeontx: not in enabled drivers build config 00:02:14.707 bus/auxiliary: not in enabled drivers build config 00:02:14.707 bus/cdx: not in enabled drivers build config 00:02:14.707 bus/dpaa: not in enabled drivers build config 00:02:14.707 bus/fslmc: not in enabled drivers build config 00:02:14.707 bus/ifpga: not in enabled drivers build config 00:02:14.707 bus/platform: not in enabled drivers build config 00:02:14.707 bus/uacce: not in enabled drivers build config 00:02:14.707 bus/vmbus: not in enabled drivers build config 00:02:14.707 common/cnxk: not in enabled drivers build config 00:02:14.707 common/mlx5: not in enabled drivers build config 00:02:14.707 common/nfp: not in enabled drivers build config 00:02:14.707 common/nitrox: not in enabled drivers build config 00:02:14.707 common/qat: not in enabled drivers build config 00:02:14.707 common/sfc_efx: not in enabled drivers build config 00:02:14.707 mempool/bucket: not in enabled drivers build config 00:02:14.707 mempool/cnxk: not in enabled drivers build config 00:02:14.707 mempool/dpaa: not in enabled drivers build config 00:02:14.707 mempool/dpaa2: not in enabled drivers build config 00:02:14.707 mempool/octeontx: not in enabled drivers build config 00:02:14.707 mempool/stack: not in enabled drivers build config 00:02:14.707 dma/cnxk: not in enabled drivers build config 00:02:14.707 dma/dpaa: not in enabled drivers build config 00:02:14.707 dma/dpaa2: not in enabled drivers build config 00:02:14.707 dma/hisilicon: not in enabled drivers build config 00:02:14.707 dma/idxd: not in enabled drivers build config 00:02:14.707 dma/ioat: not in enabled drivers build config 00:02:14.707 dma/skeleton: not in enabled drivers build config 00:02:14.707 net/af_packet: not in enabled drivers build config 00:02:14.707 net/af_xdp: not in enabled drivers build config 00:02:14.707 net/ark: not in enabled drivers build config 00:02:14.707 net/atlantic: not in enabled drivers build config 00:02:14.707 net/avp: not in enabled drivers build config 00:02:14.707 net/axgbe: not in enabled drivers build config 00:02:14.707 net/bnx2x: not in enabled drivers build config 00:02:14.707 net/bnxt: not in enabled drivers build config 00:02:14.707 net/bonding: not in enabled drivers build config 00:02:14.707 net/cnxk: not in enabled drivers build config 00:02:14.707 net/cpfl: not in enabled drivers build config 00:02:14.707 net/cxgbe: not in 
enabled drivers build config 00:02:14.707 net/dpaa: not in enabled drivers build config 00:02:14.707 net/dpaa2: not in enabled drivers build config 00:02:14.707 net/e1000: not in enabled drivers build config 00:02:14.707 net/ena: not in enabled drivers build config 00:02:14.707 net/enetc: not in enabled drivers build config 00:02:14.707 net/enetfec: not in enabled drivers build config 00:02:14.707 net/enic: not in enabled drivers build config 00:02:14.707 net/failsafe: not in enabled drivers build config 00:02:14.707 net/fm10k: not in enabled drivers build config 00:02:14.707 net/gve: not in enabled drivers build config 00:02:14.707 net/hinic: not in enabled drivers build config 00:02:14.707 net/hns3: not in enabled drivers build config 00:02:14.707 net/i40e: not in enabled drivers build config 00:02:14.707 net/iavf: not in enabled drivers build config 00:02:14.707 net/ice: not in enabled drivers build config 00:02:14.707 net/idpf: not in enabled drivers build config 00:02:14.707 net/igc: not in enabled drivers build config 00:02:14.707 net/ionic: not in enabled drivers build config 00:02:14.707 net/ipn3ke: not in enabled drivers build config 00:02:14.707 net/ixgbe: not in enabled drivers build config 00:02:14.707 net/mana: not in enabled drivers build config 00:02:14.707 net/memif: not in enabled drivers build config 00:02:14.707 net/mlx4: not in enabled drivers build config 00:02:14.707 net/mlx5: not in enabled drivers build config 00:02:14.707 net/mvneta: not in enabled drivers build config 00:02:14.707 net/mvpp2: not in enabled drivers build config 00:02:14.707 net/netvsc: not in enabled drivers build config 00:02:14.707 net/nfb: not in enabled drivers build config 00:02:14.707 net/nfp: not in enabled drivers build config 00:02:14.707 net/ngbe: not in enabled drivers build config 00:02:14.707 net/null: not in enabled drivers build config 00:02:14.707 net/octeontx: not in enabled drivers build config 00:02:14.707 net/octeon_ep: not in enabled drivers build config 00:02:14.707 net/pcap: not in enabled drivers build config 00:02:14.707 net/pfe: not in enabled drivers build config 00:02:14.707 net/qede: not in enabled drivers build config 00:02:14.707 net/ring: not in enabled drivers build config 00:02:14.707 net/sfc: not in enabled drivers build config 00:02:14.707 net/softnic: not in enabled drivers build config 00:02:14.707 net/tap: not in enabled drivers build config 00:02:14.707 net/thunderx: not in enabled drivers build config 00:02:14.707 net/txgbe: not in enabled drivers build config 00:02:14.707 net/vdev_netvsc: not in enabled drivers build config 00:02:14.707 net/vhost: not in enabled drivers build config 00:02:14.707 net/virtio: not in enabled drivers build config 00:02:14.707 net/vmxnet3: not in enabled drivers build config 00:02:14.707 raw/*: missing internal dependency, "rawdev" 00:02:14.707 crypto/armv8: not in enabled drivers build config 00:02:14.707 crypto/bcmfs: not in enabled drivers build config 00:02:14.707 crypto/caam_jr: not in enabled drivers build config 00:02:14.707 crypto/ccp: not in enabled drivers build config 00:02:14.707 crypto/cnxk: not in enabled drivers build config 00:02:14.707 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.707 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.707 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.707 crypto/mlx5: not in enabled drivers build config 00:02:14.707 crypto/mvsam: not in enabled drivers build config 00:02:14.707 crypto/nitrox: not in enabled drivers build config 
00:02:14.707 crypto/null: not in enabled drivers build config 00:02:14.707 crypto/octeontx: not in enabled drivers build config 00:02:14.707 crypto/openssl: not in enabled drivers build config 00:02:14.707 crypto/scheduler: not in enabled drivers build config 00:02:14.707 crypto/uadk: not in enabled drivers build config 00:02:14.707 crypto/virtio: not in enabled drivers build config 00:02:14.707 compress/isal: not in enabled drivers build config 00:02:14.707 compress/mlx5: not in enabled drivers build config 00:02:14.707 compress/nitrox: not in enabled drivers build config 00:02:14.707 compress/octeontx: not in enabled drivers build config 00:02:14.707 compress/zlib: not in enabled drivers build config 00:02:14.707 regex/*: missing internal dependency, "regexdev" 00:02:14.707 ml/*: missing internal dependency, "mldev" 00:02:14.707 vdpa/ifc: not in enabled drivers build config 00:02:14.707 vdpa/mlx5: not in enabled drivers build config 00:02:14.707 vdpa/nfp: not in enabled drivers build config 00:02:14.707 vdpa/sfc: not in enabled drivers build config 00:02:14.707 event/*: missing internal dependency, "eventdev" 00:02:14.707 baseband/*: missing internal dependency, "bbdev" 00:02:14.707 gpu/*: missing internal dependency, "gpudev" 00:02:14.707 00:02:14.707 00:02:14.707 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:14.965 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:14.965 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:14.965 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:14.965 Build targets in project: 85 00:02:14.965 00:02:14.965 DPDK 24.03.0 00:02:14.965 00:02:14.965 User defined options 00:02:14.965 buildtype : debug 00:02:14.965 default_library : static 00:02:14.965 libdir : lib 00:02:14.965 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:14.965 b_sanitize : address 00:02:14.965 c_args : -fPIC -Werror 00:02:14.965 c_link_args : 00:02:14.965 cpu_instruction_set: native 00:02:14.966 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:02:14.966 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,argparse,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:02:14.966 enable_docs : false 00:02:14.966 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:14.966 enable_kmods : false 00:02:14.966 max_lcores : 128 00:02:14.966 tests : false 00:02:14.966 00:02:14.966 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:15.223 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:15.223 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:15.223 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section 
`.note.gnu.property' [-w+other] 00:02:15.223 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:15.480 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:15.480 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:15.480 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:15.480 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:15.480 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:15.737 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:15.737 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:15.737 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:15.737 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:15.737 [5/267] Linking static target lib/librte_kvargs.a 00:02:15.737 [6/267] Linking static target lib/librte_log.a 00:02:15.995 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:15.995 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:15.995 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.995 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.995 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.995 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:15.995 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.995 [12/267] Linking static target lib/librte_telemetry.a 00:02:15.995 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:15.995 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:15.995 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.995 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.995 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.995 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:16.253 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:16.253 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:16.253 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.253 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.253 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.253 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:16.253 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of 
section `.note.gnu.property' [-w+other] 00:02:16.253 [22/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.253 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.253 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.511 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:16.511 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:16.511 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:16.511 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.768 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.768 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.768 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.768 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.768 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.768 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:16.768 [35/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.768 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:16.768 [37/267] Linking target lib/librte_log.so.24.1 00:02:16.768 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.768 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.768 [40/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.768 [41/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:17.026 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:17.026 [43/267] Linking target lib/librte_kvargs.so.24.1 00:02:17.026 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:17.026 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:17.026 [46/267] Linking target lib/librte_telemetry.so.24.1 00:02:17.026 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:17.026 [48/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:17.026 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:17.026 [50/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:17.026 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:17.026 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:17.284 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:17.284 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:17.284 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:17.284 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:17.284 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:17.284 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:17.284 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:17.284 [60/267] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:17.284 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:17.284 [62/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:17.284 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:17.541 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:17.541 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:17.541 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:17.541 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:17.541 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:17.541 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:17.799 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:17.799 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:17.799 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:17.799 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:17.799 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:17.799 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:17.799 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:17.799 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:17.799 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:18.056 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:18.056 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:18.056 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:18.056 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:18.056 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:18.056 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:18.056 [85/267] Linking static target lib/librte_ring.a 00:02:18.056 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:18.056 [87/267] Linking static target lib/librte_eal.a 00:02:18.314 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:18.314 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:18.314 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:18.314 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:18.314 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:18.314 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:18.314 [94/267] Linking static target lib/librte_mempool.a 00:02:18.314 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:18.314 [96/267] Linking static target lib/librte_rcu.a 00:02:18.571 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.571 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:18.571 [99/267] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:18.571 [100/267] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:18.571 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:18.829 [102/267] Generating 
lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.829 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.829 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:18.829 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.829 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.829 [107/267] Linking static target lib/librte_net.a 00:02:18.829 [108/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.829 [109/267] Linking static target lib/librte_meter.a 00:02:19.087 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:19.087 [111/267] Linking static target lib/librte_mbuf.a 00:02:19.087 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:19.087 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:19.087 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:19.087 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.087 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.087 [117/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.344 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.600 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.600 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:19.600 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:19.600 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.600 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.858 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.858 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.858 [126/267] Linking static target lib/librte_pci.a 00:02:19.858 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.115 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.115 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.115 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:20.115 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:20.115 [132/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.115 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:20.115 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:20.115 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:20.115 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.115 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:20.115 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.115 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.115 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.115 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.115 [142/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.115 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.373 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:20.373 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.373 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.373 [147/267] Linking static target lib/librte_cmdline.a 00:02:20.373 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:20.631 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.631 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.631 [151/267] Linking static target lib/librte_timer.a 00:02:20.631 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.631 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.631 [154/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.889 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.889 [156/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.146 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.146 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:21.147 [159/267] Linking static target lib/librte_compressdev.a 00:02:21.147 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.147 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.147 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.147 [163/267] Linking static target lib/librte_hash.a 00:02:21.403 [164/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.403 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.403 [166/267] Linking static target lib/librte_dmadev.a 00:02:21.403 [167/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.403 [168/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.403 [169/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.403 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.403 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.403 [172/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:21.403 [173/267] Linking static target lib/librte_ethdev.a 00:02:21.661 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.661 [175/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:21.661 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:21.918 [177/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:21.918 [178/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:21.918 [179/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.918 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:21.918 [181/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:21.918 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.177 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.177 [184/267] Linking static target lib/librte_power.a 00:02:22.177 [185/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.177 [186/267] Linking static target lib/librte_cryptodev.a 00:02:22.177 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.435 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.435 [189/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.435 [190/267] Linking static target lib/librte_reorder.a 00:02:22.435 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.435 [192/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.435 [193/267] Linking static target lib/librte_security.a 00:02:22.693 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.693 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:22.951 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.951 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.951 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:23.209 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.209 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:23.209 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.209 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:23.472 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:23.472 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:23.472 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:23.740 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:23.740 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:23.740 [208/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:23.740 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:23.740 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.740 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:23.740 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:23.740 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.740 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:23.740 [215/267] Linking static target drivers/librte_bus_vdev.a 00:02:23.740 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.740 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.740 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:23.998 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:23.998 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:23.998 
[221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.257 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.257 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.257 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.257 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:24.257 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.159 [227/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.159 [228/267] Linking target lib/librte_eal.so.24.1 00:02:26.159 [229/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:26.159 [230/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:26.159 [231/267] Linking target lib/librte_pci.so.24.1 00:02:26.159 [232/267] Linking target lib/librte_ring.so.24.1 00:02:26.159 [233/267] Linking target lib/librte_meter.so.24.1 00:02:26.159 [234/267] Linking target lib/librte_timer.so.24.1 00:02:26.159 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:26.159 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:26.159 [237/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:26.159 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:26.159 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:26.159 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:26.159 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:26.159 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:26.159 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:26.159 [244/267] Linking target lib/librte_rcu.so.24.1 00:02:26.417 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:26.417 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.417 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:26.417 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.418 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:26.676 [250/267] Linking target lib/librte_net.so.24.1 00:02:26.676 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:26.676 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:26.676 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:26.676 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.676 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.676 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:26.676 [257/267] Linking target lib/librte_hash.so.24.1 00:02:26.676 [258/267] Linking target lib/librte_security.so.24.1 00:02:26.933 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:27.869 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.869 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:27.869 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 
00:02:28.128 [263/267] Linking target lib/librte_power.so.24.1 00:02:30.026 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.026 [265/267] Linking static target lib/librte_vhost.a 00:02:31.402 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.402 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:31.402 INFO: autodetecting backend as ninja 00:02:31.402 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:32.778 CC lib/log/log.o 00:02:32.778 CC lib/log/log_flags.o 00:02:32.778 CC lib/log/log_deprecated.o 00:02:32.778 CC lib/ut_mock/mock.o 00:02:32.778 CC lib/ut/ut.o 00:02:32.778 LIB libspdk_ut.a 00:02:32.778 LIB libspdk_log.a 00:02:32.778 LIB libspdk_ut_mock.a 00:02:33.037 CXX lib/trace_parser/trace.o 00:02:33.037 CC lib/util/base64.o 00:02:33.037 CC lib/util/cpuset.o 00:02:33.037 CC lib/ioat/ioat.o 00:02:33.037 CC lib/util/crc16.o 00:02:33.037 CC lib/util/bit_array.o 00:02:33.037 CC lib/util/crc32c.o 00:02:33.037 CC lib/util/crc32.o 00:02:33.037 CC lib/dma/dma.o 00:02:33.037 CC lib/vfio_user/host/vfio_user_pci.o 00:02:33.037 CC lib/vfio_user/host/vfio_user.o 00:02:33.037 CC lib/util/crc32_ieee.o 00:02:33.037 CC lib/util/crc64.o 00:02:33.037 CC lib/util/dif.o 00:02:33.295 LIB libspdk_dma.a 00:02:33.295 CC lib/util/fd.o 00:02:33.295 CC lib/util/file.o 00:02:33.295 CC lib/util/hexlify.o 00:02:33.295 CC lib/util/iov.o 00:02:33.295 CC lib/util/math.o 00:02:33.295 CC lib/util/pipe.o 00:02:33.295 LIB libspdk_ioat.a 00:02:33.295 CC lib/util/strerror_tls.o 00:02:33.295 CC lib/util/string.o 00:02:33.295 LIB libspdk_vfio_user.a 00:02:33.295 CC lib/util/uuid.o 00:02:33.558 CC lib/util/fd_group.o 00:02:33.558 CC lib/util/xor.o 00:02:33.558 CC lib/util/zipf.o 00:02:33.823 LIB libspdk_util.a 00:02:34.082 CC lib/json/json_parse.o 00:02:34.082 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:34.082 CC lib/rdma_provider/common.o 00:02:34.082 CC lib/conf/conf.o 00:02:34.082 CC lib/json/json_util.o 00:02:34.082 CC lib/vmd/vmd.o 00:02:34.082 CC lib/rdma_utils/rdma_utils.o 00:02:34.082 CC lib/idxd/idxd.o 00:02:34.082 CC lib/env_dpdk/env.o 00:02:34.082 LIB libspdk_trace_parser.a 00:02:34.082 CC lib/idxd/idxd_user.o 00:02:34.342 LIB libspdk_conf.a 00:02:34.342 CC lib/json/json_write.o 00:02:34.342 LIB libspdk_rdma_provider.a 00:02:34.342 CC lib/vmd/led.o 00:02:34.342 CC lib/env_dpdk/memory.o 00:02:34.342 CC lib/env_dpdk/pci.o 00:02:34.342 CC lib/env_dpdk/init.o 00:02:34.342 LIB libspdk_rdma_utils.a 00:02:34.342 CC lib/env_dpdk/threads.o 00:02:34.342 CC lib/env_dpdk/pci_ioat.o 00:02:34.342 CC lib/env_dpdk/pci_virtio.o 00:02:34.601 CC lib/env_dpdk/pci_vmd.o 00:02:34.601 LIB libspdk_json.a 00:02:34.601 CC lib/env_dpdk/pci_idxd.o 00:02:34.601 CC lib/env_dpdk/pci_event.o 00:02:34.601 CC lib/env_dpdk/sigbus_handler.o 00:02:34.601 CC lib/env_dpdk/pci_dpdk.o 00:02:34.601 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:34.601 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:34.858 LIB libspdk_idxd.a 00:02:34.859 LIB libspdk_vmd.a 00:02:34.859 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:34.859 CC lib/jsonrpc/jsonrpc_server.o 00:02:34.859 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:34.859 CC lib/jsonrpc/jsonrpc_client.o 00:02:35.117 LIB libspdk_jsonrpc.a 00:02:35.375 CC lib/rpc/rpc.o 00:02:35.633 LIB libspdk_rpc.a 00:02:35.633 LIB libspdk_env_dpdk.a 00:02:35.633 CC lib/trace/trace.o 00:02:35.633 CC lib/trace/trace_flags.o 00:02:35.633 CC lib/trace/trace_rpc.o 00:02:35.633 CC 
lib/keyring/keyring.o 00:02:35.633 CC lib/notify/notify_rpc.o 00:02:35.633 CC lib/notify/notify.o 00:02:35.633 CC lib/keyring/keyring_rpc.o 00:02:35.892 LIB libspdk_notify.a 00:02:35.892 LIB libspdk_keyring.a 00:02:35.892 LIB libspdk_trace.a 00:02:36.150 CC lib/thread/thread.o 00:02:36.150 CC lib/thread/iobuf.o 00:02:36.150 CC lib/sock/sock.o 00:02:36.150 CC lib/sock/sock_rpc.o 00:02:36.717 LIB libspdk_sock.a 00:02:36.717 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:36.717 CC lib/nvme/nvme_ctrlr.o 00:02:36.717 CC lib/nvme/nvme_ns_cmd.o 00:02:36.717 CC lib/nvme/nvme_fabric.o 00:02:36.717 CC lib/nvme/nvme_pcie_common.o 00:02:36.717 CC lib/nvme/nvme_ns.o 00:02:36.717 CC lib/nvme/nvme_pcie.o 00:02:36.717 CC lib/nvme/nvme_qpair.o 00:02:36.717 CC lib/nvme/nvme.o 00:02:37.650 CC lib/nvme/nvme_quirks.o 00:02:37.650 CC lib/nvme/nvme_transport.o 00:02:37.650 CC lib/nvme/nvme_discovery.o 00:02:37.650 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:37.650 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:37.650 CC lib/nvme/nvme_tcp.o 00:02:37.650 CC lib/nvme/nvme_opal.o 00:02:37.907 CC lib/nvme/nvme_io_msg.o 00:02:37.907 CC lib/nvme/nvme_poll_group.o 00:02:37.907 CC lib/nvme/nvme_zns.o 00:02:37.907 CC lib/nvme/nvme_stubs.o 00:02:38.164 CC lib/nvme/nvme_auth.o 00:02:38.164 LIB libspdk_thread.a 00:02:38.164 CC lib/nvme/nvme_cuse.o 00:02:38.164 CC lib/nvme/nvme_rdma.o 00:02:38.424 CC lib/accel/accel.o 00:02:38.424 CC lib/blob/blobstore.o 00:02:38.424 CC lib/accel/accel_rpc.o 00:02:38.424 CC lib/accel/accel_sw.o 00:02:38.424 CC lib/init/json_config.o 00:02:38.424 CC lib/virtio/virtio.o 00:02:38.691 CC lib/blob/request.o 00:02:38.691 CC lib/init/subsystem.o 00:02:38.691 CC lib/init/subsystem_rpc.o 00:02:38.948 CC lib/virtio/virtio_vhost_user.o 00:02:38.948 CC lib/virtio/virtio_vfio_user.o 00:02:38.948 CC lib/virtio/virtio_pci.o 00:02:38.948 CC lib/init/rpc.o 00:02:39.205 CC lib/blob/zeroes.o 00:02:39.205 LIB libspdk_init.a 00:02:39.205 CC lib/blob/blob_bs_dev.o 00:02:39.205 LIB libspdk_virtio.a 00:02:39.462 CC lib/event/app.o 00:02:39.462 CC lib/event/reactor.o 00:02:39.462 CC lib/event/log_rpc.o 00:02:39.462 CC lib/event/app_rpc.o 00:02:39.462 CC lib/event/scheduler_static.o 00:02:39.462 LIB libspdk_nvme.a 00:02:39.462 LIB libspdk_accel.a 00:02:39.720 CC lib/bdev/bdev_zone.o 00:02:39.720 CC lib/bdev/bdev.o 00:02:39.720 CC lib/bdev/bdev_rpc.o 00:02:39.720 CC lib/bdev/scsi_nvme.o 00:02:39.720 CC lib/bdev/part.o 00:02:39.978 LIB libspdk_event.a 00:02:42.544 LIB libspdk_blob.a 00:02:42.544 CC lib/lvol/lvol.o 00:02:42.544 CC lib/blobfs/tree.o 00:02:42.544 CC lib/blobfs/blobfs.o 00:02:43.480 LIB libspdk_bdev.a 00:02:43.480 CC lib/nvmf/ctrlr.o 00:02:43.480 CC lib/nvmf/ctrlr_discovery.o 00:02:43.480 CC lib/nbd/nbd.o 00:02:43.480 CC lib/ftl/ftl_init.o 00:02:43.480 CC lib/nbd/nbd_rpc.o 00:02:43.480 CC lib/ftl/ftl_core.o 00:02:43.480 CC lib/nvmf/ctrlr_bdev.o 00:02:43.480 CC lib/scsi/dev.o 00:02:43.738 CC lib/scsi/lun.o 00:02:43.738 LIB libspdk_blobfs.a 00:02:43.738 CC lib/ftl/ftl_layout.o 00:02:43.738 CC lib/ftl/ftl_debug.o 00:02:43.738 LIB libspdk_lvol.a 00:02:43.738 CC lib/scsi/port.o 00:02:43.738 CC lib/ftl/ftl_io.o 00:02:43.997 LIB libspdk_nbd.a 00:02:43.997 CC lib/ftl/ftl_sb.o 00:02:43.997 CC lib/nvmf/subsystem.o 00:02:43.997 CC lib/ftl/ftl_l2p.o 00:02:43.997 CC lib/scsi/scsi.o 00:02:43.997 CC lib/scsi/scsi_bdev.o 00:02:43.997 CC lib/scsi/scsi_pr.o 00:02:43.997 CC lib/scsi/scsi_rpc.o 00:02:44.255 CC lib/ftl/ftl_l2p_flat.o 00:02:44.255 CC lib/scsi/task.o 00:02:44.255 CC lib/ftl/ftl_nv_cache.o 00:02:44.255 CC lib/ftl/ftl_band.o 00:02:44.255 
CC lib/nvmf/nvmf.o 00:02:44.255 CC lib/nvmf/nvmf_rpc.o 00:02:44.255 CC lib/nvmf/transport.o 00:02:44.514 CC lib/nvmf/tcp.o 00:02:44.514 CC lib/nvmf/stubs.o 00:02:44.514 LIB libspdk_scsi.a 00:02:44.514 CC lib/nvmf/mdns_server.o 00:02:44.772 CC lib/nvmf/rdma.o 00:02:45.031 CC lib/iscsi/conn.o 00:02:45.031 CC lib/iscsi/init_grp.o 00:02:45.291 CC lib/iscsi/iscsi.o 00:02:45.291 CC lib/iscsi/md5.o 00:02:45.291 CC lib/iscsi/param.o 00:02:45.291 CC lib/iscsi/portal_grp.o 00:02:45.291 CC lib/iscsi/tgt_node.o 00:02:45.291 CC lib/iscsi/iscsi_subsystem.o 00:02:45.291 CC lib/ftl/ftl_band_ops.o 00:02:45.550 CC lib/iscsi/iscsi_rpc.o 00:02:45.550 CC lib/iscsi/task.o 00:02:45.550 CC lib/ftl/ftl_writer.o 00:02:45.809 CC lib/ftl/ftl_rq.o 00:02:45.809 CC lib/ftl/ftl_reloc.o 00:02:45.809 CC lib/ftl/ftl_l2p_cache.o 00:02:45.809 CC lib/ftl/ftl_p2l.o 00:02:45.809 CC lib/ftl/mngt/ftl_mngt.o 00:02:45.809 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:46.069 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:46.069 CC lib/vhost/vhost.o 00:02:46.069 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:46.328 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:46.328 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:46.328 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:46.328 CC lib/vhost/vhost_rpc.o 00:02:46.328 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:46.328 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:46.586 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:46.586 CC lib/vhost/vhost_scsi.o 00:02:46.586 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:46.586 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:46.586 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:46.586 CC lib/ftl/utils/ftl_conf.o 00:02:46.586 CC lib/vhost/vhost_blk.o 00:02:46.586 CC lib/ftl/utils/ftl_md.o 00:02:46.844 CC lib/vhost/rte_vhost_user.o 00:02:46.844 CC lib/ftl/utils/ftl_mempool.o 00:02:46.844 CC lib/ftl/utils/ftl_bitmap.o 00:02:46.844 LIB libspdk_iscsi.a 00:02:46.844 CC lib/ftl/utils/ftl_property.o 00:02:46.844 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:47.102 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:47.102 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:47.102 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:47.102 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:47.102 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:47.102 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:47.102 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:47.360 LIB libspdk_nvmf.a 00:02:47.360 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:47.360 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:47.360 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:47.360 CC lib/ftl/base/ftl_base_dev.o 00:02:47.360 CC lib/ftl/base/ftl_base_bdev.o 00:02:47.360 CC lib/ftl/ftl_trace.o 00:02:47.618 LIB libspdk_ftl.a 00:02:47.877 LIB libspdk_vhost.a 00:02:48.135 CC module/env_dpdk/env_dpdk_rpc.o 00:02:48.135 CC module/accel/ioat/accel_ioat.o 00:02:48.135 CC module/accel/error/accel_error.o 00:02:48.135 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:48.135 CC module/sock/posix/posix.o 00:02:48.135 CC module/accel/dsa/accel_dsa.o 00:02:48.135 CC module/keyring/linux/keyring.o 00:02:48.135 CC module/keyring/file/keyring.o 00:02:48.135 CC module/blob/bdev/blob_bdev.o 00:02:48.135 CC module/accel/iaa/accel_iaa.o 00:02:48.135 LIB libspdk_env_dpdk_rpc.a 00:02:48.135 CC module/accel/dsa/accel_dsa_rpc.o 00:02:48.135 CC module/keyring/file/keyring_rpc.o 00:02:48.135 CC module/keyring/linux/keyring_rpc.o 00:02:48.393 CC module/accel/error/accel_error_rpc.o 00:02:48.393 CC module/accel/ioat/accel_ioat_rpc.o 00:02:48.393 LIB libspdk_scheduler_dynamic.a 00:02:48.393 CC module/accel/iaa/accel_iaa_rpc.o 00:02:48.393 LIB libspdk_accel_dsa.a 00:02:48.393 LIB 
libspdk_keyring_file.a 00:02:48.393 LIB libspdk_keyring_linux.a 00:02:48.393 LIB libspdk_blob_bdev.a 00:02:48.393 LIB libspdk_accel_ioat.a 00:02:48.393 LIB libspdk_accel_error.a 00:02:48.393 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:48.393 CC module/scheduler/gscheduler/gscheduler.o 00:02:48.393 LIB libspdk_accel_iaa.a 00:02:48.650 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.650 CC module/bdev/error/vbdev_error.o 00:02:48.650 CC module/bdev/gpt/gpt.o 00:02:48.650 CC module/bdev/delay/vbdev_delay.o 00:02:48.650 CC module/blobfs/bdev/blobfs_bdev.o 00:02:48.650 LIB libspdk_scheduler_dpdk_governor.a 00:02:48.650 CC module/bdev/malloc/bdev_malloc.o 00:02:48.650 LIB libspdk_scheduler_gscheduler.a 00:02:48.650 CC module/bdev/null/bdev_null.o 00:02:48.650 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.650 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:48.908 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.908 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.908 CC module/bdev/null/bdev_null_rpc.o 00:02:48.908 CC module/bdev/error/vbdev_error_rpc.o 00:02:48.908 LIB libspdk_sock_posix.a 00:02:48.908 LIB libspdk_blobfs_bdev.a 00:02:48.908 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:48.908 CC module/bdev/nvme/bdev_nvme.o 00:02:49.165 LIB libspdk_bdev_delay.a 00:02:49.165 LIB libspdk_bdev_null.a 00:02:49.165 LIB libspdk_bdev_malloc.a 00:02:49.165 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:49.165 CC module/bdev/passthru/vbdev_passthru.o 00:02:49.165 LIB libspdk_bdev_error.a 00:02:49.165 LIB libspdk_bdev_gpt.a 00:02:49.165 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:49.165 CC module/bdev/raid/bdev_raid.o 00:02:49.165 CC module/bdev/raid/bdev_raid_rpc.o 00:02:49.165 CC module/bdev/raid/bdev_raid_sb.o 00:02:49.165 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:49.165 CC module/bdev/split/vbdev_split.o 00:02:49.423 CC module/bdev/split/vbdev_split_rpc.o 00:02:49.423 LIB libspdk_bdev_lvol.a 00:02:49.423 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:49.423 LIB libspdk_bdev_passthru.a 00:02:49.423 CC module/bdev/nvme/nvme_rpc.o 00:02:49.423 CC module/bdev/nvme/bdev_mdns_client.o 00:02:49.423 CC module/bdev/nvme/vbdev_opal.o 00:02:49.423 LIB libspdk_bdev_split.a 00:02:49.680 CC module/bdev/raid/raid0.o 00:02:49.680 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:49.680 LIB libspdk_bdev_zone_block.a 00:02:49.680 CC module/bdev/aio/bdev_aio.o 00:02:49.680 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:49.680 CC module/bdev/raid/raid1.o 00:02:49.938 CC module/bdev/raid/concat.o 00:02:49.938 CC module/bdev/iscsi/bdev_iscsi.o 00:02:49.938 CC module/bdev/ftl/bdev_ftl.o 00:02:49.938 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:49.938 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:49.938 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:49.938 CC module/bdev/aio/bdev_aio_rpc.o 00:02:49.938 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:50.247 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:50.247 CC module/bdev/raid/raid5f.o 00:02:50.247 LIB libspdk_bdev_ftl.a 00:02:50.247 LIB libspdk_bdev_aio.a 00:02:50.247 LIB libspdk_bdev_iscsi.a 00:02:50.515 LIB libspdk_bdev_virtio.a 00:02:50.773 LIB libspdk_bdev_raid.a 00:02:51.707 LIB libspdk_bdev_nvme.a 00:02:51.966 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.966 CC module/event/subsystems/keyring/keyring.o 00:02:51.966 CC module/event/subsystems/sock/sock.o 00:02:51.966 CC module/event/subsystems/vmd/vmd.o 00:02:51.966 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.966 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.966 CC 
module/event/subsystems/iobuf/iobuf.o 00:02:51.966 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:52.225 LIB libspdk_event_keyring.a 00:02:52.225 LIB libspdk_event_sock.a 00:02:52.225 LIB libspdk_event_vhost_blk.a 00:02:52.225 LIB libspdk_event_scheduler.a 00:02:52.225 LIB libspdk_event_vmd.a 00:02:52.225 LIB libspdk_event_iobuf.a 00:02:52.485 CC module/event/subsystems/accel/accel.o 00:02:52.485 LIB libspdk_event_accel.a 00:02:52.744 CC module/event/subsystems/bdev/bdev.o 00:02:53.003 LIB libspdk_event_bdev.a 00:02:53.003 CC module/event/subsystems/nbd/nbd.o 00:02:53.003 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.003 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.003 CC module/event/subsystems/scsi/scsi.o 00:02:53.261 LIB libspdk_event_nbd.a 00:02:53.261 LIB libspdk_event_scsi.a 00:02:53.261 LIB libspdk_event_nvmf.a 00:02:53.520 CC module/event/subsystems/iscsi/iscsi.o 00:02:53.520 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:53.520 LIB libspdk_event_vhost_scsi.a 00:02:53.779 LIB libspdk_event_iscsi.a 00:02:53.779 CXX app/trace/trace.o 00:02:53.779 CC app/trace_record/trace_record.o 00:02:53.779 TEST_HEADER include/spdk/ioat.h 00:02:53.779 TEST_HEADER include/spdk/blobfs.h 00:02:53.779 TEST_HEADER include/spdk/notify.h 00:02:53.779 TEST_HEADER include/spdk/pipe.h 00:02:53.779 TEST_HEADER include/spdk/accel.h 00:02:54.038 TEST_HEADER include/spdk/file.h 00:02:54.038 TEST_HEADER include/spdk/version.h 00:02:54.038 TEST_HEADER include/spdk/trace_parser.h 00:02:54.038 TEST_HEADER include/spdk/opal_spec.h 00:02:54.038 TEST_HEADER include/spdk/uuid.h 00:02:54.038 TEST_HEADER include/spdk/likely.h 00:02:54.038 TEST_HEADER include/spdk/dif.h 00:02:54.038 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:54.038 TEST_HEADER include/spdk/keyring_module.h 00:02:54.038 CC examples/util/zipf/zipf.o 00:02:54.038 TEST_HEADER include/spdk/memory.h 00:02:54.038 CC examples/ioat/perf/perf.o 00:02:54.038 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:54.038 TEST_HEADER include/spdk/dma.h 00:02:54.038 TEST_HEADER include/spdk/nbd.h 00:02:54.038 TEST_HEADER include/spdk/conf.h 00:02:54.038 TEST_HEADER include/spdk/env_dpdk.h 00:02:54.038 TEST_HEADER include/spdk/nvmf_spec.h 00:02:54.038 TEST_HEADER include/spdk/iscsi_spec.h 00:02:54.038 CC test/thread/poller_perf/poller_perf.o 00:02:54.038 TEST_HEADER include/spdk/mmio.h 00:02:54.038 TEST_HEADER include/spdk/json.h 00:02:54.038 TEST_HEADER include/spdk/opal.h 00:02:54.038 CC app/nvmf_tgt/nvmf_main.o 00:02:54.038 TEST_HEADER include/spdk/bdev.h 00:02:54.038 TEST_HEADER include/spdk/keyring.h 00:02:54.038 TEST_HEADER include/spdk/base64.h 00:02:54.038 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:54.038 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:54.038 TEST_HEADER include/spdk/fd.h 00:02:54.038 TEST_HEADER include/spdk/barrier.h 00:02:54.038 TEST_HEADER include/spdk/scsi_spec.h 00:02:54.038 TEST_HEADER include/spdk/zipf.h 00:02:54.038 TEST_HEADER include/spdk/nvmf.h 00:02:54.038 TEST_HEADER include/spdk/queue.h 00:02:54.038 CC test/app/bdev_svc/bdev_svc.o 00:02:54.038 TEST_HEADER include/spdk/xor.h 00:02:54.038 TEST_HEADER include/spdk/cpuset.h 00:02:54.038 TEST_HEADER include/spdk/thread.h 00:02:54.038 CC test/dma/test_dma/test_dma.o 00:02:54.038 TEST_HEADER include/spdk/bdev_zone.h 00:02:54.038 TEST_HEADER include/spdk/fd_group.h 00:02:54.038 TEST_HEADER include/spdk/tree.h 00:02:54.038 TEST_HEADER include/spdk/blob_bdev.h 00:02:54.038 TEST_HEADER include/spdk/crc64.h 00:02:54.038 TEST_HEADER include/spdk/assert.h 00:02:54.038 
TEST_HEADER include/spdk/nvme_spec.h 00:02:54.038 TEST_HEADER include/spdk/endian.h 00:02:54.038 TEST_HEADER include/spdk/pci_ids.h 00:02:54.038 TEST_HEADER include/spdk/log.h 00:02:54.038 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:54.038 TEST_HEADER include/spdk/ftl.h 00:02:54.038 TEST_HEADER include/spdk/config.h 00:02:54.038 TEST_HEADER include/spdk/vhost.h 00:02:54.038 TEST_HEADER include/spdk/bdev_module.h 00:02:54.038 TEST_HEADER include/spdk/nvme_intel.h 00:02:54.038 TEST_HEADER include/spdk/idxd_spec.h 00:02:54.038 TEST_HEADER include/spdk/crc16.h 00:02:54.038 TEST_HEADER include/spdk/nvme.h 00:02:54.038 TEST_HEADER include/spdk/stdinc.h 00:02:54.039 TEST_HEADER include/spdk/scsi.h 00:02:54.039 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:54.039 TEST_HEADER include/spdk/idxd.h 00:02:54.039 TEST_HEADER include/spdk/hexlify.h 00:02:54.039 TEST_HEADER include/spdk/reduce.h 00:02:54.039 TEST_HEADER include/spdk/crc32.h 00:02:54.039 LINK zipf 00:02:54.039 TEST_HEADER include/spdk/init.h 00:02:54.039 TEST_HEADER include/spdk/nvmf_transport.h 00:02:54.039 LINK poller_perf 00:02:54.039 TEST_HEADER include/spdk/nvme_zns.h 00:02:54.039 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:54.039 TEST_HEADER include/spdk/util.h 00:02:54.039 LINK interrupt_tgt 00:02:54.039 TEST_HEADER include/spdk/jsonrpc.h 00:02:54.298 TEST_HEADER include/spdk/env.h 00:02:54.298 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:54.298 LINK nvmf_tgt 00:02:54.298 TEST_HEADER include/spdk/lvol.h 00:02:54.298 TEST_HEADER include/spdk/histogram_data.h 00:02:54.298 TEST_HEADER include/spdk/event.h 00:02:54.298 TEST_HEADER include/spdk/trace.h 00:02:54.298 TEST_HEADER include/spdk/ioat_spec.h 00:02:54.298 LINK ioat_perf 00:02:54.298 LINK spdk_trace_record 00:02:54.298 TEST_HEADER include/spdk/string.h 00:02:54.298 TEST_HEADER include/spdk/ublk.h 00:02:54.298 TEST_HEADER include/spdk/bit_array.h 00:02:54.298 TEST_HEADER include/spdk/scheduler.h 00:02:54.298 TEST_HEADER include/spdk/blob.h 00:02:54.298 TEST_HEADER include/spdk/gpt_spec.h 00:02:54.298 TEST_HEADER include/spdk/sock.h 00:02:54.298 TEST_HEADER include/spdk/vmd.h 00:02:54.298 TEST_HEADER include/spdk/rpc.h 00:02:54.298 TEST_HEADER include/spdk/accel_module.h 00:02:54.298 TEST_HEADER include/spdk/bit_pool.h 00:02:54.298 CXX test/cpp_headers/ioat.o 00:02:54.298 LINK bdev_svc 00:02:54.298 LINK spdk_trace 00:02:54.557 CXX test/cpp_headers/blobfs.o 00:02:54.557 LINK test_dma 00:02:54.557 CXX test/cpp_headers/notify.o 00:02:54.816 CXX test/cpp_headers/pipe.o 00:02:54.816 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:54.816 CC examples/ioat/verify/verify.o 00:02:54.816 CC examples/sock/hello_world/hello_sock.o 00:02:54.816 CC examples/thread/thread/thread_ex.o 00:02:54.816 CC test/thread/lock/spdk_lock.o 00:02:55.075 CXX test/cpp_headers/accel.o 00:02:55.075 LINK verify 00:02:55.075 CXX test/cpp_headers/file.o 00:02:55.075 LINK hello_sock 00:02:55.333 LINK thread 00:02:55.333 LINK nvme_fuzz 00:02:55.333 CXX test/cpp_headers/version.o 00:02:55.333 CXX test/cpp_headers/trace_parser.o 00:02:55.591 CXX test/cpp_headers/opal_spec.o 00:02:55.591 CC test/app/histogram_perf/histogram_perf.o 00:02:55.591 CXX test/cpp_headers/uuid.o 00:02:55.850 CC test/app/jsoncat/jsoncat.o 00:02:55.850 LINK histogram_perf 00:02:55.850 CXX test/cpp_headers/likely.o 00:02:55.850 LINK jsoncat 00:02:55.850 CXX test/cpp_headers/dif.o 00:02:56.109 CXX test/cpp_headers/keyring_module.o 00:02:56.109 CC examples/vmd/lsvmd/lsvmd.o 00:02:56.368 CXX test/cpp_headers/memory.o 00:02:56.368 LINK lsvmd 
00:02:56.368 CXX test/cpp_headers/vfio_user_pci.o 00:02:56.627 CXX test/cpp_headers/dma.o 00:02:56.627 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:56.627 CC app/iscsi_tgt/iscsi_tgt.o 00:02:56.627 CXX test/cpp_headers/nbd.o 00:02:56.627 CXX test/cpp_headers/conf.o 00:02:56.886 CC examples/idxd/perf/perf.o 00:02:56.886 LINK spdk_lock 00:02:56.886 CXX test/cpp_headers/env_dpdk.o 00:02:56.886 LINK iscsi_tgt 00:02:56.886 CXX test/cpp_headers/nvmf_spec.o 00:02:57.145 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:57.145 CXX test/cpp_headers/iscsi_spec.o 00:02:57.145 LINK idxd_perf 00:02:57.403 CC examples/vmd/led/led.o 00:02:57.403 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:57.403 CXX test/cpp_headers/mmio.o 00:02:57.403 LINK led 00:02:57.403 CXX test/cpp_headers/json.o 00:02:57.662 CXX test/cpp_headers/opal.o 00:02:57.662 CXX test/cpp_headers/bdev.o 00:02:57.662 CC app/spdk_tgt/spdk_tgt.o 00:02:57.662 CC test/env/mem_callbacks/mem_callbacks.o 00:02:57.920 LINK vhost_fuzz 00:02:57.920 CXX test/cpp_headers/keyring.o 00:02:57.920 LINK spdk_tgt 00:02:57.920 CC test/rpc_client/rpc_client_test.o 00:02:58.203 CXX test/cpp_headers/base64.o 00:02:58.203 CC test/nvme/aer/aer.o 00:02:58.203 CXX test/cpp_headers/blobfs_bdev.o 00:02:58.203 LINK mem_callbacks 00:02:58.203 LINK rpc_client_test 00:02:58.203 CC test/app/stub/stub.o 00:02:58.483 CXX test/cpp_headers/nvme_ocssd.o 00:02:58.483 LINK aer 00:02:58.483 LINK stub 00:02:58.483 CXX test/cpp_headers/fd.o 00:02:58.741 LINK iscsi_fuzz 00:02:58.741 CC test/env/vtophys/vtophys.o 00:02:58.741 CC examples/nvme/hello_world/hello_world.o 00:02:58.741 CXX test/cpp_headers/barrier.o 00:02:58.741 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:58.741 CC test/env/memory/memory_ut.o 00:02:58.741 LINK vtophys 00:02:58.998 CXX test/cpp_headers/scsi_spec.o 00:02:58.998 LINK env_dpdk_post_init 00:02:58.998 LINK hello_world 00:02:59.257 CXX test/cpp_headers/zipf.o 00:02:59.257 CXX test/cpp_headers/nvmf.o 00:02:59.515 CXX test/cpp_headers/queue.o 00:02:59.515 CC test/nvme/reset/reset.o 00:02:59.516 CC examples/nvme/reconnect/reconnect.o 00:02:59.516 CXX test/cpp_headers/xor.o 00:02:59.516 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:59.773 CXX test/cpp_headers/cpuset.o 00:02:59.773 LINK memory_ut 00:02:59.773 CXX test/cpp_headers/thread.o 00:02:59.773 LINK reset 00:03:00.031 CC examples/nvme/arbitration/arbitration.o 00:03:00.031 LINK reconnect 00:03:00.031 CXX test/cpp_headers/bdev_zone.o 00:03:00.031 CXX test/cpp_headers/fd_group.o 00:03:00.031 CC test/env/pci/pci_ut.o 00:03:00.288 LINK nvme_manage 00:03:00.288 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:00.288 CXX test/cpp_headers/tree.o 00:03:00.288 CC test/unit/lib/log/log.c/log_ut.o 00:03:00.288 LINK arbitration 00:03:00.288 CXX test/cpp_headers/blob_bdev.o 00:03:00.288 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:00.546 LINK histogram_ut 00:03:00.546 CXX test/cpp_headers/crc64.o 00:03:00.546 LINK pci_ut 00:03:00.546 LINK log_ut 00:03:00.803 CXX test/cpp_headers/assert.o 00:03:00.803 CC examples/nvme/hotplug/hotplug.o 00:03:01.061 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:01.061 CXX test/cpp_headers/nvme_spec.o 00:03:01.061 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:01.061 CC test/nvme/sgl/sgl.o 00:03:01.061 CC examples/nvme/abort/abort.o 00:03:01.061 LINK common_ut 00:03:01.318 CXX test/cpp_headers/endian.o 00:03:01.318 LINK cmb_copy 00:03:01.318 LINK hotplug 00:03:01.318 LINK pmr_persistence 00:03:01.576 CC test/nvme/e2edp/nvme_dp.o 00:03:01.576 CXX 
test/cpp_headers/pci_ids.o 00:03:01.576 LINK sgl 00:03:01.576 CC test/accel/dif/dif.o 00:03:01.576 LINK abort 00:03:02.141 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:02.141 CC app/spdk_lspci/spdk_lspci.o 00:03:02.141 CXX test/cpp_headers/log.o 00:03:02.141 LINK nvme_dp 00:03:02.141 LINK spdk_lspci 00:03:02.141 LINK dif 00:03:02.399 LINK base64_ut 00:03:02.399 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:02.399 CC test/nvme/overhead/overhead.o 00:03:02.399 CC test/nvme/err_injection/err_injection.o 00:03:02.399 CXX test/cpp_headers/ftl.o 00:03:02.399 CC test/nvme/startup/startup.o 00:03:02.657 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:02.657 LINK err_injection 00:03:02.657 LINK startup 00:03:02.657 CC app/spdk_nvme_perf/perf.o 00:03:02.657 CXX test/cpp_headers/config.o 00:03:02.657 LINK overhead 00:03:02.657 CXX test/cpp_headers/vhost.o 00:03:02.915 CC examples/accel/perf/accel_perf.o 00:03:02.916 CXX test/cpp_headers/bdev_module.o 00:03:03.174 LINK bit_array_ut 00:03:03.174 CXX test/cpp_headers/nvme_intel.o 00:03:03.174 CXX test/cpp_headers/idxd_spec.o 00:03:03.432 CC test/nvme/reserve/reserve.o 00:03:03.432 CXX test/cpp_headers/crc16.o 00:03:03.432 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:03.432 CC test/nvme/simple_copy/simple_copy.o 00:03:03.432 LINK accel_perf 00:03:03.689 LINK reserve 00:03:03.689 CXX test/cpp_headers/nvme.o 00:03:03.689 LINK spdk_nvme_perf 00:03:03.689 LINK cpuset_ut 00:03:03.689 LINK simple_copy 00:03:03.689 CC app/spdk_nvme_identify/identify.o 00:03:03.947 CC app/spdk_nvme_discover/discovery_aer.o 00:03:03.947 CXX test/cpp_headers/stdinc.o 00:03:03.947 CC app/spdk_top/spdk_top.o 00:03:03.947 CXX test/cpp_headers/scsi.o 00:03:03.947 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:03.947 LINK spdk_nvme_discover 00:03:04.204 LINK crc16_ut 00:03:04.204 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.462 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:04.462 CXX test/cpp_headers/idxd.o 00:03:04.462 LINK crc32_ieee_ut 00:03:04.720 CXX test/cpp_headers/hexlify.o 00:03:04.720 CC examples/blob/hello_world/hello_blob.o 00:03:04.720 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:04.720 CC app/vhost/vhost.o 00:03:04.720 LINK spdk_nvme_identify 00:03:04.720 CXX test/cpp_headers/reduce.o 00:03:04.979 CC test/nvme/connect_stress/connect_stress.o 00:03:04.979 LINK hello_blob 00:03:04.979 LINK crc32c_ut 00:03:04.979 CC examples/blob/cli/blobcli.o 00:03:04.979 LINK spdk_top 00:03:04.979 LINK vhost 00:03:04.979 CXX test/cpp_headers/crc32.o 00:03:04.979 LINK connect_stress 00:03:04.979 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:05.238 CXX test/cpp_headers/init.o 00:03:05.238 LINK crc64_ut 00:03:05.238 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:05.238 CXX test/cpp_headers/nvmf_transport.o 00:03:05.497 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:05.497 LINK blobcli 00:03:05.497 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:05.497 CXX test/cpp_headers/nvme_zns.o 00:03:05.755 CC test/unit/lib/util/math.c/math_ut.o 00:03:05.755 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.755 LINK iov_ut 00:03:05.755 LINK math_ut 00:03:06.024 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:06.024 CXX test/cpp_headers/util.o 00:03:06.024 CC app/spdk_dd/spdk_dd.o 00:03:06.024 CC app/fio/nvme/fio_plugin.o 00:03:06.299 CXX test/cpp_headers/jsonrpc.o 00:03:06.299 LINK dma_ut 00:03:06.299 CC test/nvme/boot_partition/boot_partition.o 00:03:06.299 CXX test/cpp_headers/env.o 00:03:06.299 CXX test/cpp_headers/nvmf_cmd.o 00:03:06.557 LINK boot_partition 00:03:06.557 LINK spdk_dd 
00:03:06.557 CXX test/cpp_headers/lvol.o 00:03:06.557 LINK ioat_ut 00:03:06.557 LINK dif_ut 00:03:06.816 CXX test/cpp_headers/histogram_data.o 00:03:06.816 CC test/blobfs/mkfs/mkfs.o 00:03:06.816 CXX test/cpp_headers/event.o 00:03:06.816 LINK spdk_nvme 00:03:07.074 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:07.074 LINK mkfs 00:03:07.074 CXX test/cpp_headers/trace.o 00:03:07.074 CC test/unit/lib/util/string.c/string_ut.o 00:03:07.333 CXX test/cpp_headers/ioat_spec.o 00:03:07.333 LINK string_ut 00:03:07.333 CXX test/cpp_headers/string.o 00:03:07.591 CXX test/cpp_headers/ublk.o 00:03:07.591 CC test/nvme/compliance/nvme_compliance.o 00:03:07.591 CXX test/cpp_headers/bit_array.o 00:03:07.591 LINK pipe_ut 00:03:07.850 CXX test/cpp_headers/scheduler.o 00:03:07.850 CC test/nvme/fused_ordering/fused_ordering.o 00:03:07.850 CC test/event/event_perf/event_perf.o 00:03:07.850 CXX test/cpp_headers/blob.o 00:03:07.850 LINK nvme_compliance 00:03:07.850 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:07.850 LINK event_perf 00:03:07.850 CC app/fio/bdev/fio_plugin.o 00:03:08.109 LINK fused_ordering 00:03:08.109 CXX test/cpp_headers/gpt_spec.o 00:03:08.109 CXX test/cpp_headers/sock.o 00:03:08.367 CXX test/cpp_headers/vmd.o 00:03:08.368 CC examples/bdev/hello_world/hello_bdev.o 00:03:08.368 CC examples/bdev/bdevperf/bdevperf.o 00:03:08.368 CXX test/cpp_headers/rpc.o 00:03:08.627 LINK xor_ut 00:03:08.627 LINK spdk_bdev 00:03:08.627 LINK hello_bdev 00:03:08.627 CXX test/cpp_headers/accel_module.o 00:03:08.627 CC test/event/reactor/reactor.o 00:03:08.885 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:08.885 CXX test/cpp_headers/bit_pool.o 00:03:08.885 LINK reactor 00:03:09.148 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:09.148 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:09.148 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:09.405 LINK bdevperf 00:03:09.405 LINK doorbell_aers 00:03:09.405 CC test/lvol/esnap/esnap.o 00:03:09.663 CC test/event/reactor_perf/reactor_perf.o 00:03:09.663 LINK json_util_ut 00:03:09.921 LINK reactor_perf 00:03:10.179 LINK json_write_ut 00:03:10.179 CC test/event/app_repeat/app_repeat.o 00:03:10.179 CC test/bdev/bdevio/bdevio.o 00:03:10.179 LINK app_repeat 00:03:10.179 CC test/nvme/fdp/fdp.o 00:03:10.436 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:10.695 LINK bdevio 00:03:10.695 LINK fdp 00:03:10.953 CC test/nvme/cuse/cuse.o 00:03:10.953 LINK pci_event_ut 00:03:11.212 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:11.470 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:11.470 CC test/event/scheduler/scheduler.o 00:03:11.727 LINK json_parse_ut 00:03:11.727 LINK scheduler 00:03:11.985 LINK idxd_user_ut 00:03:11.985 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:11.985 LINK cuse 00:03:12.243 LINK jsonrpc_server_ut 00:03:12.501 LINK idxd_ut 00:03:12.759 CC examples/nvmf/nvmf/nvmf.o 00:03:12.759 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:13.017 LINK nvmf 00:03:13.950 LINK rpc_ut 00:03:14.243 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:14.243 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:14.243 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:14.243 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:14.243 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:14.243 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:14.808 LINK keyring_ut 00:03:15.066 LINK notify_ut 00:03:15.323 LINK iobuf_ut 00:03:15.580 LINK posix_ut 00:03:15.837 LINK esnap 00:03:16.095 LINK sock_ut 00:03:16.659 CC 
test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:16.659 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:16.659 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:16.659 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:16.659 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:16.659 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:16.659 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:16.659 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:16.659 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:16.916 LINK thread_ut 00:03:17.174 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:17.739 LINK nvme_ns_ut 00:03:17.739 LINK nvme_poll_group_ut 00:03:17.997 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:17.997 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:17.997 LINK nvme_ctrlr_cmd_ut 00:03:17.997 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:18.254 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:18.254 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:18.254 LINK nvme_ut 00:03:18.511 LINK nvme_quirks_ut 00:03:18.511 LINK nvme_ns_ocssd_cmd_ut 00:03:18.511 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:18.511 LINK nvme_qpair_ut 00:03:18.768 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:18.768 LINK nvme_ns_cmd_ut 00:03:18.768 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:18.768 LINK nvme_pcie_ut 00:03:18.768 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:19.025 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:19.025 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:19.282 LINK nvme_transport_ut 00:03:19.282 LINK nvme_io_msg_ut 00:03:19.539 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:19.798 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:19.798 LINK nvme_fabric_ut 00:03:19.798 LINK nvme_opal_ut 00:03:20.056 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:20.056 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:20.315 LINK nvme_pcie_common_ut 00:03:20.315 LINK nvme_ctrlr_ut 00:03:20.572 LINK blob_bdev_ut 00:03:20.572 LINK subsystem_ut 00:03:20.572 LINK rpc_ut 00:03:21.138 CC test/unit/lib/event/app.c/app_ut.o 00:03:21.138 LINK nvme_cuse_ut 00:03:21.138 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:21.396 LINK nvme_tcp_ut 00:03:21.655 LINK nvme_rdma_ut 00:03:21.913 LINK app_ut 00:03:22.171 LINK reactor_ut 00:03:22.171 LINK accel_ut 00:03:22.737 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:22.737 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:22.737 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:22.737 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:22.737 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:22.737 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:22.737 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:22.737 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:22.737 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:22.995 LINK scsi_nvme_ut 00:03:22.995 LINK bdev_zone_ut 00:03:23.254 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:23.254 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:23.512 LINK gpt_ut 00:03:23.769 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:23.769 LINK vbdev_zone_block_ut 00:03:24.027 LINK bdev_raid_sb_ut 00:03:24.285 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:24.542 LINK vbdev_lvol_ut 00:03:24.542 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:24.542 LINK concat_ut 
00:03:25.107 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:25.107 LINK raid1_ut 00:03:25.365 LINK bdev_raid_ut 00:03:25.622 LINK raid0_ut 00:03:26.554 LINK raid5f_ut 00:03:26.812 LINK part_ut 00:03:27.070 LINK bdev_ut 00:03:28.003 LINK blob_ut 00:03:28.591 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:28.591 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:28.591 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:28.591 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:28.591 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:28.849 LINK tree_ut 00:03:28.849 LINK blobfs_bdev_ut 00:03:28.849 LINK bdev_nvme_ut 00:03:29.108 LINK bdev_ut 00:03:29.365 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:29.365 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:29.365 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:29.628 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:03:29.628 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:29.628 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:29.628 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:29.886 LINK ftl_bitmap_ut 00:03:29.886 LINK blobfs_sync_ut 00:03:29.886 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:30.144 LINK dev_ut 00:03:30.144 LINK blobfs_async_ut 00:03:30.144 LINK ftl_l2p_ut 00:03:30.144 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:30.401 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:30.401 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:30.401 LINK ftl_io_ut 00:03:30.401 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:30.658 LINK ftl_mempool_ut 00:03:30.658 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:30.658 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:30.929 LINK lvol_ut 00:03:30.929 LINK ftl_p2l_ut 00:03:30.929 LINK ftl_mngt_ut 00:03:31.263 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:31.263 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:31.263 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:31.263 LINK lun_ut 00:03:31.521 LINK ftl_band_ut 00:03:31.521 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:31.780 LINK scsi_ut 00:03:31.780 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:32.039 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:32.039 LINK ftl_sb_ut 00:03:32.039 LINK ftl_layout_upgrade_ut 00:03:32.297 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:32.297 LINK ctrlr_bdev_ut 00:03:32.556 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:32.556 LINK scsi_pr_ut 00:03:32.556 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:32.814 LINK scsi_bdev_ut 00:03:33.073 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:33.073 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:33.332 LINK ctrlr_discovery_ut 00:03:33.332 LINK subsystem_ut 00:03:33.591 LINK nvmf_ut 00:03:33.591 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:33.850 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:33.850 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:34.108 LINK tcp_ut 00:03:34.108 LINK auth_ut 00:03:34.108 LINK ctrlr_ut 00:03:34.367 LINK init_grp_ut 00:03:34.367 LINK param_ut 00:03:34.367 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:34.631 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:34.890 LINK conn_ut 00:03:35.825 LINK portal_grp_ut 00:03:35.825 LINK tgt_node_ut 00:03:35.825 LINK vhost_ut 00:03:36.393 LINK transport_ut 00:03:36.393 LINK rdma_ut 00:03:36.393 LINK iscsi_ut 00:03:36.958 00:03:36.958 real 2m6.015s 00:03:36.958 user 10m26.087s 00:03:36.958 sys 1m54.402s 00:03:36.958 11:15:11 unittest_build -- 
common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:36.958 11:15:11 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:36.958 ************************************ 00:03:36.958 END TEST unittest_build 00:03:36.958 ************************************ 00:03:36.958 11:15:11 -- common/autotest_common.sh@1142 -- $ return 0 00:03:36.958 11:15:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:36.958 11:15:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:36.958 11:15:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:36.958 11:15:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:36.958 11:15:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:36.958 11:15:11 -- pm/common@44 -- $ pid=2421 00:03:36.958 11:15:11 -- pm/common@50 -- $ kill -TERM 2421 00:03:36.958 11:15:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:36.958 11:15:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:36.958 11:15:11 -- pm/common@44 -- $ pid=2422 00:03:36.958 11:15:11 -- pm/common@50 -- $ kill -TERM 2422 00:03:36.958 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 00:03:36.958 11:15:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:36.958 11:15:11 -- nvmf/common.sh@7 -- # uname -s 00:03:36.958 11:15:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:36.958 11:15:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:36.958 11:15:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:36.958 11:15:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:36.958 11:15:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:36.958 11:15:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:36.958 11:15:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:36.958 11:15:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:36.959 11:15:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:36.959 11:15:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:36.959 11:15:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:66b3466a-8d3a-4cfa-8840-0372b965fb48 00:03:36.959 11:15:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=66b3466a-8d3a-4cfa-8840-0372b965fb48 00:03:36.959 11:15:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:36.959 11:15:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:36.959 11:15:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:36.959 11:15:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:36.959 11:15:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:36.959 11:15:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:36.959 11:15:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:36.959 11:15:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:36.959 11:15:11 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:36.959 11:15:11 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:36.959 11:15:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:36.959 11:15:11 -- paths/export.sh@5 -- # export PATH 00:03:36.959 11:15:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:03:36.959 11:15:11 -- nvmf/common.sh@47 -- # : 0 00:03:36.959 11:15:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:36.959 11:15:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:36.959 11:15:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:36.959 11:15:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:36.959 11:15:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:36.959 11:15:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:36.959 11:15:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:36.959 11:15:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:36.959 11:15:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:36.959 11:15:11 -- spdk/autotest.sh@32 -- # uname -s 00:03:36.959 11:15:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:36.959 11:15:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:36.959 11:15:11 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:36.959 11:15:11 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:36.959 11:15:11 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:36.959 11:15:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:37.526 11:15:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:37.526 11:15:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:37.526 11:15:12 -- spdk/autotest.sh@48 -- # udevadm_pid=98912 00:03:37.526 11:15:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:37.526 11:15:12 -- pm/common@17 -- # local monitor 00:03:37.526 11:15:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.526 11:15:12 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:37.526 11:15:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:37.526 11:15:12 -- pm/common@25 -- # sleep 1 00:03:37.526 11:15:12 -- pm/common@21 -- # date +%s 00:03:37.526 11:15:12 -- pm/common@21 -- # date +%s 00:03:37.526 11:15:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720869312 00:03:37.526 11:15:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720869312 00:03:37.526 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720869312_collect-vmstat.pm.log 00:03:37.526 
Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720869312_collect-cpu-load.pm.log 00:03:38.461 11:15:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:38.461 11:15:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:38.461 11:15:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:38.461 11:15:13 -- common/autotest_common.sh@10 -- # set +x 00:03:38.719 11:15:13 -- spdk/autotest.sh@59 -- # create_test_list 00:03:38.719 11:15:13 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:38.719 11:15:13 -- common/autotest_common.sh@10 -- # set +x 00:03:38.719 11:15:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:38.719 11:15:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:38.719 11:15:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:38.719 11:15:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:38.719 11:15:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:38.719 11:15:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:38.719 11:15:13 -- common/autotest_common.sh@1455 -- # uname 00:03:38.719 11:15:13 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:38.719 11:15:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:38.719 11:15:13 -- common/autotest_common.sh@1475 -- # uname 00:03:38.719 11:15:13 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:38.719 11:15:13 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:38.719 11:15:13 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:38.719 11:15:13 -- spdk/autotest.sh@72 -- # hash lcov 00:03:38.719 11:15:13 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:38.719 11:15:13 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:38.719 --rc lcov_branch_coverage=1 00:03:38.719 --rc lcov_function_coverage=1 00:03:38.719 --rc genhtml_branch_coverage=1 00:03:38.719 --rc genhtml_function_coverage=1 00:03:38.719 --rc genhtml_legend=1 00:03:38.719 --rc geninfo_all_blocks=1 00:03:38.719 ' 00:03:38.719 11:15:13 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:38.719 --rc lcov_branch_coverage=1 00:03:38.719 --rc lcov_function_coverage=1 00:03:38.719 --rc genhtml_branch_coverage=1 00:03:38.719 --rc genhtml_function_coverage=1 00:03:38.719 --rc genhtml_legend=1 00:03:38.719 --rc geninfo_all_blocks=1 00:03:38.719 ' 00:03:38.719 11:15:13 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:38.719 --rc lcov_branch_coverage=1 00:03:38.719 --rc lcov_function_coverage=1 00:03:38.719 --rc genhtml_branch_coverage=1 00:03:38.719 --rc genhtml_function_coverage=1 00:03:38.719 --rc genhtml_legend=1 00:03:38.719 --rc geninfo_all_blocks=1 00:03:38.719 --no-external' 00:03:38.719 11:15:13 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:38.719 --rc lcov_branch_coverage=1 00:03:38.719 --rc lcov_function_coverage=1 00:03:38.719 --rc genhtml_branch_coverage=1 00:03:38.719 --rc genhtml_function_coverage=1 00:03:38.719 --rc genhtml_legend=1 00:03:38.719 --rc geninfo_all_blocks=1 00:03:38.719 --no-external' 00:03:38.719 11:15:13 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:38.719 lcov: LCOV version 1.15 00:03:38.719 11:15:13 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:40.622 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:40.622 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:40.622 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:40.623 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:40.623 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:40.623 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:40.882 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:40.882 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:40.882 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:37.107 11:16:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:37.107 11:16:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:37.107 11:16:03 -- common/autotest_common.sh@10 -- # set +x 00:04:37.107 11:16:03 -- spdk/autotest.sh@91 -- # rm -f 00:04:37.107 11:16:03 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.107 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:37.107 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:37.107 11:16:03 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:37.107 11:16:03 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:37.107 11:16:03 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:37.107 11:16:03 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:37.107 11:16:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.107 11:16:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:37.107 11:16:03 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:37.107 11:16:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.107 11:16:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.107 11:16:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:37.107 11:16:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.107 11:16:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:37.107 11:16:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:37.107 11:16:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:37.107 11:16:03 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:37.107 No valid GPT data, bailing 00:04:37.107 11:16:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:37.107 11:16:03 -- scripts/common.sh@391 -- # pt= 00:04:37.107 11:16:03 -- scripts/common.sh@392 -- # return 1 00:04:37.107 11:16:03 -- spdk/autotest.sh@114 -- # dd 
if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:37.107 1+0 records in 00:04:37.107 1+0 records out 00:04:37.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321139 s, 32.7 MB/s 00:04:37.107 11:16:03 -- spdk/autotest.sh@118 -- # sync 00:04:37.107 11:16:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:37.107 11:16:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:37.107 11:16:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:37.107 11:16:05 -- spdk/autotest.sh@124 -- # uname -s 00:04:37.107 11:16:05 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:37.107 11:16:05 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.107 11:16:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.107 11:16:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.107 11:16:05 -- common/autotest_common.sh@10 -- # set +x 00:04:37.107 ************************************ 00:04:37.107 START TEST setup.sh 00:04:37.107 ************************************ 00:04:37.107 11:16:05 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:37.107 * Looking for test storage... 00:04:37.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.107 11:16:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:37.107 11:16:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:37.107 11:16:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.107 11:16:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.107 11:16:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.107 11:16:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.107 ************************************ 00:04:37.107 START TEST acl 00:04:37.107 ************************************ 00:04:37.107 11:16:05 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:37.107 * Looking for test storage... 
00:04:37.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:37.107 11:16:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:37.107 11:16:05 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:37.107 11:16:05 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:37.107 11:16:05 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:37.107 11:16:05 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:37.107 11:16:05 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:37.107 11:16:05 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.107 11:16:05 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:37.107 11:16:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.107 11:16:05 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:37.107 11:16:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.107 11:16:05 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.107 11:16:05 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.107 Hugepages 00:04:37.107 node hugesize free / total 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.107 00:04:37.107 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:37.107 
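For reference, the PCI scan traced above boils down to a small read loop over the "Type BDF Vendor Device NUMA Driver ..." listing that setup.sh status prints: keep rows whose second column looks like a PCI BDF, whose driver column is nvme, and whose BDF is not in PCI_BLOCKED. A minimal sketch of that pattern, assuming the same column layout; the collect_nvme_devs name is made up for illustration and is not the acl.sh helper itself:

#!/usr/bin/env bash
# Hypothetical helper mirroring the device-collection trace above.
collect_nvme_devs() {
    local dev driver
    devs=()                 # BDFs of usable NVMe controllers
    declare -gA drivers     # BDF -> driver map
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue               # skip header / hugepage rows
        [[ $driver == nvme ]] || continue               # only NVMe-bound controllers
        [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue  # honor the block list
        devs+=("$dev")
        drivers["$dev"]=$driver
    done
}
# Example usage: collect_nvme_devs < <(sudo ./scripts/setup.sh status)

On this run that keeps only 0000:00:10.0 (driver nvme) and skips the virtio disk at 0000:00:03.0, since PCI_BLOCKED is empty at this point.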
11:16:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:37.107 11:16:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:37.107 11:16:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.107 11:16:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.107 11:16:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:37.107 ************************************ 00:04:37.107 START TEST denied 00:04:37.107 ************************************ 00:04:37.107 11:16:06 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:37.107 11:16:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:37.107 11:16:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:37.107 11:16:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:37.107 11:16:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.107 11:16:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.107 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:37.107 11:16:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:37.107 11:16:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:37.107 11:16:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:37.107 11:16:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:37.107 11:16:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:37.107 11:16:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:37.107 11:16:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:37.108 11:16:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:37.108 11:16:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.108 11:16:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.108 00:04:37.108 real 0m1.807s 00:04:37.108 user 0m0.457s 00:04:37.108 sys 0m1.408s 00:04:37.108 11:16:08 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.108 ************************************ 00:04:37.108 END TEST denied 00:04:37.108 ************************************ 00:04:37.108 11:16:08 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:37.108 11:16:08 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:37.108 11:16:08 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:37.108 11:16:08 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.108 11:16:08 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.108 11:16:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:37.108 ************************************ 00:04:37.108 START TEST allowed 00:04:37.108 ************************************ 00:04:37.108 11:16:08 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:37.108 11:16:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:37.108 11:16:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output 
config 00:04:37.108 11:16:08 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:37.108 11:16:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.108 11:16:08 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:37.108 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.108 11:16:09 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:37.108 11:16:09 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:37.108 11:16:09 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:37.108 11:16:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.108 11:16:09 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.108 00:04:37.108 real 0m1.956s 00:04:37.108 user 0m0.459s 00:04:37.108 sys 0m1.455s 00:04:37.108 ************************************ 00:04:37.108 END TEST allowed 00:04:37.108 ************************************ 00:04:37.108 11:16:10 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.108 11:16:10 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:37.108 11:16:10 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:37.108 00:04:37.108 real 0m4.925s 00:04:37.108 user 0m1.584s 00:04:37.108 sys 0m3.410s 00:04:37.108 11:16:10 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.108 11:16:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:37.108 ************************************ 00:04:37.108 END TEST acl 00:04:37.108 ************************************ 00:04:37.108 11:16:10 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:37.108 11:16:10 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:37.108 11:16:10 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.108 11:16:10 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.108 11:16:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.108 ************************************ 00:04:37.108 START TEST hugepages 00:04:37.108 ************************************ 00:04:37.108 11:16:10 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:37.108 * Looking for test storage... 
00:04:37.108 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 2792816 kB' 'MemAvailable: 7414600 kB' 'Buffers: 37720 kB' 'Cached: 4701700 kB' 'SwapCached: 0 kB' 'Active: 1230688 kB' 'Inactive: 3629844 kB' 'Active(anon): 130132 kB' 'Inactive(anon): 1804 kB' 'Active(file): 1100556 kB' 'Inactive(file): 3628040 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 464 kB' 'Writeback: 0 kB' 'AnonPages: 139284 kB' 'Mapped: 73036 kB' 'Shmem: 2624 kB' 'KReclaimable: 215260 kB' 'Slab: 306484 kB' 'SReclaimable: 215260 kB' 'SUnreclaim: 91224 kB' 'KernelStack: 4664 kB' 'PageTables: 3464 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4028396 kB' 'Committed_AS: 621972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14228 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.108 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:37.109 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:37.109 11:16:10 setup.sh.hugepages -- 
setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:37.110 11:16:10 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:37.110 11:16:10 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.110 11:16:10 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.110 11:16:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.110 ************************************ 00:04:37.110 START TEST default_setup 00:04:37.110 ************************************ 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 
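The hugepage bookkeeping traced above reduces to two steps: read Hugepagesize from /proc/meminfo (2048 kB on this machine) and divide the requested size by it, so get_test_nr_hugepages 2097152 0 settles on 1024 pages for node 0 before clear_hp zeroes the per-node counters and exports CLEAR_HUGE=yes. A minimal sketch of that arithmetic, with hypothetical helper names (read_meminfo_kb, pages_for) rather than the autotest functions themselves:

#!/usr/bin/env bash
# Hypothetical helpers mirroring the trace above, not the setup/common.sh functions.
read_meminfo_kb() {   # read_meminfo_kb Hugepagesize  -> prints the value in kB
    local var val
    while IFS=': ' read -r var val _; do
        [[ $var == "$1" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

pages_for() {         # pages_for <size_kB>  -> number of default-sized hugepages
    local size_kb=$1 page_kb
    page_kb=$(read_meminfo_kb Hugepagesize)     # 2048 on this run
    echo $(( size_kb / page_kb ))               # 2097152 / 2048 = 1024
}

# Reset per-node allocations before the test, mirroring the clear_hp trace (needs root):
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
done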
00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.110 11:16:10 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:37.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:37.110 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4886392 kB' 'MemAvailable: 9508092 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237360 kB' 'Inactive: 3629828 kB' 'Active(anon): 136784 kB' 'Inactive(anon): 1796 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 146032 kB' 'Mapped: 73040 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306392 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91228 kB' 'KernelStack: 4448 kB' 'PageTables: 3376 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 634892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14260 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.110 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:37.111 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4886392 kB' 'MemAvailable: 9508092 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237620 kB' 'Inactive: 3629828 kB' 'Active(anon): 137044 kB' 'Inactive(anon): 1796 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 146292 kB' 'Mapped: 73040 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306392 kB' 
'SReclaimable: 215164 kB' 'SUnreclaim: 91228 kB' 'KernelStack: 4448 kB' 'PageTables: 3376 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 634892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.112 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 
11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4886392 kB' 'MemAvailable: 9508092 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237844 kB' 'Inactive: 3629828 kB' 'Active(anon): 137268 kB' 'Inactive(anon): 1796 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 146156 kB' 'Mapped: 73040 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306392 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91228 kB' 'KernelStack: 4432 kB' 'PageTables: 3372 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 634892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.113 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 
11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.114 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 
11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:37.115 nr_hugepages=1024 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:37.115 
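The xtrace above is the default_setup test scanning /proc/meminfo once per key of interest: AnonHugePages, HugePages_Surp and HugePages_Rsvd all come back 0 here, with HugePages_Total (1024 in the dumps) read the same way just below. As a minimal illustrative sketch of that logic — get_meminfo is the helper named in the trace, but the body and the final accounting check below are reconstructed from the trace, not the verbatim setup/common.sh / setup/hugepages.sh source:

    #!/usr/bin/env bash
    # get_meminfo: echo the value of a single /proc/meminfo key (0 if absent).
    # Mirrors the IFS=': ' / read -r var val _ loop visible in the trace above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done </proc/meminfo
        echo 0
    }

    # The default_setup accounting then reduces to roughly this check
    # (1024 is the default hugepage count configured for this run):
    anon=$(get_meminfo AnonHugePages)      # 0 kB in the dumps above
    surp=$(get_meminfo HugePages_Surp)     # 0
    resv=$(get_meminfo HugePages_Rsvd)     # 0
    total=$(get_meminfo HugePages_Total)   # 1024
    (( 1024 == total + surp + resv )) && echo "default hugepage pool intact"

With surplus and reserved pages both 0, the sum equals the requested 1024 default hugepages, which is why the trace proceeds without adjusting the pool.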
resv_hugepages=0 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.115 surplus_hugepages=0 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.115 anon_hugepages=0 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4886620 kB' 'MemAvailable: 9508320 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237620 kB' 'Inactive: 3629832 kB' 'Active(anon): 137044 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 146320 kB' 'Mapped: 73040 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306392 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91228 kB' 'KernelStack: 4564 kB' 'PageTables: 3484 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 640032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.115 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.116 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4886516 kB' 'MemUsed: 7364580 kB' 'Active: 1237704 kB' 'Inactive: 3629832 kB' 'Active(anon): 137128 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'FilePages: 4739428 kB' 'Mapped: 73040 kB' 'AnonPages: 146960 kB' 'Shmem: 2616 kB' 'KernelStack: 4700 kB' 'PageTables: 3872 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215164 kB' 'Slab: 306392 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:37.117 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
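
The long field-by-field scans above and below all come from the same get_meminfo helper in setup/common.sh: it dumps /proc/meminfo (or the per-node copy under /sys/devices/system/node when a node id is given), strips the "Node N" prefix, and walks the fields until it reaches the requested key. A condensed sketch of that pattern follows; the name get_meminfo_sketch and the stand-alone form are illustrative, not the verbatim setup/common.sh source.

    #!/usr/bin/env bash
    # Condensed sketch of the get_meminfo pattern traced in this log.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node stats live in a per-node copy of meminfo when a node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it first.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    # e.g. get_meminfo_sketch HugePages_Total   -> 1024 on this host
    #      get_meminfo_sketch HugePages_Surp 0  -> 0 for node 0
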
00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.117 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.118 11:16:11 
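
Restating the verification traced at hugepages.sh@107 through @117 with this run's numbers (a simplified, stand-alone restatement that reuses the get_meminfo_sketch helper sketched earlier; the per-node starting value of 1024 is inferred from the "node0=1024 expecting 1024" echo below):

    # System-wide and per-node hugepage accounting for this run.
    nr_hugepages=1024                                  # requested by default_setup
    resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 in the dump above
    surp=$(get_meminfo_sketch HugePages_Surp)          # 0 in the dump above
    total=$(get_meminfo_sketch HugePages_Total)        # 1024 in the dump above

    # Everything the kernel reports must be accounted for.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting is off"

    # On this single-node VM, node 0 carries the whole pool plus any
    # reserved/surplus pages.
    node0=$(( nr_hugepages + resv + $(get_meminfo_sketch HugePages_Surp 0) ))
    echo "node0=$node0 expecting $nr_hugepages"        # -> node0=1024 expecting 1024
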
setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.118 node0=1024 expecting 1024 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:37.118 00:04:37.118 real 0m1.085s 00:04:37.118 user 0m0.265s 00:04:37.118 sys 0m0.792s 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.118 11:16:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:37.118 ************************************ 00:04:37.118 END TEST default_setup 00:04:37.118 ************************************ 00:04:37.118 11:16:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:37.118 11:16:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:37.118 11:16:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.118 11:16:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.118 11:16:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.118 ************************************ 00:04:37.118 START TEST per_node_1G_alloc 00:04:37.118 ************************************ 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.118 11:16:11 
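
The per_node_1G_alloc test that starts here asks for 1 GiB of hugepages pinned to a single node: get_test_nr_hugepages 1048576 0 converts the size into a page count using the 2048 kB default hugepage size, and setup.sh is then run with NRHUGE=512 HUGENODE=0. The arithmetic, plus the standard kernel sysfs counter that such a per-node request ultimately maps to, is sketched below; scripts/setup.sh's internals are not shown in this part of the log, so the sysfs write is illustrative rather than a quote of that script.

    # Sizing traced at hugepages.sh@49..@71: 1 GiB expressed in 2 MiB pages.
    size_kb=1048576                       # as passed to get_test_nr_hugepages
    default_hugepage_kb=2048              # Hugepagesize from the meminfo dumps above
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # -> 512, i.e. NRHUGE=512

    # Per-node 2 MiB pages are requested through the kernel's per-node counter;
    # something along these lines is what NRHUGE=512 HUGENODE=0 implies
    # (illustrative - not taken from scripts/setup.sh itself).
    echo "$nr_hugepages" | sudo tee \
        /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
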
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.118 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:37.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:37.118 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5933156 kB' 
'MemAvailable: 10554856 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237192 kB' 'Inactive: 3629832 kB' 'Active(anon): 136616 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 146668 kB' 'Mapped: 73024 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306564 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91400 kB' 'KernelStack: 4568 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 643272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.413 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 
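
The anon_hugepages=0 result just traced is gated on transparent hugepages: verify_nr_hugepages at hugepages.sh@96 only bothers reading AnonHugePages when the THP mode string from sysfs ("always [madvise] never" on this host, the bracketed word marking the active mode) is not set to never. Roughly, reusing the get_meminfo_sketch helper from earlier:

    # THP gate seen at hugepages.sh@96..@97: skip the AnonHugePages lookup when
    # transparent hugepages are globally disabled (mode string contains "[never]").
    anon=0
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)    # kB of anonymous THP in use
    fi
    echo "anon_hugepages=$anon"                     # -> anon_hugepages=0 on this run
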
00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.414 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.415 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.415 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.415 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5933416 kB' 'MemAvailable: 10555116 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237452 kB' 'Inactive: 3629832 kB' 'Active(anon): 136876 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 146540 kB' 'Mapped: 73024 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306564 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91400 kB' 'KernelStack: 4568 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 637036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: every /proc/meminfo key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match at setup/common.sh@32 and hits continue, each repeating the IFS=': ' / read -r var val _ pair]
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.416 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5933684 kB' 'MemAvailable: 10555384 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237712 kB' 'Inactive: 3629832 kB' 'Active(anon): 137136 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 146800 kB' 'Mapped: 73024 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306564 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91400 kB' 'KernelStack: 4568 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 642288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
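The hugepage figures in the snapshot immediately above are self-consistent: HugePages_Total of 512 at a Hugepagesize of 2048 kB accounts for the full 1048576 kB reported in the Hugetlb field, which lines up with the per_node_1G_alloc test name. A quick sanity check (illustrative bash, not part of the test scripts):

    # 512 hugepages x 2048 kB per page = 1048576 kB = 1 GiB
    echo "$(( 512 * 2048 )) kB"   # prints: 1048576 kB, matching the 'Hugetlb: 1048576 kB' field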
[trace condensed: every /proc/meminfo key from MemTotal through HugePages_Free fails the HugePages_Rsvd match at setup/common.sh@32 and hits continue, each repeating the IFS=': ' / read -r var val _ pair]
00:04:37.418 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:37.418 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:37.418 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.418 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:37.418 nr_hugepages=512
00:04:37.418 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:37.418 resv_hugepages=0
00:04:37.418 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:37.418 surplus_hugepages=0
00:04:37.418 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:37.418 anon_hugepages=0
00:04:37.418 11:16:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
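The repeated trace blocks above are all the same setup/common.sh get_meminfo helper walking /proc/meminfo (or a per-node meminfo file) one 'key: value' pair at a time, and the checks that follow at setup/hugepages.sh@107-@110 compare the configured page count against what that helper returns. Below is a minimal sketch of the same pattern, reconstructed only from the trace fragments visible here; the names get_meminfo_sketch and check_hugepages_sketch, the argument handling, and the error paths are illustrative assumptions, not the actual SPDK scripts:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <id> " prefixes

    get_meminfo_sketch() {            # assumption: mirrors the @16-@33 trace, not the real setup/common.sh
      local get=$1 node=${2:-}
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # the trace probes a per-node file first; fall back to the global one when it is absent
      if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node <id> " prefix; strip it
      local line
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace are this branch
        echo "$val"
        return 0
      done
      return 1
    }

    check_hugepages_sketch() {        # assumption: loose illustration of the @97-@110 checks, not hugepages.sh
      local want=$1                   # 512 in this run
      local anon surp resv total
      anon=$(get_meminfo_sketch AnonHugePages)    # 0 above
      surp=$(get_meminfo_sketch HugePages_Surp)   # 0 above
      resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 above
      total=$(get_meminfo_sketch HugePages_Total) # 512 above
      (( want == total + surp + resv )) || return 1
      (( want == total )) || return 1
    }

Called as check_hugepages_sketch 512, this succeeds exactly when the log above reports nr_hugepages=512 with zero reserved and surplus pages, which is what the assertions traced at setup/hugepages.sh@107 and @109 verify.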
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.418 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5933944 kB' 'MemAvailable: 10555644 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237452 kB' 'Inactive: 3629832 kB' 'Active(anon): 136876 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 146412 kB' 'Mapped: 73024 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306564 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91400 kB' 'KernelStack: 4568 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 641372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: every /proc/meminfo key from MemTotal through FilePmdMapped fails the HugePages_Total match at setup/common.sh@32 and hits continue, each repeating the IFS=': ' / read -r var val _ pair]
00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5933636 kB' 'MemUsed: 6317460 kB' 'Active: 1237712 kB' 'Inactive: 3629832 kB' 'Active(anon): 137136 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100576 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'FilePages: 4739428 kB' 'Mapped: 
73024 kB' 'AnonPages: 146672 kB' 'Shmem: 2616 kB' 'KernelStack: 4568 kB' 'PageTables: 3664 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215164 kB' 'Slab: 306564 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.420 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 
11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.421 node0=512 expecting 512 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:37.421 00:04:37.421 real 0m0.629s 00:04:37.421 user 0m0.266s 00:04:37.421 sys 0m0.396s 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.421 ************************************ 00:04:37.421 END TEST per_node_1G_alloc 00:04:37.421 11:16:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:37.421 ************************************ 00:04:37.421 11:16:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:37.421 11:16:12 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:37.421 11:16:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.421 11:16:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.421 11:16:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.421 ************************************ 00:04:37.421 START TEST even_2G_alloc 00:04:37.421 ************************************ 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.421 11:16:12 
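The per_node_1G_alloc trace above repeatedly exercises the get_meminfo helper from setup/common.sh: it reads either /proc/meminfo or /sys/devices/system/node/nodeN/meminfo, strips the "Node N " prefix from the per-node file, and scans field by field (the long runs of "continue" lines) until the requested key such as HugePages_Total or HugePages_Surp matches, then echoes its value. Below is a minimal standalone bash sketch of that scan pattern, not the SPDK script itself; the function name and argument handling are illustrative assumptions.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used when stripping "Node N "

# Sketch only: print the value of one meminfo field, optionally for one NUMA node.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _ line
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the trace's repeated 'continue' lines are this skip
        echo "$val"
        return 0
    done
    return 1
}

# Example: get_meminfo_sketch HugePages_Surp 0   -> prints 0 for the node traced above.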
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.421 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:37.685 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:37.685 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # 
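At this point the even_2G_alloc test has sized its request: get_test_nr_hugepages is called with 2097152 (kB, i.e. 2 GiB), which with the 2048 kB hugepage size reported in this log works out to the nr_hugepages=1024 seen in the trace, all of it placed on the single node before setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A small bash sketch of that arithmetic follows; it is illustrative only (the variable names are assumptions, and the division step is inferred from the resulting value rather than copied from hugepages.sh).

#!/usr/bin/env bash
size_kb=2097152                                  # 2 GiB request, expressed in kB
hugepagesize_kb=2048                             # Hugepagesize reported by /proc/meminfo here
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # 2097152 / 2048 = 1024
declare -a nodes_test
nodes_test[0]=$nr_hugepages                      # single NUMA node, so node 0 gets everything
echo "requesting ${nodes_test[0]} hugepages on node 0"
# The test then re-invokes the setup script, as the trace shows:
#   NRHUGE=1024 HUGE_EVEN_ALLOC=yes /home/vagrant/spdk_repo/spdk/scripts/setup.sh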
mapfile -t mem 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4883972 kB' 'MemAvailable: 9505676 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237520 kB' 'Inactive: 3629832 kB' 'Active(anon): 136940 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100580 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 146104 kB' 'Mapped: 73060 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306616 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91452 kB' 'KernelStack: 4540 kB' 'PageTables: 3604 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 633788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 
11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.257 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.258 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884232 kB' 'MemAvailable: 9505936 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237780 kB' 'Inactive: 3629832 kB' 'Active(anon): 137200 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100580 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 146364 kB' 'Mapped: 73060 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306616 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91452 kB' 'KernelStack: 4540 kB' 'PageTables: 3604 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 633788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.258 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 
11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.259 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884232 kB' 'MemAvailable: 9505936 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237616 kB' 'Inactive: 3629832 kB' 'Active(anon): 137036 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100580 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 146460 kB' 'Mapped: 73060 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306616 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91452 kB' 'KernelStack: 4524 kB' 'PageTables: 3576 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 639064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.260 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:38.261 nr_hugepages=1024 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.261 resv_hugepages=0 00:04:38.261 surplus_hugepages=0 00:04:38.261 anon_hugepages=0 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.261 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884232 kB' 'MemAvailable: 9505936 kB' 'Buffers: 37720 kB' 'Cached: 4701708 kB' 'SwapCached: 0 kB' 'Active: 1237876 kB' 'Inactive: 3629832 kB' 'Active(anon): 137296 kB' 'Inactive(anon): 1800 kB' 
'Active(file): 1100580 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'AnonPages: 146720 kB' 'Mapped: 73060 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306616 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91452 kB' 'KernelStack: 4592 kB' 'PageTables: 3576 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 645028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.262 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.262 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884492 kB' 'MemUsed: 7366604 kB' 'Active: 1237876 kB' 'Inactive: 3629832 kB' 'Active(anon): 137296 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100580 kB' 'Inactive(file): 3628032 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 488 kB' 'Writeback: 0 kB' 'FilePages: 4739428 kB' 'Mapped: 73060 kB' 'AnonPages: 146332 kB' 'Shmem: 2616 kB' 'KernelStack: 4660 kB' 'PageTables: 3576 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215164 kB' 'Slab: 306616 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 
11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.263 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.264 node0=1024 expecting 1024 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.264 00:04:38.264 real 0m0.873s 00:04:38.264 user 0m0.244s 00:04:38.264 sys 0m0.657s 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.264 11:16:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:38.264 ************************************ 00:04:38.264 END TEST even_2G_alloc 00:04:38.264 ************************************ 00:04:38.264 11:16:12 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:38.264 11:16:12 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:38.264 11:16:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.264 11:16:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.264 11:16:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:38.522 ************************************ 00:04:38.522 START TEST odd_alloc 00:04:38.522 ************************************ 00:04:38.522 11:16:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 
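The get_meminfo trace above reduces to a small lookup: use /proc/meminfo, or the per-node /sys/devices/system/node/node<N>/meminfo file when a node is given, strip the leading "Node <N> " prefix, then read "key: value" pairs until the requested key matches and print its value. Below is a minimal sketch of that pattern, reconstructed from the xtrace rather than copied from setup/common.sh; get_meminfo_sketch is a hypothetical name.

shopt -s extglob                       # needed for the +([0-9]) prefix strip below

# Sketch only: mirrors the lookup pattern exercised in the xtrace above.
# Usage: get_meminfo_sketch <key> [<numa node>]
get_meminfo_sketch() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    local -a mem
    # Per-node statistics live in sysfs and carry a "Node <N> " prefix.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
# e.g. get_meminfo_sketch HugePages_Surp 0 prints 0 on this node, which is why
# the (( nodes_test[node] += 0 )) correction above leaves node0 at the expected
# 1024 pages for the even_2G_alloc case.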
00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.523 11:16:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:38.781 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:38.781 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4882808 kB' 'MemAvailable: 9504516 kB' 'Buffers: 37720 kB' 'Cached: 4701712 kB' 'SwapCached: 0 kB' 'Active: 1238080 kB' 'Inactive: 3629836 kB' 'Active(anon): 137500 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100580 kB' 'Inactive(file): 3628036 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 496 kB' 'Writeback: 0 kB' 'AnonPages: 146868 kB' 'Mapped: 73068 kB' 'Shmem: 2616 kB' 'KReclaimable: 215164 kB' 'Slab: 306812 kB' 'SReclaimable: 215164 kB' 'SUnreclaim: 91648 kB' 'KernelStack: 4576 kB' 'PageTables: 4008 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 634964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14244 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.354 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 
11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.355 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4883028 kB' 'MemAvailable: 9504808 kB' 'Buffers: 37720 kB' 'Cached: 4701716 kB' 'SwapCached: 0 kB' 'Active: 1238304 kB' 'Inactive: 3629840 kB' 'Active(anon): 137724 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100580 kB' 'Inactive(file): 3628040 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 500 kB' 'Writeback: 0 kB' 'AnonPages: 147132 kB' 'Mapped: 73016 kB' 'Shmem: 2616 kB' 'KReclaimable: 215232 kB' 'Slab: 306984 kB' 'SReclaimable: 215232 kB' 'SUnreclaim: 91752 kB' 'KernelStack: 4492 kB' 'PageTables: 3916 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 634964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14244 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.355 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.355 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
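For odd_alloc the requested pool is deliberately odd: HUGEMEM=2049 translates to a 2,098,176 kB request (2049 x 1024 kB), which the setup above turned into nr_hugepages=1025, and 1025 pages x 2048 kB = 2,099,200 kB, matching the 'Hugetlb: 2099200 kB' and 'HugePages_Total: 1025' figures in the meminfo snapshots. Because transparent hugepages are not set to "[never]" here ("always [madvise] never" is the usual format of /sys/kernel/mm/transparent_hugepage/enabled), the verify_nr_hugepages pass first samples AnonHugePages, then HugePages_Surp, then HugePages_Rsvd, before tallying per-node counts. The following is a simplified, single-node sketch of those checks; the helper and function names are hypothetical and the check is inferred from the xtrace, not taken from setup/hugepages.sh.

# Simplified single-node sketch of the checks this verify_nr_hugepages trace
# walks through; names are hypothetical, values in comments are from this run.
meminfo_val() {                          # read one key from /proc/meminfo
    awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo
}

verify_odd_alloc_sketch() {
    local expected=1025 anon surp resv total
    anon=$(meminfo_val AnonHugePages)     # 0 kB: THP is "[madvise]" and unused here
    surp=$(meminfo_val HugePages_Surp)    # 0
    resv=$(meminfo_val HugePages_Rsvd)    # 0
    total=$(meminfo_val HugePages_Total)  # 1025
    # With no surplus or reserved pages, the pool should hold exactly the odd
    # count requested via HUGEMEM=2049 (1025 pages * 2048 kB = 2,099,200 kB).
    (( anon == 0 && total - surp - resv == expected ))
}

In the trace, each of those lookups appears as a long chain of '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' (or '..._R\s\v\d') comparisons followed by 'continue', one per meminfo field, until the requested key is reached and its value is echoed.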
00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.356 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 
11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4883488 kB' 'MemAvailable: 9505300 kB' 'Buffers: 37720 kB' 'Cached: 4701716 kB' 'SwapCached: 0 kB' 'Active: 1238200 kB' 'Inactive: 3629840 kB' 'Active(anon): 137620 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100580 kB' 'Inactive(file): 3628040 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 504 kB' 'Writeback: 0 kB' 'AnonPages: 146900 kB' 'Mapped: 73016 kB' 'Shmem: 2616 kB' 'KReclaimable: 215264 kB' 'Slab: 306992 kB' 'SReclaimable: 215264 kB' 'SUnreclaim: 91728 kB' 'KernelStack: 4444 kB' 'PageTables: 3816 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 634964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14260 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.357 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.358 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.359 nr_hugepages=1025 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:39.359 resv_hugepages=0 00:04:39.359 surplus_hugepages=0 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.359 anon_hugepages=0 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4883748 kB' 'MemAvailable: 9505560 kB' 'Buffers: 37720 kB' 'Cached: 4701716 kB' 'SwapCached: 0 kB' 'Active: 1238200 kB' 'Inactive: 3629840 kB' 'Active(anon): 137620 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100580 kB' 'Inactive(file): 3628040 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 504 kB' 'Writeback: 0 kB' 'AnonPages: 147292 kB' 'Mapped: 73016 kB' 'Shmem: 2616 kB' 'KReclaimable: 215264 kB' 'Slab: 306992 kB' 'SReclaimable: 215264 kB' 'SUnreclaim: 91728 kB' 'KernelStack: 4512 kB' 'PageTables: 3428 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5075948 kB' 'Committed_AS: 635088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14260 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.359 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 
11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.360 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884064 kB' 'MemUsed: 7367032 kB' 'Active: 1238544 kB' 'Inactive: 3629856 kB' 'Active(anon): 137960 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100584 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 508 kB' 'Writeback: 0 kB' 'FilePages: 4739464 kB' 'Mapped: 73028 kB' 'AnonPages: 147452 kB' 'Shmem: 2616 kB' 'KernelStack: 4564 kB' 'PageTables: 3468 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215264 kB' 'Slab: 306928 kB' 'SReclaimable: 215264 kB' 'SUnreclaim: 91664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.360 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:39.361 node0=1025 expecting 1025 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:39.361 00:04:39.361 real 0m0.877s 00:04:39.361 user 0m0.262s 00:04:39.361 sys 0m0.648s 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.361 11:16:13 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.361 ************************************ 00:04:39.361 END TEST odd_alloc 00:04:39.362 ************************************ 00:04:39.362 11:16:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:39.362 11:16:13 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:39.362 11:16:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.362 11:16:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.362 11:16:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.362 ************************************ 00:04:39.362 START TEST custom_alloc 00:04:39.362 ************************************ 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:39.362 11:16:13 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.362 11:16:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:39.621 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5933880 kB' 'MemAvailable: 10555696 kB' 'Buffers: 37720 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238300 kB' 'Inactive: 3629860 kB' 'Active(anon): 137712 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100588 kB' 'Inactive(file): 3628060 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 216 kB' 
'Writeback: 0 kB' 'AnonPages: 146584 kB' 'Mapped: 73208 kB' 'Shmem: 2616 kB' 'KReclaimable: 215240 kB' 'Slab: 305824 kB' 'SReclaimable: 215240 kB' 'SUnreclaim: 90584 kB' 'KernelStack: 4548 kB' 'PageTables: 3456 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 623084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14260 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 
11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.885 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5933880 kB' 'MemAvailable: 10555696 kB' 'Buffers: 37720 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238560 kB' 'Inactive: 3629860 kB' 
'Active(anon): 137972 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100588 kB' 'Inactive(file): 3628060 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 146456 kB' 'Mapped: 73208 kB' 'Shmem: 2616 kB' 'KReclaimable: 215240 kB' 'Slab: 305824 kB' 'SReclaimable: 215240 kB' 'SUnreclaim: 90584 kB' 'KernelStack: 4548 kB' 'PageTables: 3456 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 629148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.886 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.887 11:16:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.887 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5934220 kB' 'MemAvailable: 10556036 kB' 'Buffers: 37720 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238232 kB' 'Inactive: 3629856 kB' 'Active(anon): 137640 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100592 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 146716 kB' 'Mapped: 73112 kB' 'Shmem: 2616 kB' 'KReclaimable: 215240 kB' 'Slab: 305824 kB' 'SReclaimable: 215240 kB' 'SUnreclaim: 90584 kB' 'KernelStack: 4500 kB' 'PageTables: 3380 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 633940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14276 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.888 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.889 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:39.889 nr_hugepages=512 00:04:39.889 resv_hugepages=0 
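
The three get_meminfo calls traced above (AnonHugePages, HugePages_Surp, HugePages_Rsvd) all resolve to 0 in this run; each one walks /proc/meminfo field by field, splitting every line on IFS=': ' and echoing the value once the requested key matches, otherwise falling through to "echo 0". A minimal standalone sketch of that lookup, assuming the system-wide /proc/meminfo path taken here (node= is empty) and using an illustrative helper name rather than the real setup/common.sh function:

#!/usr/bin/env bash
# Sketch (illustrative, not the real setup/common.sh helper): look up one
# field from /proc/meminfo the way the traced get_meminfo calls do, i.e.
# split each "Key:   value kB" line on IFS=': ' and print the value, or 0
# if the key never matches (the "echo 0" fall-through seen in the log).
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

get_meminfo_sketch HugePages_Total   # would print 512 on the node above
get_meminfo_sketch HugePages_Rsvd    # would print 0

Run against the meminfo dumps in this log, such a lookup yields 512 for HugePages_Total and 0 for the surplus, reserved and anonymous-hugepage counters, matching the anon=0, surp=0 and resv=0 values the script records.
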
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-25 -- # local get=HugePages_Total node= var val mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28-31 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5934488 kB' 'MemAvailable: 10556304 kB' 'Buffers: 37720 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1237868 kB' 'Inactive: 3629856 kB' 'Active(anon): 137276 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100592 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 146484 kB' 'Mapped: 73112 kB' 'Shmem: 2616 kB' 'KReclaimable: 215240 kB' 'Slab: 305824 kB' 'SReclaimable: 215240 kB' 'SUnreclaim: 90584 kB' 'KernelStack: 4496 kB' 'PageTables: 3492 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5601260 kB' 'Committed_AS: 638776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:39.890 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 [xtrace elided: per-field scan of the snapshot above for HugePages_Total; every non-matching field hits 'continue']
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-24 -- # local get=HugePages_Surp node=0 var val mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node0/meminfo ]]; mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@28-31 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 5934172 kB' 'MemUsed: 6316924 kB' 'Active: 1237868 kB' 'Inactive: 3629856 kB' 'Active(anon): 137276 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100592 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 4739464 kB' 'Mapped: 73112 kB' 'AnonPages: 147264 kB' 'Shmem: 2616 kB' 'KernelStack: 4496 kB' 'PageTables: 3492 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215240 kB' 'Slab: 305824 kB' 'SReclaimable: 215240 kB' 'SUnreclaim: 90584 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
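Around hugepages.sh@107-117 the trace checks that the configured 512 pages equal nr_hugepages + surplus + reserved, then re-reads the per-node counters from /sys/devices/system/node/node0/meminfo (whose lines carry a "Node 0 " prefix, stripped at common.sh@29). A standalone sketch of that accounting, assuming a single-node machine; variable names are illustrative:

  expected=512 surp=0 resv=0

  # System-wide total from /proc/meminfo (HugePages_* lines have no "kB" suffix).
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  (( total == expected + surp + resv )) && echo "system total OK: $total"

  # Per-node counters live in each node's own meminfo and are prefixed "Node N ",
  # so take the last field rather than $2.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node_total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
      echo "${node_dir##*/}=$node_total expecting $expected"
  done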
00:04:39.891 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 [xtrace elided: per-field scan of the node0 snapshot above for HugePages_Surp; every non-matching field hits 'continue']
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=512 expecting 512
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
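The sorted_t/sorted_s assignments at hugepages.sh@127 use each node's page count as an array index, which reduces the expected and observed per-node distributions to sets of distinct counts. A sketch of the idiom follows; the final comparison line is only one plausible way to consume the two arrays, since this excerpt does not show how hugepages.sh actually does it:

  nodes_test=( [0]=512 )      # expected per-node counts (values from the run above)
  nodes_sys=( [0]=512 )       # counts read back from sysfs

  sorted_t=() sorted_s=()
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1    # value used as index: builds a set of distinct counts
      sorted_s[nodes_sys[node]]=1
  done

  # Illustration only: identical index lists mean the two distributions match.
  [[ "${!sorted_t[*]}" == "${!sorted_s[*]}" ]] && echo "per-node hugepage counts match"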
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:39.892 
00:04:39.892 real 0m0.632s
00:04:39.892 user 0m0.237s
00:04:39.892 sys 0m0.425s
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:39.892 11:16:14 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:39.892 ************************************
00:04:39.892 END TEST custom_alloc
00:04:39.892 ************************************
00:04:39.892 11:16:14 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:39.892 11:16:14 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:39.892 11:16:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:39.892 11:16:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.892 11:16:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:39.892 ************************************
00:04:39.892 START TEST no_shrink_alloc
00:04:39.892 ************************************
00:04:39.892 11:16:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:39.892 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:39.893 11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
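get_test_nr_hugepages is entered with 2097152 above and settles on nr_hugepages=1024, which is consistent with the argument being a size in kB divided by the 2048 kB default huge page size: 2097152 / 2048 = 1024. A hedged reconstruction of that conversion, not the SPDK helper itself:

  size_kb=2097152                                                      # argument seen in the trace
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this runner
  echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                 # 2097152 / 2048 = 1024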
11:16:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:40.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:40.151 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-25 -- # local get=AnonHugePages node= var val mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]
00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28-31 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884480 kB' 'MemAvailable: 9506276 kB' 'Buffers: 37728 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238368 kB' 'Inactive: 3629856 kB' 'Active(anon): 137768 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100600 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 146996 kB' 'Mapped: 72804 kB' 'Shmem: 2616 kB' 'KReclaimable: 215212 kB' 'Slab: 306332 kB' 'SReclaimable: 215212 kB' 'SUnreclaim: 91120 kB' 'KernelStack: 4480 kB' 'PageTables: 3460 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 638492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
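The check at hugepages.sh@96 tests the runner's transparent-hugepage mode string ("always [madvise] never") against *\[\n\e\v\e\r\]*, so AnonHugePages is only consulted when THP is not pinned to "never". A standalone sketch of that gate; the fallback to 0 when THP is disabled is an assumption, as is the variable naming:

  thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
  if [[ $thp_mode != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB of THP-backed anonymous memory
  else
      anon=0                                                     # assumed fallback when THP is off
  fi
  echo "anon_hugepages=$anon"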
Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.722 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.723 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
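The lookups traced above reduce to splitting /proc/meminfo on ': ' and returning the value stored under one key. A minimal, self-contained sketch of that pattern (get_meminfo_value is an illustrative name, not the actual setup/common.sh helper):

    get_meminfo_value() {                     # usage: get_meminfo_value <key>
        local key=$1 var val _
        while IFS=': ' read -r var val _; do  # "AnonHugePages:   0 kB" -> var=AnonHugePages, val=0
            [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        echo 0                                # assumption: report 0 when the key is absent
    }

    get_meminfo_value AnonHugePages           # prints 0 on this test VM, matching the trace above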
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4885000 kB' 'MemAvailable: 9506796 kB' 'Buffers: 37728 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238424 kB' 'Inactive: 3629856 kB' 'Active(anon): 137824 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100600 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 147264 kB' 'Mapped: 72868 kB' 'Shmem: 2616 kB' 'KReclaimable: 215212 kB' 'Slab: 306332 kB' 'SReclaimable: 215212 kB' 'SUnreclaim: 91120 kB' 'KernelStack: 4480 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 633656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14260 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.724 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
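The 'local node=' and '[[ -e /sys/devices/system/node/node/meminfo ]]' entries above are the per-NUMA-node branch of the same lookup: when a node id is supplied, the per-node meminfo file is read and its 'Node <id> ' prefix is stripped before matching, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion does. An illustrative equivalent for a single key, assuming node 0 exposes a per-node meminfo file:

    # Strip the "Node 0 " prefix, then print the value for one key (0 if unset).
    sed 's/^Node [0-9]* //' /sys/devices/system/node/node0/meminfo \
        | awk -F': *' '$1 == "HugePages_Surp" { print $2 + 0; exit }'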
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4885260 kB' 'MemAvailable: 9507056 kB' 'Buffers: 37728 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238684 kB' 'Inactive: 3629856 kB' 'Active(anon): 138084 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100600 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 147524 kB' 'Mapped: 72868 kB' 'Shmem: 2616 kB' 'KReclaimable: 215212 kB' 'Slab: 306332 kB' 'SReclaimable: 215212 kB' 'SUnreclaim: 91120 kB' 'KernelStack: 4480 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 627064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14260 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.726 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:40.728 nr_hugepages=1024
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:40.728 resv_hugepages=0
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:40.728 surplus_hugepages=0
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:40.728 anon_hugepages=0
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
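The two arithmetic checks above are the point of this step: with the pool sized to 1024 pages, the test expects no surplus pages and no outstanding reservations before it re-reads HugePages_Total. A condensed sketch of that bookkeeping (meminfo() and the hard-coded 1024 are illustrative; the real script carries these values in its own variables):

    meminfo() { awk -F': *' -v k="$1" '$1 == k { print $2 + 0; exit }' /proc/meminfo; }

    expected=1024                             # pool size configured by this test run
    nr_hugepages=$(meminfo HugePages_Total)   # 1024 in the dump above
    surp=$(meminfo HugePages_Surp)            # 0
    resv=$(meminfo HugePages_Rsvd)            # 0

    # Mirrors the traced expressions: 1024 == nr_hugepages + surp + resv, and 1024 == nr_hugepages.
    if (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )); then
        echo "hugepage accounting OK"
    else
        echo "hugepage accounting mismatch" >&2
    fi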
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.728 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.729 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.729 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.729 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.729 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.729 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.729 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.729 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4885780 kB' 'MemAvailable: 9507576 kB' 'Buffers: 37728 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238944 kB' 'Inactive: 3629856 kB' 'Active(anon): 138344 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100600 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 147656 kB' 'Mapped: 72868 kB' 'Shmem: 2616 kB' 'KReclaimable: 215212 kB' 'Slab: 306332 kB' 'SReclaimable: 215212 kB' 'SUnreclaim: 91120 kB' 'KernelStack: 4616 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 629704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14292 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:40.729 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.729 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:40.730 11:16:15
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.731 11:16:15 
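The per-field scan that ends above with 'echo 1024' is the get_meminfo helper from setup/common.sh walking /proc/meminfo (or a node's sysfs meminfo) one 'Key: value' pair at a time. Below is a rough bash reconstruction inferred only from the traced commands; the function body and argument handling are simplified assumptions, not the verbatim SPDK source:

    shopt -s extglob   # the +([0-9]) pattern below needs extended globs

    get_meminfo() {
        # sketch: print the value of field $1, optionally scoped to NUMA node $2
        local get=$1 node=$2
        local var val mem_f mem
        mem_f=/proc/meminfo
        # when a node id is given and its sysfs meminfo exists, read that instead
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node N "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        # scan "Key: value kB" lines, skipping everything that is not $get
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # in this run: get_meminfo HugePages_Total -> 1024, get_meminfo HugePages_Surp 0 -> 0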
00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:40.730 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.731 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4886324 kB' 'MemUsed: 7364772 kB' 'Active: 1238456 kB' 'Inactive: 3629856 kB' 'Active(anon): 137856 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100600 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 4739472 kB' 'Mapped: 73128 kB' 'AnonPages: 147216 kB' 'Shmem: 2616 kB' 'KernelStack: 4568 kB' 'PageTables: 3356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215212 kB' 'Slab: 306332 kB' 'SReclaimable: 215212 kB' 'SUnreclaim: 91120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[00:04:40.731-40.732 setup/common.sh@31-32: IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue, repeated for each non-matching node0 meminfo field from MemTotal through HugePages_Free]
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:40.732 node0=1024 expecting 1024
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:40.732 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:40.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:04:41.253 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:41.253 INFO: Requested 512 hugepages but 1024 already allocated on node0
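Between 'get_nodes' and the '[[ 1024 == \1\0\2\4 ]]' test above, hugepages.sh is doing per-node bookkeeping: the global HugePages_Total has to equal the requested nr_hugepages plus surplus and reserved pages, and each node has to report the page count the test expects for it. The following is an illustrative, self-contained sketch of that accounting only, with the values observed in this run hard-coded and a made-up function name (verify_no_shrink_alloc); it is not the literal setup/hugepages.sh code:

    verify_no_shrink_alloc() {
        # observed in this run: 1024 pages total, no surplus, no reserved pages
        local nr_hugepages=1024 surp=0 resv=0
        local -a nodes_sys=(1024)    # pages the kernel reports per node (sysfs)
        local -a nodes_test=(1024)   # pages the test expects per node
        local node

        # the global total must account for requested + surplus + reserved pages
        (( nr_hugepages + surp + resv == 1024 )) || return 1

        # per node: fold in reserved/surplus pages, then compare with the kernel
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv + surp ))
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
            [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
        done
    }

The check passes here ('node0=1024 expecting 1024'), after which setup.sh is re-run with NRHUGE=512; as the INFO line shows, the existing 1024-page allocation is left in place rather than shrunk, which appears to be what the no_shrink_alloc case is exercising.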
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.253 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4883300 kB' 'MemAvailable: 9505096 kB' 'Buffers: 37728 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238556 kB' 'Inactive: 3629856 kB' 'Active(anon): 137956 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100600 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 148500 kB' 'Mapped: 73188 kB' 'Shmem: 2616 kB' 'KReclaimable: 215212 kB' 'Slab: 306348 kB' 'SReclaimable: 215212 kB' 'SUnreclaim: 91136 kB' 'KernelStack: 4744 kB' 'PageTables: 4048 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 633320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[00:04:41.253-41.255 setup/common.sh@31-32: IFS=': ' / read -r var val _ / [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue, repeated for each non-matching /proc/meminfo field from MemTotal through HardwareCorrupted]
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.255 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4883576 kB' 'MemAvailable: 9505372 kB' 'Buffers: 37728 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238400 kB' 'Inactive: 3629856 kB' 'Active(anon): 137800 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100600 kB' 'Inactive(file): 3628056 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 147808 kB' 'Mapped: 73140 kB' 'Shmem: 2616 kB' 'KReclaimable: 215212 kB' 'Slab: 306348 kB' 'SReclaimable: 215212 kB' 'SUnreclaim: 91136 kB' 'KernelStack: 4728 kB' 'PageTables: 4016 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 633116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[00:04:41.255-41.256 setup/common.sh@31-32: IFS=': ' / read -r var val _ / [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue, repeated for each non-matching /proc/meminfo field from MemTotal through FilePmdMapped]
00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r
var val _ 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- 
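The stretch of trace above and below is a single helper at work: get_meminfo picks /proc/meminfo (or a per-node meminfo file when a node number is passed), loads it with mapfile, strips any "Node N " prefix, then walks the lines with IFS=': ' until it finds the requested key. A minimal sketch of that pattern, reconstructed from the trace itself; treat it as an illustration of the logic, not a verbatim copy of setup/common.sh:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in this xtrace.
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node statistics live under sysfs when a node number is supplied.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node <N> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Lines look like "HugePages_Rsvd:    0"; split on ": " and print the
        # value of the first key that matches, skipping everything else.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd      # system-wide value, as traced here
    get_meminfo HugePages_Surp 0    # same key, restricted to NUMA node 0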
00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.256 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4883624 kB' 'MemAvailable: 9505436 kB' 'Buffers: 37728 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238392 kB' 'Inactive: 3629852 kB' 'Active(anon): 137788 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100604 kB' 'Inactive(file): 3628052 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 147620 kB' 'Mapped: 73140 kB' 'Shmem: 2616 kB' 'KReclaimable: 215228 kB' 'Slab: 306552 kB' 'SReclaimable: 215228 kB' 'SUnreclaim: 91324 kB' 'KernelStack: 4672 kB' 'PageTables: 3788 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 633116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[... the @32 test / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle repeats for every field in that snapshot, MemTotal through HugePages_Free; none of them match \H\u\g\e\P\a\g\e\s\_\R\s\v\d ...]
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
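A quick cross-check of the snapshot just printed, purely as a reader's aid: HugePages_Total is 1024 and Hugepagesize is 2048 kB, so the reserved pool is 1024 * 2048 kB = 2097152 kB, which is exactly the Hugetlb line in the same dump; HugePages_Free is also 1024, and both HugePages_Rsvd and HugePages_Surp are 0, which is the state the no_shrink_alloc checks below expect. The same arithmetic can be reproduced on a live box with standard tools (the commented values are the ones from this run):

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)    # 1024
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)     # 2048
    echo "$(( total * size_kb )) kB"                               # 2097152 kB == Hugetlb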
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:41.258 nr_hugepages=1024
00:04:41.258 resv_hugepages=0
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:41.258 surplus_hugepages=0
00:04:41.258 anon_hugepages=0
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.258 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884348 kB' 'MemAvailable: 9506160 kB' 'Buffers: 37728 kB' 'Cached: 4701744 kB' 'SwapCached: 0 kB' 'Active: 1238468 kB' 'Inactive: 3629852 kB' 'Active(anon): 137864 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100604 kB' 'Inactive(file): 3628052 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'AnonPages: 147432 kB' 'Mapped: 72896 kB' 'Shmem: 2616 kB' 'KReclaimable: 215228 kB' 'Slab: 306744 kB' 'SReclaimable: 215228 kB' 'SUnreclaim: 91516 kB' 'KernelStack: 4672 kB' 'PageTables: 3824 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5076972 kB' 'Committed_AS: 628288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14308 kB' 'VmallocChunk: 0 kB' 'Percpu: 8592 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[... the @32 test / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle repeats for every field, MemTotal through CmaFree; none of them match \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ...]
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
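What the trace has just verified, and is about to repeat for node 0, is a small piece of accounting: the suite asked for 1024 hugepages, and HugePages_Total must equal nr_hugepages + surplus + reserved, first system-wide and then per NUMA node read from sysfs (where each meminfo line carries a "Node 0 " prefix that get_meminfo strips). A compact sketch of that check, reusing the get_meminfo sketch above; it mirrors the logic traced from setup/hugepages.sh rather than quoting it:

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 in this run

    # System-wide: everything that was requested is still allocated.
    (( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage count" >&2

    # Then the same bookkeeping per NUMA node exposed under sysfs.
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        node_total=$(get_meminfo HugePages_Total "$n")
        node_surp=$(get_meminfo HugePages_Surp "$n")
        echo "node$n: total=$node_total surp=$node_surp"
    done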
00:04:41.260 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12251096 kB' 'MemFree: 4884072 kB' 'MemUsed: 7367024 kB' 'Active: 1238728 kB' 'Inactive: 3629852 kB' 'Active(anon): 138124 kB' 'Inactive(anon): 1800 kB' 'Active(file): 1100604 kB' 'Inactive(file): 3628052 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 332 kB' 'Writeback: 0 kB' 'FilePages: 4739472 kB' 'Mapped: 72896 kB' 'AnonPages: 147044 kB' 'Shmem: 2616 kB' 'KernelStack: 4604 kB' 'PageTables: 3824 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 215228 kB' 'Slab: 306744 kB' 'SReclaimable: 215228 kB' 'SUnreclaim: 91516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the @32 test / @32 continue / @31 IFS=': ' / @31 read -r var val _ cycle runs over the node0 fields MemTotal through KReclaimable; none of them match \H\u\g\e\P\a\g\e\s\_\S\u\r\p ...]
00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.261 11:16:15
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.261 node0=1024 expecting 1024 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:41.261 00:04:41.261 real 0m1.257s 00:04:41.261 user 0m0.514s 00:04:41.261 sys 0m0.809s 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.261 11:16:15 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.261 ************************************ 00:04:41.261 END TEST no_shrink_alloc 00:04:41.261 ************************************ 00:04:41.261 11:16:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:41.261 11:16:15 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:41.261 11:16:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:41.261 11:16:15 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:41.261 11:16:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.261 11:16:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:41.261 11:16:15 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:41.261 11:16:15 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:41.261 11:16:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:41.261 11:16:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:41.261 00:04:41.261 real 0m5.748s 00:04:41.261 user 0m2.008s 00:04:41.261 sys 0m3.896s 00:04:41.261 11:16:15 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.261 ************************************ 00:04:41.261 END TEST hugepages 00:04:41.261 ************************************ 00:04:41.261 11:16:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.261 11:16:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:41.261 11:16:15 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:41.261 11:16:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.261 11:16:15 setup.sh -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.261 11:16:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:41.261 ************************************ 00:04:41.261 START TEST driver 00:04:41.261 ************************************ 00:04:41.261 11:16:15 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:41.520 * Looking for test storage... 00:04:41.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:41.520 11:16:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:41.520 11:16:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:41.520 11:16:16 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.778 11:16:16 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:41.778 11:16:16 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.778 11:16:16 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.778 11:16:16 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:41.778 ************************************ 00:04:41.778 START TEST guess_driver 00:04:41.778 ************************************ 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio.ko 00:04:41.778 insmod /lib/modules/5.4.0-176-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:41.778 Looking for driver=uio_pci_generic 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # 
driver=uio_pci_generic 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.778 11:16:16 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.344 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:42.344 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:42.344 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.344 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:42.344 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:42.344 11:16:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.278 11:16:17 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:43.278 11:16:17 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:43.278 11:16:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.278 11:16:17 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:43.842 00:04:43.842 real 0m1.917s 00:04:43.842 user 0m0.434s 00:04:43.842 sys 0m1.460s 00:04:43.842 ************************************ 00:04:43.842 11:16:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.842 11:16:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:43.842 END TEST guess_driver 00:04:43.842 ************************************ 00:04:43.842 11:16:18 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:43.842 00:04:43.842 real 0m2.450s 00:04:43.842 user 0m0.728s 00:04:43.842 sys 0m1.725s 00:04:43.842 11:16:18 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.842 11:16:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:43.842 ************************************ 00:04:43.842 END TEST driver 00:04:43.842 ************************************ 00:04:43.842 11:16:18 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:43.842 11:16:18 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:43.842 11:16:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.842 11:16:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.842 11:16:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.842 ************************************ 00:04:43.842 START TEST devices 00:04:43.842 ************************************ 00:04:43.842 11:16:18 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:43.842 * Looking for test storage... 
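The guess_driver trace above settles on uio_pci_generic because no IOMMU groups are present and unsafe no-IOMMU mode is off; vfio-pci would only be picked if one of those conditions held. A minimal standalone sketch of that decision, assuming the same sysfs paths and that modprobe is available (this is not the test script itself):

#!/bin/bash
# Sketch of the driver pick traced above.
shopt -s nullglob   # so an empty /sys/kernel/iommu_groups/ yields a zero-length array
pick_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # uio_pci_generic counts as usable when modprobe can resolve it to a .ko,
    # which is the *\.\k\o* check visible in the trace.
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}
pick_driver

The subsequent setup reset then binds the NVMe controller to whichever driver was picked, which is why the status output later in the log shows the device as nvme -> uio_pci_generic.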
00:04:43.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:43.842 11:16:18 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:43.842 11:16:18 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:43.842 11:16:18 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:43.842 11:16:18 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:44.408 11:16:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:44.408 11:16:18 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:44.408 11:16:18 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:44.408 11:16:18 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:44.408 11:16:18 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:44.408 11:16:18 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:44.408 11:16:18 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:44.408 11:16:18 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:44.408 11:16:18 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:44.408 11:16:18 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:44.408 11:16:18 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:44.408 No valid GPT data, bailing 00:04:44.408 11:16:19 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:44.408 11:16:19 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:44.408 11:16:19 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:44.408 11:16:19 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:44.408 11:16:19 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:44.408 11:16:19 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:44.408 11:16:19 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:44.408 11:16:19 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:44.408 11:16:19 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:44.408 11:16:19 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:44.408 11:16:19 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:44.408 11:16:19 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:44.408 11:16:19 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:44.408 11:16:19 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.408 11:16:19 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.408 11:16:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:44.408 ************************************ 00:04:44.408 START TEST nvme_mount 00:04:44.408 ************************************ 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:44.408 11:16:19 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:45.781 Creating new GPT entries in memory. 00:04:45.781 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:45.781 other utilities. 00:04:45.781 11:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:45.781 11:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.781 11:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:45.781 11:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:45.781 11:16:20 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:46.716 Creating new GPT entries in memory. 
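The partition_drive step traced above wipes the GPT and creates a single partition under an flock so that concurrent sgdisk calls on the same disk cannot race. A condensed sketch of the same sequence, reusing the device path and sector range from the trace purely as placeholders:

#!/bin/bash
# Condensed sketch of the partitioning traced above, not the helper itself.
set -e
disk=/dev/nvme0n1                                 # placeholder, as in the trace
sgdisk "$disk" --zap-all                          # destroy any existing GPT/MBR data
flock "$disk" sgdisk "$disk" --new=1:2048:264191  # partition 1: sectors 2048..264191
# The test then waits for the matching "add" uevents via
# scripts/sync_dev_uevents.sh before touching /dev/nvme0n1p1; outside the test,
# `partprobe "$disk"` or `udevadm settle` serves a similar purpose.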
00:04:46.716 The operation has completed successfully. 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 103486 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.716 11:16:21 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.974 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.974 11:16:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.349 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.349 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:48.349 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:48.349 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.349 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:48.349 11:16:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.349 11:16:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:48.606 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:48.606 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:48.606 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:48.606 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.606 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:48.606 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.606 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:48.606 11:16:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:49.981 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:49.982 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 
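The format/mount/verify/cleanup cycle traced above boils down to a handful of commands; a minimal sketch, with the mount point and marker-file names taken from the trace as placeholders:

#!/bin/bash
# Minimal sketch of the nvme_mount cycle traced above.
set -e
part=/dev/nvme0n1p1                                        # placeholder partition
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount     # placeholder mount point
mkdir -p "$mnt"
mkfs.ext4 -qF "$part"          # quiet + force, as in the trace
mount "$part" "$mnt"
touch "$mnt/test_nvme"         # marker file the verify step checks for
mountpoint -q "$mnt"           # non-zero exit if the mount did not stick
[[ -e "$mnt/test_nvme" ]]
rm "$mnt/test_nvme"            # cleanup mirrors cleanup_nvme in the trace:
umount "$mnt"
wipefs --all "$part"           # clear the ext4 signature again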
00:04:49.982 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.982 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.982 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.982 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:49.982 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.982 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.982 11:16:24 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.240 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:50.240 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:50.240 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:50.240 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.240 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:50.240 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.240 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:50.240 11:16:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.637 11:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.637 11:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:51.638 11:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:51.638 11:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:51.638 11:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:51.638 11:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.638 11:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.638 11:16:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.638 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:51.638 00:04:51.638 real 0m6.942s 00:04:51.638 user 0m0.685s 00:04:51.638 sys 0m4.133s 00:04:51.638 11:16:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.638 11:16:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:51.638 ************************************ 00:04:51.638 END TEST nvme_mount 00:04:51.638 ************************************ 00:04:51.638 11:16:26 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:51.638 11:16:26 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:51.638 11:16:26 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.638 11:16:26 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.638 11:16:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:51.638 ************************************ 00:04:51.638 START TEST dm_mount 00:04:51.638 
************************************ 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:51.638 11:16:26 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:52.572 Creating new GPT entries in memory. 00:04:52.573 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:52.573 other utilities. 00:04:52.573 11:16:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:52.573 11:16:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.573 11:16:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.573 11:16:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.573 11:16:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:53.508 Creating new GPT entries in memory. 00:04:53.508 The operation has completed successfully. 00:04:53.508 11:16:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:53.508 11:16:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.508 11:16:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:53.508 11:16:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.508 11:16:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:54.954 The operation has completed successfully. 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 103990 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:54.954 11:16:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.328 
11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:56.328 11:16:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.328 11:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:56.328 11:16:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:57.702 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:57.702 00:04:57.702 real 0m6.140s 00:04:57.702 user 0m0.492s 00:04:57.702 sys 0m2.402s 00:04:57.702 ************************************ 00:04:57.702 END TEST dm_mount 00:04:57.702 ************************************ 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.702 11:16:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:57.702 11:16:32 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:57.702 11:16:32 setup.sh.devices -- 
setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:57.702 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:57.702 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:57.702 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:57.702 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:57.702 11:16:32 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:57.702 00:04:57.702 real 0m13.897s 00:04:57.702 user 0m1.637s 00:04:57.702 sys 0m6.824s 00:04:57.702 11:16:32 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.702 11:16:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:57.702 ************************************ 00:04:57.702 END TEST devices 00:04:57.702 ************************************ 00:04:57.702 11:16:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:57.702 00:04:57.702 real 0m27.286s 00:04:57.702 user 0m6.118s 00:04:57.702 sys 0m15.952s 00:04:57.702 11:16:32 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.702 11:16:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:57.702 ************************************ 00:04:57.702 END TEST setup.sh 00:04:57.702 ************************************ 00:04:57.702 11:16:32 -- common/autotest_common.sh@1142 -- # return 0 00:04:57.702 11:16:32 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:58.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:58.278 Hugepages 00:04:58.278 node hugesize free / total 00:04:58.278 node0 1048576kB 0 / 0 00:04:58.278 node0 2048kB 2048 / 2048 00:04:58.278 00:04:58.278 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:58.278 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:58.278 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:58.278 11:16:32 -- spdk/autotest.sh@130 -- # uname -s 00:04:58.278 11:16:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:58.278 11:16:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:58.278 11:16:32 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:58.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:04:58.848 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:59.781 11:16:34 -- common/autotest_common.sh@1532 -- # sleep 1 
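The dm_mount trace above layers a device-mapper target named nvme_dm_test over the two partitions, formats and mounts it, and tears it down with dmsetup remove --force followed by wipefs. The exact dm table is not shown in the trace, so the sketch below uses a simple linear concatenation of the two partitions purely for illustration:

#!/bin/bash
# Illustrative sketch of the dm_mount steps traced above; the linear table is
# an assumption, since the trace does not show the table fed to dmsetup.
set -e
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")      # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # e.g. dm-0, as in the trace
[[ -e /sys/class/block/${p1##*/}/holders/$dm ]]            # both partitions now hold dm-0
[[ -e /sys/class/block/${p2##*/}/holders/$dm ]]
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
# Teardown, mirroring cleanup_dm in the trace:
dmsetup remove --force nvme_dm_test
wipefs --all "$p1" "$p2"

Checking /sys/class/block/<partition>/holders/ is what lets the test (and setup.sh status, shown just below) recognize that the partitions are claimed by the dm device rather than free for rebinding.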
00:05:01.156 11:16:35 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:01.156 11:16:35 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:01.156 11:16:35 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:01.156 11:16:35 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:01.156 11:16:35 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:01.156 11:16:35 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:01.156 11:16:35 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:01.156 11:16:35 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:01.156 11:16:35 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:01.156 11:16:35 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:01.156 11:16:35 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:01.156 11:16:35 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:01.156 Waiting for block devices as requested 00:05:01.414 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:01.414 11:16:35 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:01.415 11:16:35 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:01.415 11:16:35 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:01.415 11:16:35 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:01.415 11:16:35 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:01.415 11:16:35 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:05:01.415 11:16:35 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:01.415 11:16:35 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:01.415 11:16:35 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:01.415 11:16:35 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:01.415 11:16:35 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:01.415 11:16:35 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:01.415 11:16:35 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:01.415 11:16:35 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:01.415 11:16:35 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:01.415 11:16:35 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:01.415 11:16:35 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:01.415 11:16:35 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:01.415 11:16:35 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:01.415 11:16:35 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:01.415 11:16:35 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:01.415 11:16:35 -- common/autotest_common.sh@1557 -- # continue 00:05:01.415 11:16:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:01.415 11:16:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:01.415 11:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:01.415 11:16:36 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:01.415 11:16:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.415 11:16:36 -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.415 11:16:36 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.674 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:01.932 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.867 11:16:37 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:02.867 11:16:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.867 11:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:02.867 11:16:37 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:02.867 11:16:37 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:02.867 11:16:37 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.867 11:16:37 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:02.867 11:16:37 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:02.867 11:16:37 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:02.867 11:16:37 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:02.867 11:16:37 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:02.867 11:16:37 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.867 11:16:37 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:02.867 11:16:37 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:03.128 11:16:37 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:03.128 11:16:37 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:05:03.128 11:16:37 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:03.128 11:16:37 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:03.128 11:16:37 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:03.128 11:16:37 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:03.128 11:16:37 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:03.128 11:16:37 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:03.128 11:16:37 -- common/autotest_common.sh@1593 -- # return 0 00:05:03.128 11:16:37 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:03.128 11:16:37 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:03.128 11:16:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.128 11:16:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.128 11:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:03.128 ************************************ 00:05:03.128 START TEST unittest 00:05:03.128 ************************************ 00:05:03.128 11:16:37 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:03.128 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:03.128 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:03.128 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:03.128 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:03.128 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
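The opal_revert_cleanup step above gathers the NVMe PCI addresses (gen_nvme.sh prints a JSON config whose params.traddr fields are the BDFs) and then keeps only controllers whose PCI device ID is 0x0a54; on this VM the lone controller reports 0x0010, so nothing is reverted. A rough stand-alone equivalent of that filter, reading sysfs directly instead of parsing the gen_nvme.sh output:

    #!/usr/bin/env bash
    # Sketch: print the BDFs of NVMe controllers whose PCI device ID matches $1.
    # Walks /sys/bus/pci/devices instead of calling gen_nvme.sh | jq.
    set -euo pipefail

    want_id=${1:-0x0a54}   # PCI device ID to look for

    for dev in /sys/bus/pci/devices/*; do
        # NVMe controllers carry PCI class 0x010802 (mass storage / NVM / NVMe).
        [[ $(<"$dev/class") == 0x010802* ]] || continue
        if [[ $(<"$dev/device") == "$want_id" ]]; then
            basename "$dev"
        fi
    done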
00:05:03.128 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:03.128 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:03.128 ++ rpc_py=rpc_cmd 00:05:03.128 ++ set -e 00:05:03.128 ++ shopt -s nullglob 00:05:03.128 ++ shopt -s extglob 00:05:03.128 ++ shopt -s inherit_errexit 00:05:03.128 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:03.128 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:03.128 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:03.128 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:03.128 +++ CONFIG_FIO_PLUGIN=y 00:05:03.128 +++ CONFIG_NVME_CUSE=y 00:05:03.128 +++ CONFIG_RAID5F=y 00:05:03.128 +++ CONFIG_LTO=n 00:05:03.128 +++ CONFIG_SMA=n 00:05:03.128 +++ CONFIG_ISAL=y 00:05:03.128 +++ CONFIG_OPENSSL_PATH= 00:05:03.128 +++ CONFIG_IDXD_KERNEL=n 00:05:03.128 +++ CONFIG_URING_PATH= 00:05:03.128 +++ CONFIG_DAOS=n 00:05:03.128 +++ CONFIG_DPDK_LIB_DIR= 00:05:03.128 +++ CONFIG_OCF=n 00:05:03.128 +++ CONFIG_EXAMPLES=y 00:05:03.128 +++ CONFIG_RDMA_PROV=verbs 00:05:03.128 +++ CONFIG_ISCSI_INITIATOR=y 00:05:03.128 +++ CONFIG_VTUNE=n 00:05:03.128 +++ CONFIG_DPDK_INC_DIR= 00:05:03.128 +++ CONFIG_CET=n 00:05:03.128 +++ CONFIG_TESTS=y 00:05:03.128 +++ CONFIG_APPS=y 00:05:03.128 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:03.128 +++ CONFIG_DAOS_DIR= 00:05:03.128 +++ CONFIG_CRYPTO_MLX5=n 00:05:03.128 +++ CONFIG_XNVME=n 00:05:03.128 +++ CONFIG_UNIT_TESTS=y 00:05:03.128 +++ CONFIG_FUSE=n 00:05:03.128 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:03.128 +++ CONFIG_OCF_PATH= 00:05:03.128 +++ CONFIG_WPDK_DIR= 00:05:03.128 +++ CONFIG_VFIO_USER=n 00:05:03.128 +++ CONFIG_MAX_LCORES=128 00:05:03.128 +++ CONFIG_ARCH=native 00:05:03.128 +++ CONFIG_TSAN=n 00:05:03.128 +++ CONFIG_VIRTIO=y 00:05:03.128 +++ CONFIG_HAVE_EVP_MAC=n 00:05:03.128 +++ CONFIG_IPSEC_MB=n 00:05:03.128 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:03.128 +++ CONFIG_DPDK_UADK=n 00:05:03.128 +++ CONFIG_ASAN=y 00:05:03.128 +++ CONFIG_SHARED=n 00:05:03.128 +++ CONFIG_VTUNE_DIR= 00:05:03.128 +++ CONFIG_RDMA_SET_TOS=y 00:05:03.128 +++ CONFIG_VBDEV_COMPRESS=n 00:05:03.128 +++ CONFIG_VFIO_USER_DIR= 00:05:03.128 +++ CONFIG_PGO_DIR= 00:05:03.128 +++ CONFIG_FUZZER_LIB= 00:05:03.128 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:03.128 +++ CONFIG_USDT=n 00:05:03.128 +++ CONFIG_HAVE_KEYUTILS=y 00:05:03.128 +++ CONFIG_URING_ZNS=n 00:05:03.128 +++ CONFIG_FC_PATH= 00:05:03.128 +++ CONFIG_COVERAGE=y 00:05:03.128 +++ CONFIG_CUSTOMOCF=n 00:05:03.128 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:03.128 +++ CONFIG_WERROR=y 00:05:03.128 +++ CONFIG_DEBUG=y 00:05:03.128 +++ CONFIG_RDMA=y 00:05:03.128 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:03.128 +++ CONFIG_FUZZER=n 00:05:03.128 +++ CONFIG_FC=n 00:05:03.128 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:03.128 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:03.128 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:03.128 +++ CONFIG_CROSS_PREFIX= 00:05:03.128 +++ CONFIG_PREFIX=/usr/local 00:05:03.128 +++ CONFIG_HAVE_LIBBSD=n 00:05:03.128 +++ CONFIG_UBSAN=y 00:05:03.128 +++ CONFIG_PGO_CAPTURE=n 00:05:03.128 +++ CONFIG_UBLK=n 00:05:03.128 +++ CONFIG_ISAL_CRYPTO=y 00:05:03.128 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:03.128 +++ CONFIG_CRYPTO=n 00:05:03.128 +++ CONFIG_RBD=n 00:05:03.128 +++ CONFIG_LIBDIR= 00:05:03.128 +++ CONFIG_IPSEC_MB_DIR= 00:05:03.128 +++ CONFIG_PGO_USE=n 00:05:03.128 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:03.128 +++ CONFIG_GOLANG=n 00:05:03.128 +++ CONFIG_VHOST=y 00:05:03.128 +++ CONFIG_IDXD=y 00:05:03.128 +++ CONFIG_AVAHI=n 00:05:03.128 
+++ CONFIG_URING=n 00:05:03.128 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:03.128 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:03.128 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:03.128 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:03.128 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:03.128 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:03.128 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:03.128 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:03.128 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:03.128 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:03.128 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:03.128 +++ VHOST_APP=("$_app_dir/vhost") 00:05:03.128 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:03.128 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:03.128 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:03.128 +++ [[ #ifndef SPDK_CONFIG_H 00:05:03.128 #define SPDK_CONFIG_H 00:05:03.128 #define SPDK_CONFIG_APPS 1 00:05:03.128 #define SPDK_CONFIG_ARCH native 00:05:03.128 #define SPDK_CONFIG_ASAN 1 00:05:03.128 #undef SPDK_CONFIG_AVAHI 00:05:03.128 #undef SPDK_CONFIG_CET 00:05:03.128 #define SPDK_CONFIG_COVERAGE 1 00:05:03.128 #define SPDK_CONFIG_CROSS_PREFIX 00:05:03.128 #undef SPDK_CONFIG_CRYPTO 00:05:03.128 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:03.128 #undef SPDK_CONFIG_CUSTOMOCF 00:05:03.128 #undef SPDK_CONFIG_DAOS 00:05:03.128 #define SPDK_CONFIG_DAOS_DIR 00:05:03.128 #define SPDK_CONFIG_DEBUG 1 00:05:03.128 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:03.128 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:05:03.128 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:03.128 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:03.128 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:03.128 #undef SPDK_CONFIG_DPDK_UADK 00:05:03.128 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:03.128 #define SPDK_CONFIG_EXAMPLES 1 00:05:03.128 #undef SPDK_CONFIG_FC 00:05:03.128 #define SPDK_CONFIG_FC_PATH 00:05:03.128 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:03.128 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:03.128 #undef SPDK_CONFIG_FUSE 00:05:03.128 #undef SPDK_CONFIG_FUZZER 00:05:03.128 #define SPDK_CONFIG_FUZZER_LIB 00:05:03.128 #undef SPDK_CONFIG_GOLANG 00:05:03.128 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:03.128 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:05:03.128 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:03.128 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:03.128 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:03.129 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:03.129 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:03.129 #define SPDK_CONFIG_IDXD 1 00:05:03.129 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:03.129 #undef SPDK_CONFIG_IPSEC_MB 00:05:03.129 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:03.129 #define SPDK_CONFIG_ISAL 1 00:05:03.129 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:03.129 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:03.129 #define SPDK_CONFIG_LIBDIR 00:05:03.129 #undef SPDK_CONFIG_LTO 00:05:03.129 #define SPDK_CONFIG_MAX_LCORES 128 00:05:03.129 #define SPDK_CONFIG_NVME_CUSE 1 00:05:03.129 #undef SPDK_CONFIG_OCF 00:05:03.129 #define SPDK_CONFIG_OCF_PATH 00:05:03.129 #define SPDK_CONFIG_OPENSSL_PATH 00:05:03.129 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:03.129 #define SPDK_CONFIG_PGO_DIR 00:05:03.129 #undef SPDK_CONFIG_PGO_USE 00:05:03.129 #define SPDK_CONFIG_PREFIX /usr/local 00:05:03.129 #define SPDK_CONFIG_RAID5F 1 00:05:03.129 #undef 
SPDK_CONFIG_RBD 00:05:03.129 #define SPDK_CONFIG_RDMA 1 00:05:03.129 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:03.129 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:03.129 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:03.129 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:03.129 #undef SPDK_CONFIG_SHARED 00:05:03.129 #undef SPDK_CONFIG_SMA 00:05:03.129 #define SPDK_CONFIG_TESTS 1 00:05:03.129 #undef SPDK_CONFIG_TSAN 00:05:03.129 #undef SPDK_CONFIG_UBLK 00:05:03.129 #define SPDK_CONFIG_UBSAN 1 00:05:03.129 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:03.129 #undef SPDK_CONFIG_URING 00:05:03.129 #define SPDK_CONFIG_URING_PATH 00:05:03.129 #undef SPDK_CONFIG_URING_ZNS 00:05:03.129 #undef SPDK_CONFIG_USDT 00:05:03.129 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:03.129 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:03.129 #undef SPDK_CONFIG_VFIO_USER 00:05:03.129 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:03.129 #define SPDK_CONFIG_VHOST 1 00:05:03.129 #define SPDK_CONFIG_VIRTIO 1 00:05:03.129 #undef SPDK_CONFIG_VTUNE 00:05:03.129 #define SPDK_CONFIG_VTUNE_DIR 00:05:03.129 #define SPDK_CONFIG_WERROR 1 00:05:03.129 #define SPDK_CONFIG_WPDK_DIR 00:05:03.129 #undef SPDK_CONFIG_XNVME 00:05:03.129 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:03.129 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:03.129 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:03.129 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:03.129 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.129 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.129 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:03.129 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:03.129 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:03.129 ++++ export PATH 00:05:03.129 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:03.129 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:03.129 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:03.129 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:03.129 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:03.129 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:03.129 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:03.129 +++ TEST_TAG=N/A 00:05:03.129 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:03.129 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 
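applications.sh above pattern-matches the generated include/spdk/config.h against '#define SPDK_CONFIG_DEBUG' before looking at SPDK_AUTOTEST_DEBUG_APPS. The same idea as a small reusable helper; the default header path below is only an example:

    # Sketch: succeed if the given SPDK_CONFIG_* flag is defined in config.h.
    # The default header location is an assumption for this example.
    config_has() {
        local flag=$1
        local header=${2:-/home/vagrant/spdk_repo/spdk/include/spdk/config.h}
        grep -Eq "^#define ${flag}( |$)" "$header"
    }

    # Example:
    if config_has SPDK_CONFIG_DEBUG; then
        echo "debug build detected"
    fi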
00:05:03.129 ++++ uname -s 00:05:03.129 +++ PM_OS=Linux 00:05:03.129 +++ MONITOR_RESOURCES_SUDO=() 00:05:03.129 +++ declare -A MONITOR_RESOURCES_SUDO 00:05:03.129 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:03.129 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:03.129 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:03.129 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:03.129 +++ SUDO[0]= 00:05:03.129 +++ SUDO[1]='sudo -E' 00:05:03.129 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:03.129 +++ [[ Linux == FreeBSD ]] 00:05:03.129 +++ [[ Linux == Linux ]] 00:05:03.129 +++ [[ QEMU != QEMU ]] 00:05:03.129 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:05:03.129 ++ : 1 00:05:03.129 ++ export RUN_NIGHTLY 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_RUN_VALGRIND 00:05:03.129 ++ : 1 00:05:03.129 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:03.129 ++ : 1 00:05:03.129 ++ export SPDK_TEST_UNITTEST 00:05:03.129 ++ : 00:05:03.129 ++ export SPDK_TEST_AUTOBUILD 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_RELEASE_BUILD 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_ISAL 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_ISCSI 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:03.129 ++ : 1 00:05:03.129 ++ export SPDK_TEST_NVME 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_NVME_PMR 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_NVME_BP 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_NVME_CLI 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_NVME_CUSE 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_NVME_FDP 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_NVMF 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_VFIOUSER 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_FUZZER 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_FUZZER_SHORT 00:05:03.129 ++ : rdma 00:05:03.129 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_RBD 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_VHOST 00:05:03.129 ++ : 1 00:05:03.129 ++ export SPDK_TEST_BLOCKDEV 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_IOAT 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_BLOBFS 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_VHOST_INIT 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_LVOL 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:03.129 ++ : 1 00:05:03.129 ++ export SPDK_RUN_ASAN 00:05:03.129 ++ : 1 00:05:03.129 ++ export SPDK_RUN_UBSAN 00:05:03.129 ++ : 00:05:03.129 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_RUN_NON_ROOT 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_CRYPTO 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_FTL 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_OCF 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_VMD 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_OPAL 00:05:03.129 ++ : 00:05:03.129 ++ export SPDK_TEST_NATIVE_DPDK 00:05:03.129 ++ : true 00:05:03.129 ++ export SPDK_AUTOTEST_X 00:05:03.129 ++ : 1 00:05:03.129 ++ export SPDK_TEST_RAID5 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_URING 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_USDT 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_USE_IGB_UIO 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_SCHEDULER 00:05:03.129 ++ : 0 
00:05:03.129 ++ export SPDK_TEST_SCANBUILD 00:05:03.129 ++ : 00:05:03.129 ++ export SPDK_TEST_NVMF_NICS 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_SMA 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_DAOS 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_XNVME 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_ACCEL_DSA 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_ACCEL_IAA 00:05:03.129 ++ : 00:05:03.129 ++ export SPDK_TEST_FUZZER_TARGET 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_TEST_NVMF_MDNS 00:05:03.129 ++ : 0 00:05:03.129 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:03.129 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:03.129 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:03.129 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:03.129 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:05:03.129 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:03.129 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:03.129 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:03.129 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:03.129 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:03.129 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:03.129 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:03.129 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:03.129 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:03.129 ++ PYTHONDONTWRITEBYTECODE=1 00:05:03.129 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:03.129 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:03.129 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:03.129 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:03.129 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:03.129 ++ rm -rf /var/tmp/asan_suppression_file 00:05:03.129 ++ cat 00:05:03.129 ++ echo leak:libfuse3.so 00:05:03.129 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:03.130 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:03.130 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:03.130 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:03.130 ++ '[' -z /var/spdk/dependencies ']' 00:05:03.130 ++ export DEPENDENCY_DIR 00:05:03.130 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:03.130 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:03.130 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:03.130 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:03.130 ++ export QEMU_BIN= 
00:05:03.130 ++ QEMU_BIN= 00:05:03.130 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:03.130 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:03.130 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:03.130 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:03.130 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:03.130 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:03.130 ++ '[' 0 -eq 0 ']' 00:05:03.130 ++ export valgrind= 00:05:03.130 ++ valgrind= 00:05:03.130 +++ uname -s 00:05:03.130 ++ '[' Linux = Linux ']' 00:05:03.130 ++ HUGEMEM=4096 00:05:03.130 ++ export CLEAR_HUGE=yes 00:05:03.130 ++ CLEAR_HUGE=yes 00:05:03.130 ++ [[ 0 -eq 1 ]] 00:05:03.130 ++ [[ 0 -eq 1 ]] 00:05:03.130 ++ MAKE=make 00:05:03.130 +++ nproc 00:05:03.130 ++ MAKEFLAGS=-j10 00:05:03.130 ++ export HUGEMEM=4096 00:05:03.130 ++ HUGEMEM=4096 00:05:03.130 ++ NO_HUGE=() 00:05:03.130 ++ TEST_MODE= 00:05:03.130 ++ [[ -z '' ]] 00:05:03.130 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:03.130 ++ exec 00:05:03.130 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:03.130 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:03.130 ++ set_test_storage 2147483648 00:05:03.130 ++ [[ -v testdir ]] 00:05:03.130 ++ local requested_size=2147483648 00:05:03.130 ++ local mount target_dir 00:05:03.130 ++ local -A mounts fss sizes avails uses 00:05:03.130 ++ local source fs size avail mount use 00:05:03.130 ++ local storage_fallback storage_candidates 00:05:03.130 +++ mktemp -udt spdk.XXXXXX 00:05:03.130 ++ storage_fallback=/tmp/spdk.w76JCe 00:05:03.130 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:03.130 ++ [[ -n '' ]] 00:05:03.130 ++ [[ -n '' ]] 00:05:03.130 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.w76JCe/tests/unit /tmp/spdk.w76JCe 00:05:03.130 ++ requested_size=2214592512 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 +++ df -T 00:05:03.130 +++ grep -v Filesystem 00:05:03.130 ++ mounts["$mount"]=udev 00:05:03.130 ++ fss["$mount"]=devtmpfs 00:05:03.130 ++ avails["$mount"]=6224461824 00:05:03.130 ++ sizes["$mount"]=6224461824 00:05:03.130 ++ uses["$mount"]=0 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=tmpfs 00:05:03.130 ++ fss["$mount"]=tmpfs 00:05:03.130 ++ avails["$mount"]=1253408768 00:05:03.130 ++ sizes["$mount"]=1254514688 00:05:03.130 ++ uses["$mount"]=1105920 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=/dev/vda1 00:05:03.130 ++ fss["$mount"]=ext4 00:05:03.130 ++ avails["$mount"]=10434117632 00:05:03.130 ++ sizes["$mount"]=20616794112 00:05:03.130 ++ uses["$mount"]=10165899264 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=tmpfs 00:05:03.130 ++ fss["$mount"]=tmpfs 00:05:03.130 ++ avails["$mount"]=6272561152 00:05:03.130 ++ sizes["$mount"]=6272561152 00:05:03.130 ++ uses["$mount"]=0 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=tmpfs 00:05:03.130 ++ fss["$mount"]=tmpfs 00:05:03.130 ++ avails["$mount"]=5242880 00:05:03.130 ++ sizes["$mount"]=5242880 00:05:03.130 ++ uses["$mount"]=0 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=tmpfs 00:05:03.130 ++ 
fss["$mount"]=tmpfs 00:05:03.130 ++ avails["$mount"]=6272561152 00:05:03.130 ++ sizes["$mount"]=6272561152 00:05:03.130 ++ uses["$mount"]=0 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=/dev/loop1 00:05:03.130 ++ fss["$mount"]=squashfs 00:05:03.130 ++ avails["$mount"]=0 00:05:03.130 ++ sizes["$mount"]=96337920 00:05:03.130 ++ uses["$mount"]=96337920 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=/dev/loop0 00:05:03.130 ++ fss["$mount"]=squashfs 00:05:03.130 ++ avails["$mount"]=0 00:05:03.130 ++ sizes["$mount"]=67108864 00:05:03.130 ++ uses["$mount"]=67108864 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=/dev/vda15 00:05:03.130 ++ fss["$mount"]=vfat 00:05:03.130 ++ avails["$mount"]=103089152 00:05:03.130 ++ sizes["$mount"]=109422592 00:05:03.130 ++ uses["$mount"]=6334464 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=/dev/loop2 00:05:03.130 ++ fss["$mount"]=squashfs 00:05:03.130 ++ avails["$mount"]=0 00:05:03.130 ++ sizes["$mount"]=41025536 00:05:03.130 ++ uses["$mount"]=41025536 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=tmpfs 00:05:03.130 ++ fss["$mount"]=tmpfs 00:05:03.130 ++ avails["$mount"]=1254510592 00:05:03.130 ++ sizes["$mount"]=1254510592 00:05:03.130 ++ uses["$mount"]=0 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:05:03.130 ++ fss["$mount"]=fuse.sshfs 00:05:03.130 ++ avails["$mount"]=97266778112 00:05:03.130 ++ sizes["$mount"]=105088212992 00:05:03.130 ++ uses["$mount"]=2436001792 00:05:03.130 ++ read -r source fs size use avail _ mount 00:05:03.130 ++ printf '* Looking for test storage...\n' 00:05:03.130 * Looking for test storage... 
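set_test_storage above walks df -T output and records, per mount point, the filesystem type, total size and free space in parallel associative arrays; the selection step that follows compares the free space on each candidate directory's mount against the requested scratch size. A condensed version of that bookkeeping with illustrative array names (the trace shows plain 'df -T'; -B1 is used here so the numbers are unambiguously bytes):

    #!/usr/bin/env bash
    # Sketch: record per-mount filesystem type / size / free bytes, then pick
    # the first candidate directory with enough room. Names are illustrative.
    set -euo pipefail

    declare -A fstype size_b avail_b
    requested=$(( 2 * 1024 * 1024 * 1024 ))   # ~2 GiB of scratch space

    # df -T -B1 columns: source, fstype, size, used, avail, use%, mount point.
    while read -r _src fs size _used avail _pct mount; do
        fstype[$mount]=$fs
        size_b[$mount]=$size
        avail_b[$mount]=$avail
    done < <(df -T -B1 | tail -n +2)

    for dir in /home/vagrant/spdk_repo/spdk/test/unit /tmp; do
        mount=$(df -B1 --output=target "$dir" | tail -n 1)
        # tmpfs/ramfs mounts get special handling in the real helper; here we
        # simply require a non-tmpfs mount with enough free space.
        if [[ ${fstype[$mount]:-} != tmpfs && ${avail_b[$mount]:-0} -ge $requested ]]; then
            echo "using $dir on $mount (${avail_b[$mount]} bytes free)"
            break
        fi
    done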
00:05:03.130 ++ local target_space new_size 00:05:03.130 ++ for target_dir in "${storage_candidates[@]}" 00:05:03.130 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:03.130 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:03.130 ++ mount=/ 00:05:03.130 ++ target_space=10434117632 00:05:03.130 ++ (( target_space == 0 || target_space < requested_size )) 00:05:03.130 ++ (( target_space >= requested_size )) 00:05:03.130 ++ [[ ext4 == tmpfs ]] 00:05:03.130 ++ [[ ext4 == ramfs ]] 00:05:03.130 ++ [[ / == / ]] 00:05:03.130 ++ new_size=12380491776 00:05:03.130 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:03.130 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:03.130 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:03.130 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:03.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:03.130 ++ return 0 00:05:03.130 ++ set -o errtrace 00:05:03.130 ++ shopt -s extdebug 00:05:03.130 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:03.130 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:03.130 11:16:37 unittest -- common/autotest_common.sh@1687 -- # true 00:05:03.130 11:16:37 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:05:03.130 11:16:37 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:03.130 11:16:37 unittest -- common/autotest_common.sh@29 -- # exec 00:05:03.130 11:16:37 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:03.130 11:16:37 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:03.130 11:16:37 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:03.130 11:16:37 unittest -- common/autotest_common.sh@18 -- # set -x 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@181 -- # hash lcov 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:05:03.130 --rc lcov_branch_coverage=1 00:05:03.130 --rc lcov_function_coverage=1 00:05:03.130 --rc genhtml_branch_coverage=1 00:05:03.130 --rc genhtml_function_coverage=1 00:05:03.130 --rc genhtml_legend=1 00:05:03.130 --rc geninfo_all_blocks=1 00:05:03.130 ' 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@201 -- # 
LCOV_OPTS=' 00:05:03.130 --rc lcov_branch_coverage=1 00:05:03.130 --rc lcov_function_coverage=1 00:05:03.130 --rc genhtml_branch_coverage=1 00:05:03.130 --rc genhtml_function_coverage=1 00:05:03.130 --rc genhtml_legend=1 00:05:03.130 --rc geninfo_all_blocks=1 00:05:03.130 ' 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:05:03.130 --rc lcov_branch_coverage=1 00:05:03.130 --rc lcov_function_coverage=1 00:05:03.130 --rc genhtml_branch_coverage=1 00:05:03.130 --rc genhtml_function_coverage=1 00:05:03.130 --rc genhtml_legend=1 00:05:03.130 --rc geninfo_all_blocks=1 00:05:03.130 --no-external' 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:05:03.130 --rc lcov_branch_coverage=1 00:05:03.130 --rc lcov_function_coverage=1 00:05:03.130 --rc genhtml_branch_coverage=1 00:05:03.130 --rc genhtml_function_coverage=1 00:05:03.130 --rc genhtml_legend=1 00:05:03.130 --rc geninfo_all_blocks=1 00:05:03.130 --no-external' 00:05:03.130 11:16:37 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no 
functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:05.034 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:05.034 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:05.292 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:05.292 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:05.292 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:05.292 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:05.292 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:05.292 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:05.292 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:05.292 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:05.292 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:05.292 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:05.293 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:05.293 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:05.293 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:05.551 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:05.551 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:05.551 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:01.810 11:17:31 unittest -- unit/unittest.sh@208 -- # uname -m 00:06:01.810 11:17:31 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:06:01.810 11:17:31 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:01.810 ************************************ 00:06:01.810 START TEST unittest_pci_event 00:06:01.810 ************************************ 00:06:01.810 11:17:31 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:01.810 00:06:01.810 00:06:01.810 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.810 
http://cunit.sourceforge.net/ 00:06:01.810 00:06:01.810 00:06:01.810 Suite: pci_event 00:06:01.810 Test: test_pci_parse_event ...[2024-07-13 11:17:31.123297] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:01.810 [2024-07-13 11:17:31.123735] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:01.810 passed 00:06:01.810 00:06:01.810 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.810 suites 1 1 n/a 0 0 00:06:01.810 tests 1 1 1 0 0 00:06:01.810 asserts 15 15 15 0 n/a 00:06:01.810 00:06:01.810 Elapsed time = 0.001 seconds 00:06:01.810 00:06:01.810 real 0m0.040s 00:06:01.810 user 0m0.021s 00:06:01.810 sys 0m0.016s 00:06:01.810 11:17:31 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.810 11:17:31 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:06:01.810 ************************************ 00:06:01.810 END TEST unittest_pci_event 00:06:01.810 ************************************ 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:01.810 11:17:31 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:01.810 ************************************ 00:06:01.810 START TEST unittest_include 00:06:01.810 ************************************ 00:06:01.810 11:17:31 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:01.810 00:06:01.810 00:06:01.810 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.810 http://cunit.sourceforge.net/ 00:06:01.810 00:06:01.810 00:06:01.810 Suite: histogram 00:06:01.810 Test: histogram_test ...passed 00:06:01.810 Test: histogram_merge ...passed 00:06:01.810 00:06:01.810 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.810 suites 1 1 n/a 0 0 00:06:01.810 tests 2 2 2 0 0 00:06:01.810 asserts 50 50 50 0 n/a 00:06:01.810 00:06:01.810 Elapsed time = 0.006 seconds 00:06:01.810 00:06:01.810 real 0m0.037s 00:06:01.810 user 0m0.032s 00:06:01.810 sys 0m0.004s 00:06:01.810 11:17:31 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.810 ************************************ 00:06:01.810 END TEST unittest_include 00:06:01.810 11:17:31 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:06:01.810 ************************************ 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:01.810 11:17:31 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.810 11:17:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:01.810 ************************************ 00:06:01.810 START TEST unittest_bdev 00:06:01.810 ************************************ 00:06:01.811 11:17:31 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 
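Each unit-test binary in this run is launched through run_test, which prints the START/END banners around the timed run and restores the xtrace state afterwards; unittest_pci_event and unittest_include above, and unittest_bdev below, all follow that pattern. A stripped-down stand-in for the wrapper (not the actual autotest_common.sh implementation):

    # Sketch of a run_test-style wrapper; the real helper in autotest_common.sh
    # also handles xtrace toggling and test-name bookkeeping.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        local rc=0
        time "$@" || rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }

    # Usage, mirroring the invocation traced above:
    run_test unittest_pci_event \
        /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut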
00:06:01.811 11:17:31 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:01.811 00:06:01.811 00:06:01.811 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.811 http://cunit.sourceforge.net/ 00:06:01.811 00:06:01.811 00:06:01.811 Suite: bdev 00:06:01.811 Test: bytes_to_blocks_test ...passed 00:06:01.811 Test: num_blocks_test ...passed 00:06:01.811 Test: io_valid_test ...passed 00:06:01.811 Test: open_write_test ...[2024-07-13 11:17:31.381045] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:01.811 [2024-07-13 11:17:31.381573] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:01.811 [2024-07-13 11:17:31.382014] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:01.811 passed 00:06:01.811 Test: claim_test ...passed 00:06:01.811 Test: alias_add_del_test ...[2024-07-13 11:17:31.475373] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:06:01.811 [2024-07-13 11:17:31.475666] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:01.811 [2024-07-13 11:17:31.475787] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:01.811 passed 00:06:01.811 Test: get_device_stat_test ...passed 00:06:01.811 Test: bdev_io_types_test ...passed 00:06:01.811 Test: bdev_io_wait_test ...passed 00:06:01.811 Test: bdev_io_spans_split_test ...passed 00:06:01.811 Test: bdev_io_boundary_split_test ...passed 00:06:01.811 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-13 11:17:31.648242] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:01.811 passed 00:06:01.811 Test: bdev_io_mix_split_test ...passed 00:06:01.811 Test: bdev_io_split_with_io_wait ...passed 00:06:01.811 Test: bdev_io_write_unit_split_test ...[2024-07-13 11:17:31.754336] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:01.811 [2024-07-13 11:17:31.754588] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:01.811 [2024-07-13 11:17:31.754673] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:01.811 [2024-07-13 11:17:31.754845] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:01.811 passed 00:06:01.811 Test: bdev_io_alignment_with_boundary ...passed 00:06:01.811 Test: bdev_io_alignment ...passed 00:06:01.811 Test: bdev_histograms ...passed 00:06:01.811 Test: bdev_write_zeroes ...passed 00:06:01.811 Test: bdev_compare_and_write ...passed 00:06:01.811 Test: bdev_compare ...passed 00:06:01.811 Test: bdev_compare_emulated ...passed 00:06:01.811 Test: bdev_zcopy_write ...passed 00:06:01.811 Test: bdev_zcopy_read ...passed 00:06:01.811 Test: bdev_open_while_hotremove ...passed 00:06:01.811 Test: bdev_close_while_hotremove ...passed 00:06:01.811 Test: bdev_open_ext_test ...[2024-07-13 11:17:32.184901] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:01.811 passed 00:06:01.811 Test: bdev_open_ext_unregister ...[2024-07-13 11:17:32.185456] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:01.811 passed 00:06:01.811 Test: bdev_set_io_timeout ...passed 00:06:01.811 Test: bdev_set_qd_sampling ...passed 00:06:01.811 Test: lba_range_overlap ...passed 00:06:01.811 Test: lock_lba_range_check_ranges ...passed 00:06:01.811 Test: lock_lba_range_with_io_outstanding ...passed 00:06:01.811 Test: lock_lba_range_overlapped ...passed 00:06:01.811 Test: bdev_quiesce ...[2024-07-13 11:17:32.380349] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10107:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:06:01.811 passed 00:06:01.811 Test: bdev_io_abort ...passed 00:06:01.811 Test: bdev_unmap ...passed 00:06:01.811 Test: bdev_write_zeroes_split_test ...passed 00:06:01.811 Test: bdev_set_options_test ...[2024-07-13 11:17:32.512195] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:01.811 passed 00:06:01.811 Test: bdev_get_memory_domains ...passed 00:06:01.811 Test: bdev_io_ext ...passed 00:06:01.811 Test: bdev_io_ext_no_opts ...passed 00:06:01.811 Test: bdev_io_ext_invalid_opts ...passed 00:06:01.811 Test: bdev_io_ext_split ...passed 00:06:01.811 Test: bdev_io_ext_bounce_buffer ...passed 00:06:01.811 Test: bdev_register_uuid_alias ...[2024-07-13 11:17:32.722288] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 27b9683a-3e67-4bd6-a8f4-cd8fc00b2ff8 already exists 00:06:01.811 [2024-07-13 11:17:32.722507] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:27b9683a-3e67-4bd6-a8f4-cd8fc00b2ff8 alias for bdev bdev0 00:06:01.811 passed 00:06:01.811 Test: bdev_unregister_by_name ...[2024-07-13 11:17:32.743822] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:01.811 [2024-07-13 11:17:32.743980] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7982:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:06:01.811 passed 00:06:01.811 Test: for_each_bdev_test ...passed 00:06:01.811 Test: bdev_seek_test ...passed 00:06:01.811 Test: bdev_copy ...passed 00:06:01.811 Test: bdev_copy_split_test ...passed 00:06:01.811 Test: examine_locks ...passed 00:06:01.811 Test: claim_v2_rwo ...[2024-07-13 11:17:32.855208] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.855421] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8708:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.855585] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.855788] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.855951] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.856203] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:01.811 passed 00:06:01.811 Test: claim_v2_rom ...[2024-07-13 11:17:32.856715] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.856910] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.857073] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.857250] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.857449] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8746:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:01.811 [2024-07-13 11:17:32.857648] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:01.811 passed 00:06:01.811 Test: claim_v2_rwm ...[2024-07-13 11:17:32.858161] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:01.811 [2024-07-13 11:17:32.858359] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.858523] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.858688] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.858872] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.859028] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8796:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.859265] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:01.811 passed 00:06:01.811 Test: claim_v2_existing_writer ...[2024-07-13 11:17:32.859791] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:01.811 [2024-07-13 11:17:32.859917] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:01.811 passed 00:06:01.811 Test: claim_v2_existing_v1 ...[2024-07-13 11:17:32.860363] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.860487] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.860560] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:01.811 passed 00:06:01.811 Test: claim_v1_existing_v2 ...[2024-07-13 11:17:32.860935] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.861185] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:01.811 [2024-07-13 11:17:32.861351] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:01.811 passed 00:06:01.811 Test: examine_claimed ...[2024-07-13 11:17:32.862019] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:01.811 passed 00:06:01.811 00:06:01.811 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.811 suites 1 1 n/a 0 0 00:06:01.811 tests 59 59 59 0 0 00:06:01.811 asserts 4599 4599 4599 0 n/a 00:06:01.811 00:06:01.811 Elapsed time = 1.536 seconds 00:06:01.811 11:17:32 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:01.811 00:06:01.811 00:06:01.811 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.811 http://cunit.sourceforge.net/ 00:06:01.811 00:06:01.811 00:06:01.811 Suite: nvme 00:06:01.811 Test: test_create_ctrlr ...passed 00:06:01.811 Test: test_reset_ctrlr ...[2024-07-13 11:17:32.914393] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:01.811 passed 00:06:01.811 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:01.811 Test: test_failover_ctrlr ...passed 00:06:01.812 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-13 11:17:32.917042] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.917249] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.917432] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 passed 00:06:01.812 Test: test_pending_reset ...[2024-07-13 11:17:32.919065] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.919363] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 passed 00:06:01.812 Test: test_attach_ctrlr ...[2024-07-13 11:17:32.920507] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:01.812 passed 00:06:01.812 Test: test_aer_cb ...passed 00:06:01.812 Test: test_submit_nvme_cmd ...passed 00:06:01.812 Test: test_add_remove_trid ...passed 00:06:01.812 Test: test_abort ...[2024-07-13 11:17:32.924005] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:01.812 passed 00:06:01.812 Test: test_get_io_qpair ...passed 00:06:01.812 Test: test_bdev_unregister ...passed 00:06:01.812 Test: test_compare_ns ...passed 00:06:01.812 Test: test_init_ana_log_page ...passed 00:06:01.812 Test: test_get_memory_domains ...passed 00:06:01.812 Test: test_reconnect_qpair ...[2024-07-13 11:17:32.926880] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 passed 00:06:01.812 Test: test_create_bdev_ctrlr ...[2024-07-13 11:17:32.927434] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:01.812 passed 00:06:01.812 Test: test_add_multi_ns_to_bdev ...[2024-07-13 11:17:32.928831] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:01.812 passed 00:06:01.812 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:01.812 Test: test_admin_path ...passed 00:06:01.812 Test: test_reset_bdev_ctrlr ...passed 00:06:01.812 Test: test_find_io_path ...passed 00:06:01.812 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:01.812 Test: test_retry_io_for_io_path_error ...passed 00:06:01.812 Test: test_retry_io_count ...passed 00:06:01.812 Test: test_concurrent_read_ana_log_page ...passed 00:06:01.812 Test: test_retry_io_for_ana_error ...passed 00:06:01.812 Test: test_check_io_error_resiliency_params ...[2024-07-13 11:17:32.936271] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:06:01.812 [2024-07-13 11:17:32.936345] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:01.812 [2024-07-13 11:17:32.936371] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:01.812 [2024-07-13 11:17:32.936420] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:01.812 [2024-07-13 11:17:32.936442] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:01.812 [2024-07-13 11:17:32.936502] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:01.812 [2024-07-13 11:17:32.936525] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:01.812 passed 00:06:01.812 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-13 11:17:32.936568] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:01.812 [2024-07-13 11:17:32.936603] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:01.812 passed 00:06:01.812 Test: test_reconnect_ctrlr ...[2024-07-13 11:17:32.937495] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.937633] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.937932] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.938075] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.938200] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 passed 00:06:01.812 Test: test_retry_failover_ctrlr ...[2024-07-13 11:17:32.938551] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 passed 00:06:01.812 Test: test_fail_path ...[2024-07-13 11:17:32.939102] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.939257] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:01.812 [2024-07-13 11:17:32.939378] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.939510] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.939658] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 passed 00:06:01.812 Test: test_nvme_ns_cmp ...passed 00:06:01.812 Test: test_ana_transition ...passed 00:06:01.812 Test: test_set_preferred_path ...passed 00:06:01.812 Test: test_find_next_io_path ...passed 00:06:01.812 Test: test_find_io_path_min_qd ...passed 00:06:01.812 Test: test_disable_auto_failback ...[2024-07-13 11:17:32.941375] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 passed 00:06:01.812 Test: test_set_multipath_policy ...passed 00:06:01.812 Test: test_uuid_generation ...passed 00:06:01.812 Test: test_retry_io_to_same_path ...passed 00:06:01.812 Test: test_race_between_reset_and_disconnected ...passed 00:06:01.812 Test: test_ctrlr_op_rpc ...passed 00:06:01.812 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:01.812 Test: test_disable_enable_ctrlr ...[2024-07-13 11:17:32.945271] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 [2024-07-13 11:17:32.945450] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:01.812 passed 00:06:01.812 Test: test_delete_ctrlr_done ...passed 00:06:01.812 Test: test_ns_remove_during_reset ...passed 00:06:01.812 Test: test_io_path_is_current ...passed 00:06:01.812 00:06:01.812 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.812 suites 1 1 n/a 0 0 00:06:01.812 tests 49 49 49 0 0 00:06:01.812 asserts 3577 3577 3577 0 n/a 00:06:01.812 00:06:01.812 Elapsed time = 0.034 seconds 00:06:01.812 11:17:32 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:01.812 00:06:01.812 00:06:01.812 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.812 http://cunit.sourceforge.net/ 00:06:01.812 00:06:01.812 Test Options 00:06:01.812 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:01.812 00:06:01.812 Suite: raid 00:06:01.812 Test: test_create_raid ...passed 00:06:01.812 Test: test_create_raid_superblock ...passed 00:06:01.812 Test: test_delete_raid ...passed 00:06:01.812 Test: test_create_raid_invalid_args ...[2024-07-13 11:17:32.992231] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:01.812 [2024-07-13 11:17:32.992749] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:01.812 [2024-07-13 11:17:32.993412] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:01.812 [2024-07-13 11:17:32.993663] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:01.812 [2024-07-13 
11:17:32.993765] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:01.812 [2024-07-13 11:17:32.994794] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:01.812 [2024-07-13 11:17:32.994867] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:01.812 passed 00:06:01.812 Test: test_delete_raid_invalid_args ...passed 00:06:01.812 Test: test_io_channel ...passed 00:06:01.812 Test: test_reset_io ...passed 00:06:01.812 Test: test_multi_raid ...passed 00:06:01.812 Test: test_io_type_supported ...passed 00:06:01.812 Test: test_raid_json_dump_info ...passed 00:06:01.812 Test: test_context_size ...passed 00:06:01.812 Test: test_raid_level_conversions ...passed 00:06:01.812 Test: test_raid_io_split ...passed 00:06:01.812 Test: test_raid_process ...passed 00:06:01.812 00:06:01.812 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.812 suites 1 1 n/a 0 0 00:06:01.812 tests 14 14 14 0 0 00:06:01.812 asserts 6183 6183 6183 0 n/a 00:06:01.812 00:06:01.812 Elapsed time = 0.024 seconds 00:06:01.812 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:01.812 00:06:01.812 00:06:01.812 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.812 http://cunit.sourceforge.net/ 00:06:01.812 00:06:01.812 00:06:01.812 Suite: raid_sb 00:06:01.812 Test: test_raid_bdev_write_superblock ...passed 00:06:01.812 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:01.812 Test: test_raid_bdev_parse_superblock ...[2024-07-13 11:17:33.053595] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:01.812 passed 00:06:01.812 Suite: raid_sb_md 00:06:01.812 Test: test_raid_bdev_write_superblock ...passed 00:06:01.812 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:01.812 Test: test_raid_bdev_parse_superblock ...passed 00:06:01.812 Suite: raid_sb_md_interleaved 00:06:01.812 Test: test_raid_bdev_write_superblock ...[2024-07-13 11:17:33.054035] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:01.812 passed 00:06:01.813 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:01.813 Test: test_raid_bdev_parse_superblock ...[2024-07-13 11:17:33.054302] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:01.813 passed 00:06:01.813 00:06:01.813 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.813 suites 3 3 n/a 0 0 00:06:01.813 tests 9 9 9 0 0 00:06:01.813 asserts 139 139 139 0 n/a 00:06:01.813 00:06:01.813 Elapsed time = 0.001 seconds 00:06:01.813 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:01.813 00:06:01.813 00:06:01.813 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.813 http://cunit.sourceforge.net/ 00:06:01.813 00:06:01.813 00:06:01.813 Suite: concat 00:06:01.813 Test: test_concat_start ...passed 00:06:01.813 Test: 
test_concat_rw ...passed 00:06:01.813 Test: test_concat_null_payload ...passed 00:06:01.813 00:06:01.813 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.813 suites 1 1 n/a 0 0 00:06:01.813 tests 3 3 3 0 0 00:06:01.813 asserts 8460 8460 8460 0 n/a 00:06:01.813 00:06:01.813 Elapsed time = 0.008 seconds 00:06:01.813 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:06:01.813 00:06:01.813 00:06:01.813 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.813 http://cunit.sourceforge.net/ 00:06:01.813 00:06:01.813 00:06:01.813 Suite: raid0 00:06:01.813 Test: test_write_io ...passed 00:06:01.813 Test: test_read_io ...passed 00:06:01.813 Test: test_unmap_io ...passed 00:06:01.813 Test: test_io_failure ...passed 00:06:01.813 Suite: raid0_dif 00:06:01.813 Test: test_write_io ...passed 00:06:01.813 Test: test_read_io ...passed 00:06:01.813 Test: test_unmap_io ...passed 00:06:01.813 Test: test_io_failure ...passed 00:06:01.813 00:06:01.813 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.813 suites 2 2 n/a 0 0 00:06:01.813 tests 8 8 8 0 0 00:06:01.813 asserts 368291 368291 368291 0 n/a 00:06:01.813 00:06:01.813 Elapsed time = 0.143 seconds 00:06:01.813 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:01.813 00:06:01.813 00:06:01.813 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.813 http://cunit.sourceforge.net/ 00:06:01.813 00:06:01.813 00:06:01.813 Suite: raid1 00:06:01.813 Test: test_raid1_start ...passed 00:06:01.813 Test: test_raid1_read_balancing ...passed 00:06:01.813 Test: test_raid1_write_error ...passed 00:06:01.813 Test: test_raid1_read_error ...passed 00:06:01.813 00:06:01.813 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.813 suites 1 1 n/a 0 0 00:06:01.813 tests 4 4 4 0 0 00:06:01.813 asserts 4374 4374 4374 0 n/a 00:06:01.813 00:06:01.813 Elapsed time = 0.006 seconds 00:06:01.813 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:01.813 00:06:01.813 00:06:01.813 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.813 http://cunit.sourceforge.net/ 00:06:01.813 00:06:01.813 00:06:01.813 Suite: zone 00:06:01.813 Test: test_zone_get_operation ...passed 00:06:01.813 Test: test_bdev_zone_get_info ...passed 00:06:01.813 Test: test_bdev_zone_management ...passed 00:06:01.813 Test: test_bdev_zone_append ...passed 00:06:01.813 Test: test_bdev_zone_append_with_md ...passed 00:06:01.813 Test: test_bdev_zone_appendv ...passed 00:06:01.813 Test: test_bdev_zone_appendv_with_md ...passed 00:06:01.813 Test: test_bdev_io_get_append_location ...passed 00:06:01.813 00:06:01.813 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.813 suites 1 1 n/a 0 0 00:06:01.813 tests 8 8 8 0 0 00:06:01.813 asserts 94 94 94 0 n/a 00:06:01.813 00:06:01.813 Elapsed time = 0.000 seconds 00:06:01.813 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:01.813 00:06:01.813 00:06:01.813 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.813 http://cunit.sourceforge.net/ 00:06:01.813 00:06:01.813 00:06:01.813 Suite: gpt_parse 00:06:01.813 Test: test_parse_mbr_and_primary ...[2024-07-13 11:17:33.395880] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related 
buffer should not be NULL 00:06:01.813 [2024-07-13 11:17:33.396213] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:01.813 [2024-07-13 11:17:33.396256] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:01.813 [2024-07-13 11:17:33.396323] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:01.813 [2024-07-13 11:17:33.396358] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:01.813 [2024-07-13 11:17:33.396425] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:01.813 passed 00:06:01.813 Test: test_parse_secondary ...[2024-07-13 11:17:33.397190] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:01.813 [2024-07-13 11:17:33.397238] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:01.813 [2024-07-13 11:17:33.397275] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:01.813 [2024-07-13 11:17:33.397303] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:01.813 passed 00:06:01.813 Test: test_check_mbr ...[2024-07-13 11:17:33.398068] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:01.813 [2024-07-13 11:17:33.398112] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:01.813 passed 00:06:01.813 Test: test_read_header ...[2024-07-13 11:17:33.398192] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:01.813 [2024-07-13 11:17:33.398301] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:01.813 passed 00:06:01.813 Test: test_read_partitions ...[2024-07-13 11:17:33.398371] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:01.813 [2024-07-13 11:17:33.398407] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:01.813 [2024-07-13 11:17:33.398434] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:01.813 [2024-07-13 11:17:33.398465] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:01.813 [2024-07-13 11:17:33.398519] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:01.813 [2024-07-13 11:17:33.398562] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:01.813 [2024-07-13 11:17:33.398592] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:01.813 [2024-07-13 11:17:33.398613] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:01.813 [2024-07-13 11:17:33.399039] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:01.813 passed 00:06:01.813 00:06:01.813 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.813 suites 1 1 n/a 0 0 00:06:01.813 tests 5 5 5 0 0 00:06:01.813 asserts 33 33 33 0 n/a 00:06:01.813 00:06:01.813 Elapsed time = 0.004 seconds 00:06:01.813 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:01.813 00:06:01.813 00:06:01.813 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.813 http://cunit.sourceforge.net/ 00:06:01.813 00:06:01.813 00:06:01.813 Suite: bdev_part 00:06:01.813 Test: part_test ...[2024-07-13 11:17:33.434839] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 0bcda603-f941-5ca3-bac2-520d3510d166 already exists 00:06:01.813 [2024-07-13 11:17:33.435186] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:0bcda603-f941-5ca3-bac2-520d3510d166 alias for bdev test1 00:06:01.813 passed 00:06:01.813 Test: part_free_test ...passed 00:06:01.813 Test: part_get_io_channel_test ...passed 00:06:01.813 Test: part_construct_ext ...passed 00:06:01.813 00:06:01.813 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.813 suites 1 1 n/a 0 0 00:06:01.813 tests 4 4 4 0 0 00:06:01.813 asserts 48 48 48 0 n/a 00:06:01.813 00:06:01.813 Elapsed time = 0.050 seconds 00:06:01.813 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:01.813 00:06:01.813 00:06:01.813 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.813 http://cunit.sourceforge.net/ 00:06:01.813 00:06:01.813 00:06:01.813 Suite: scsi_nvme_suite 00:06:01.813 Test: scsi_nvme_translate_test ...passed 00:06:01.813 00:06:01.813 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.813 suites 1 1 n/a 0 0 00:06:01.813 tests 1 1 1 0 0 00:06:01.813 asserts 104 104 104 0 n/a 00:06:01.813 00:06:01.813 Elapsed time = 0.000 seconds 00:06:01.813 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:01.813 00:06:01.813 00:06:01.813 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.813 http://cunit.sourceforge.net/ 00:06:01.813 00:06:01.813 00:06:01.813 Suite: lvol 00:06:01.813 Test: ut_lvs_init ...[2024-07-13 11:17:33.555409] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:01.813 [2024-07-13 11:17:33.555806] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:01.813 passed 00:06:01.814 Test: ut_lvol_init ...passed 00:06:01.814 Test: ut_lvol_snapshot ...passed 00:06:01.814 Test: ut_lvol_clone ...passed 00:06:01.814 Test: ut_lvs_destroy ...passed 00:06:01.814 Test: ut_lvs_unload ...passed 00:06:01.814 Test: ut_lvol_resize ...[2024-07-13 11:17:33.557357] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:01.814 passed 00:06:01.814 Test: ut_lvol_set_read_only ...passed 00:06:01.814 Test: ut_lvol_hotremove ...passed 00:06:01.814 Test: 
ut_vbdev_lvol_get_io_channel ...passed 00:06:01.814 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:01.814 Test: ut_lvol_read_write ...passed 00:06:01.814 Test: ut_vbdev_lvol_submit_request ...passed 00:06:01.814 Test: ut_lvol_examine_config ...passed 00:06:01.814 Test: ut_lvol_examine_disk ...[2024-07-13 11:17:33.558026] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:01.814 passed 00:06:01.814 Test: ut_lvol_rename ...[2024-07-13 11:17:33.559184] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:01.814 [2024-07-13 11:17:33.559307] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:01.814 passed 00:06:01.814 Test: ut_bdev_finish ...passed 00:06:01.814 Test: ut_lvs_rename ...passed 00:06:01.814 Test: ut_lvol_seek ...passed 00:06:01.814 Test: ut_esnap_dev_create ...[2024-07-13 11:17:33.560047] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:01.814 [2024-07-13 11:17:33.560117] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:01.814 [2024-07-13 11:17:33.560143] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:01.814 passed 00:06:01.814 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-13 11:17:33.560279] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:01.814 [2024-07-13 11:17:33.560308] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:01.814 passed 00:06:01.814 Test: ut_lvol_shallow_copy ...[2024-07-13 11:17:33.560674] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:06:01.814 [2024-07-13 11:17:33.560717] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:06:01.814 passed 00:06:01.814 Test: ut_lvol_set_external_parent ...[2024-07-13 11:17:33.560831] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:06:01.814 passed 00:06:01.814 00:06:01.814 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.814 suites 1 1 n/a 0 0 00:06:01.814 tests 23 23 23 0 0 00:06:01.814 asserts 770 770 770 0 n/a 00:06:01.814 00:06:01.814 Elapsed time = 0.006 seconds 00:06:01.814 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:01.814 00:06:01.814 00:06:01.814 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.814 http://cunit.sourceforge.net/ 00:06:01.814 00:06:01.814 00:06:01.814 Suite: zone_block 00:06:01.814 Test: test_zone_block_create ...passed 00:06:01.814 Test: test_zone_block_create_invalid ...[2024-07-13 11:17:33.614155] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base 
bdev Nvme0n1 already claimed 00:06:01.814 [2024-07-13 11:17:33.614513] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-13 11:17:33.614701] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:01.814 [2024-07-13 11:17:33.614777] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-13 11:17:33.614949] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:01.814 [2024-07-13 11:17:33.614989] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-13 11:17:33.615067] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:01.814 [2024-07-13 11:17:33.615115] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:01.814 Test: test_get_zone_info ...[2024-07-13 11:17:33.615636] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.615716] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.615767] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 passed 00:06:01.814 Test: test_supported_io_types ...passed 00:06:01.814 Test: test_reset_zone ...[2024-07-13 11:17:33.616626] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.616696] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 passed 00:06:01.814 Test: test_open_zone ...[2024-07-13 11:17:33.617179] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.617929] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.618002] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 passed 00:06:01.814 Test: test_zone_write ...[2024-07-13 11:17:33.618542] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:01.814 [2024-07-13 11:17:33.618602] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:01.814 [2024-07-13 11:17:33.618659] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:01.814 [2024-07-13 11:17:33.618704] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.624518] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:01.814 [2024-07-13 11:17:33.624583] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.624651] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:01.814 [2024-07-13 11:17:33.624675] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.630791] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:01.814 [2024-07-13 11:17:33.630893] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 passed 00:06:01.814 Test: test_zone_read ...[2024-07-13 11:17:33.631456] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:01.814 [2024-07-13 11:17:33.631508] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.631567] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:01.814 [2024-07-13 11:17:33.631596] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.632113] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:01.814 [2024-07-13 11:17:33.632159] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 passed 00:06:01.814 Test: test_close_zone ...[2024-07-13 11:17:33.632559] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.632663] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.632896] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 passed 00:06:01.814 Test: test_finish_zone ...[2024-07-13 11:17:33.632953] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:01.814 [2024-07-13 11:17:33.633555] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.633626] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 passed 00:06:01.814 Test: test_append_zone ...[2024-07-13 11:17:33.634041] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:01.814 [2024-07-13 11:17:33.634093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.634171] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:01.814 [2024-07-13 11:17:33.634194] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 [2024-07-13 11:17:33.645762] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:01.814 [2024-07-13 11:17:33.645855] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:01.814 passed 00:06:01.814 00:06:01.814 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.814 suites 1 1 n/a 0 0 00:06:01.814 tests 11 11 11 0 0 00:06:01.815 asserts 3437 3437 3437 0 n/a 00:06:01.815 00:06:01.815 Elapsed time = 0.033 seconds 00:06:01.815 11:17:33 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:01.815 00:06:01.815 00:06:01.815 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.815 http://cunit.sourceforge.net/ 00:06:01.815 00:06:01.815 00:06:01.815 Suite: bdev 00:06:01.815 Test: basic ...[2024-07-13 11:17:33.750223] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5557be9327c1): Operation not permitted (rc=-1) 00:06:01.815 [2024-07-13 11:17:33.750579] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x5557be932780): Operation not permitted (rc=-1) 00:06:01.815 [2024-07-13 11:17:33.750646] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5557be9327c1): Operation not permitted (rc=-1) 00:06:01.815 passed 00:06:01.815 Test: unregister_and_close ...passed 00:06:01.815 Test: unregister_and_close_different_threads ...passed 00:06:01.815 Test: basic_qos ...passed 00:06:01.815 Test: put_channel_during_reset ...passed 00:06:01.815 Test: aborted_reset ...passed 00:06:01.815 Test: aborted_reset_no_outstanding_io ...passed 00:06:01.815 Test: io_during_reset ...passed 00:06:01.815 Test: reset_completions ...passed 00:06:01.815 Test: io_during_qos_queue ...passed 00:06:01.815 Test: io_during_qos_reset ...passed 00:06:01.815 Test: enomem ...passed 00:06:01.815 Test: enomem_multi_bdev ...passed 00:06:01.815 Test: enomem_multi_bdev_unregister ...passed 00:06:01.815 Test: enomem_multi_io_target ...passed 00:06:01.815 Test: qos_dynamic_enable ...passed 00:06:01.815 Test: bdev_histograms_mt ...passed 00:06:01.815 Test: 
bdev_set_io_timeout_mt ...[2024-07-13 11:17:34.485630] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:01.815 passed 00:06:01.815 Test: lock_lba_range_then_submit_io ...[2024-07-13 11:17:34.503253] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x5557be932740 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:01.815 passed 00:06:01.815 Test: unregister_during_reset ...passed 00:06:01.815 Test: event_notify_and_close ...passed 00:06:01.815 Test: unregister_and_qos_poller ...passed 00:06:01.815 Suite: bdev_wrong_thread 00:06:01.815 Test: spdk_bdev_register_wt ...[2024-07-13 11:17:34.641617] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x619000158b80 (0x619000158b80) 00:06:01.815 passed 00:06:01.815 Test: spdk_bdev_examine_wt ...[2024-07-13 11:17:34.641973] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x619000158b80 (0x619000158b80) 00:06:01.815 passed 00:06:01.815 00:06:01.815 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.815 suites 2 2 n/a 0 0 00:06:01.815 tests 24 24 24 0 0 00:06:01.815 asserts 621 621 621 0 n/a 00:06:01.815 00:06:01.815 Elapsed time = 0.922 seconds 00:06:01.815 00:06:01.815 real 0m3.384s 00:06:01.815 user 0m1.615s 00:06:01.815 sys 0m1.754s 00:06:01.815 11:17:34 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.815 11:17:34 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:01.815 ************************************ 00:06:01.815 END TEST unittest_bdev 00:06:01.815 ************************************ 00:06:01.815 11:17:34 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:01.815 11:17:34 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:01.815 11:17:34 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:01.815 11:17:34 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:01.815 11:17:34 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:01.815 11:17:34 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:01.815 11:17:34 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.815 11:17:34 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.815 11:17:34 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:01.815 ************************************ 00:06:01.815 START TEST unittest_bdev_raid5f 00:06:01.815 ************************************ 00:06:01.815 11:17:34 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:01.815 00:06:01.815 00:06:01.815 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.815 http://cunit.sourceforge.net/ 00:06:01.815 00:06:01.815 00:06:01.815 Suite: raid5f 00:06:01.815 Test: test_raid5f_start ...passed 00:06:01.815 Test: test_raid5f_submit_read_request ...passed 00:06:01.815 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:06.007 Test: 
test_raid5f_submit_full_stripe_write_request ...passed 00:06:32.542 Test: test_raid5f_chunk_write_error ...passed 00:06:42.512 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:06:46.697 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:07:25.398 Test: test_raid5f_submit_read_request_degraded ...passed 00:07:25.398 00:07:25.398 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.398 suites 1 1 n/a 0 0 00:07:25.398 tests 8 8 8 0 0 00:07:25.398 asserts 518158 518158 518158 0 n/a 00:07:25.398 00:07:25.398 Elapsed time = 83.980 seconds 00:07:25.398 00:07:25.398 real 1m24.135s 00:07:25.398 user 1m19.545s 00:07:25.398 sys 0m4.514s 00:07:25.398 11:18:58 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.398 ************************************ 00:07:25.398 END TEST unittest_bdev_raid5f 00:07:25.398 11:18:58 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:07:25.398 ************************************ 00:07:25.398 11:18:58 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:25.398 11:18:58 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:07:25.398 11:18:58 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.398 11:18:58 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.398 11:18:58 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:25.398 ************************************ 00:07:25.398 START TEST unittest_blob_blobfs 00:07:25.398 ************************************ 00:07:25.398 11:18:58 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:07:25.398 11:18:58 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:07:25.398 11:18:58 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:07:25.398 00:07:25.398 00:07:25.398 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.398 http://cunit.sourceforge.net/ 00:07:25.398 00:07:25.398 00:07:25.398 Suite: blob_nocopy_noextent 00:07:25.398 Test: blob_init ...[2024-07-13 11:18:58.947940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:25.398 passed 00:07:25.398 Test: blob_thin_provision ...passed 00:07:25.398 Test: blob_read_only ...passed 00:07:25.398 Test: bs_load ...[2024-07-13 11:18:59.052456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:25.398 passed 00:07:25.398 Test: bs_load_custom_cluster_size ...passed 00:07:25.398 Test: bs_load_after_failed_grow ...passed 00:07:25.398 Test: bs_cluster_sz ...[2024-07-13 11:18:59.089860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:25.398 [2024-07-13 11:18:59.090398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:25.398 [2024-07-13 11:18:59.090696] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:25.398 passed 00:07:25.398 Test: bs_resize_md ...passed 00:07:25.398 Test: bs_destroy ...passed 00:07:25.398 Test: bs_type ...passed 00:07:25.398 Test: bs_super_block ...passed 00:07:25.398 Test: bs_test_recover_cluster_count ...passed 00:07:25.398 Test: bs_grow_live ...passed 00:07:25.398 Test: bs_grow_live_no_space ...passed 00:07:25.398 Test: bs_test_grow ...passed 00:07:25.398 Test: blob_serialize_test ...passed 00:07:25.398 Test: super_block_crc ...passed 00:07:25.399 Test: blob_thin_prov_write_count_io ...passed 00:07:25.399 Test: blob_thin_prov_unmap_cluster ...passed 00:07:25.399 Test: bs_load_iter_test ...passed 00:07:25.399 Test: blob_relations ...[2024-07-13 11:18:59.329378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.399 [2024-07-13 11:18:59.329661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 [2024-07-13 11:18:59.330831] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.399 [2024-07-13 11:18:59.331077] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 passed 00:07:25.399 Test: blob_relations2 ...[2024-07-13 11:18:59.348860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.399 [2024-07-13 11:18:59.349203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 [2024-07-13 11:18:59.349279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.399 [2024-07-13 11:18:59.349527] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 [2024-07-13 11:18:59.351139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.399 [2024-07-13 11:18:59.351393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 [2024-07-13 11:18:59.351887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:25.399 [2024-07-13 11:18:59.352098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 passed 00:07:25.399 Test: blob_relations3 ...passed 00:07:25.399 Test: blobstore_clean_power_failure ...passed 00:07:25.399 Test: blob_delete_snapshot_power_failure ...[2024-07-13 11:18:59.545350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:25.399 [2024-07-13 11:18:59.560889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:25.399 [2024-07-13 11:18:59.561236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:25.399 [2024-07-13 11:18:59.561331] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 [2024-07-13 11:18:59.577479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:25.399 [2024-07-13 11:18:59.577880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:25.399 [2024-07-13 11:18:59.577965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:25.399 [2024-07-13 11:18:59.578124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 [2024-07-13 11:18:59.597534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:25.399 [2024-07-13 11:18:59.597937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 [2024-07-13 11:18:59.618511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:25.399 [2024-07-13 11:18:59.618956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 [2024-07-13 11:18:59.638642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:25.399 [2024-07-13 11:18:59.639020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:25.399 passed 00:07:25.399 Test: blob_create_snapshot_power_failure ...[2024-07-13 11:18:59.702702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:25.399 [2024-07-13 11:18:59.743631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:25.399 [2024-07-13 11:18:59.762953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:25.399 passed 00:07:25.399 Test: blob_io_unit ...passed 00:07:25.399 Test: blob_io_unit_compatibility ...passed 00:07:25.399 Test: blob_ext_md_pages ...passed 00:07:25.399 Test: blob_esnap_io_4096_4096 ...passed 00:07:25.399 Test: blob_esnap_io_512_512 ...passed 00:07:25.399 Test: blob_esnap_io_4096_512 ...passed 00:07:25.399 Test: blob_esnap_io_512_4096 ...passed 00:07:25.399 Test: blob_esnap_clone_resize ...passed 00:07:25.399 Suite: blob_bs_nocopy_noextent 00:07:25.657 Test: blob_open ...passed 00:07:25.657 Test: blob_create ...[2024-07-13 11:19:00.175485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:25.657 passed 00:07:25.657 Test: blob_create_loop ...passed 00:07:25.657 Test: blob_create_fail ...[2024-07-13 11:19:00.284687] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:25.657 passed 00:07:25.657 Test: blob_create_internal ...passed 00:07:25.657 Test: blob_create_zero_extent ...passed 00:07:25.915 Test: blob_snapshot ...passed 00:07:25.915 Test: blob_clone ...passed 00:07:25.915 Test: blob_inflate 
...[2024-07-13 11:19:00.494664] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:25.915 passed 00:07:25.915 Test: blob_delete ...passed 00:07:25.915 Test: blob_resize_test ...[2024-07-13 11:19:00.558176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:25.915 passed 00:07:25.915 Test: blob_resize_thin_test ...passed 00:07:25.915 Test: channel_ops ...passed 00:07:26.174 Test: blob_super ...passed 00:07:26.174 Test: blob_rw_verify_iov ...passed 00:07:26.174 Test: blob_unmap ...passed 00:07:26.174 Test: blob_iter ...passed 00:07:26.174 Test: blob_parse_md ...passed 00:07:26.174 Test: bs_load_pending_removal ...passed 00:07:26.174 Test: bs_unload ...[2024-07-13 11:19:00.879386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:26.174 passed 00:07:26.432 Test: bs_usable_clusters ...passed 00:07:26.432 Test: blob_crc ...[2024-07-13 11:19:00.956722] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:26.432 [2024-07-13 11:19:00.956887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:26.432 passed 00:07:26.432 Test: blob_flags ...passed 00:07:26.432 Test: bs_version ...passed 00:07:26.432 Test: blob_set_xattrs_test ...[2024-07-13 11:19:01.060384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:26.432 [2024-07-13 11:19:01.060520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:26.432 passed 00:07:26.690 Test: blob_thin_prov_alloc ...passed 00:07:26.690 Test: blob_insert_cluster_msg_test ...passed 00:07:26.690 Test: blob_thin_prov_rw ...passed 00:07:26.690 Test: blob_thin_prov_rle ...passed 00:07:26.690 Test: blob_thin_prov_rw_iov ...passed 00:07:26.690 Test: blob_snapshot_rw ...passed 00:07:26.690 Test: blob_snapshot_rw_iov ...passed 00:07:26.948 Test: blob_inflate_rw ...passed 00:07:26.948 Test: blob_snapshot_freeze_io ...passed 00:07:27.215 Test: blob_operation_split_rw ...passed 00:07:27.215 Test: blob_operation_split_rw_iov ...passed 00:07:27.215 Test: blob_simultaneous_operations ...[2024-07-13 11:19:01.939872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:27.215 [2024-07-13 11:19:01.939986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:27.215 [2024-07-13 11:19:01.941007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:27.215 [2024-07-13 11:19:01.941074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:27.215 [2024-07-13 11:19:01.951007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:27.215 [2024-07-13 11:19:01.951062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:27.215 [2024-07-13 11:19:01.951169] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:27.215 [2024-07-13 11:19:01.951204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:27.481 passed 00:07:27.481 Test: blob_persist_test ...passed 00:07:27.481 Test: blob_decouple_snapshot ...passed 00:07:27.481 Test: blob_seek_io_unit ...passed 00:07:27.481 Test: blob_nested_freezes ...passed 00:07:27.481 Test: blob_clone_resize ...passed 00:07:27.481 Test: blob_shallow_copy ...[2024-07-13 11:19:02.218811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:27.481 [2024-07-13 11:19:02.219208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:27.481 [2024-07-13 11:19:02.219450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:27.739 passed 00:07:27.739 Suite: blob_blob_nocopy_noextent 00:07:27.739 Test: blob_write ...passed 00:07:27.739 Test: blob_read ...passed 00:07:27.739 Test: blob_rw_verify ...passed 00:07:27.739 Test: blob_rw_verify_iov_nomem ...passed 00:07:27.739 Test: blob_rw_iov_read_only ...passed 00:07:27.998 Test: blob_xattr ...passed 00:07:27.998 Test: blob_dirty_shutdown ...passed 00:07:27.998 Test: blob_is_degraded ...passed 00:07:27.998 Suite: blob_esnap_bs_nocopy_noextent 00:07:27.998 Test: blob_esnap_create ...passed 00:07:27.998 Test: blob_esnap_thread_add_remove ...passed 00:07:27.998 Test: blob_esnap_clone_snapshot ...passed 00:07:28.256 Test: blob_esnap_clone_inflate ...passed 00:07:28.256 Test: blob_esnap_clone_decouple ...passed 00:07:28.256 Test: blob_esnap_clone_reload ...passed 00:07:28.256 Test: blob_esnap_hotplug ...passed 00:07:28.256 Test: blob_set_parent ...[2024-07-13 11:19:02.952799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:28.256 [2024-07-13 11:19:02.952928] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:28.256 [2024-07-13 11:19:02.953158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:28.256 [2024-07-13 11:19:02.953236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:28.256 [2024-07-13 11:19:02.953911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:28.256 passed 00:07:28.256 Test: blob_set_external_parent ...[2024-07-13 11:19:02.991195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:28.256 [2024-07-13 11:19:02.991311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:28.256 [2024-07-13 11:19:02.991366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:07:28.256 [2024-07-13 11:19:02.991945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:28.513 passed 00:07:28.513 Suite: blob_nocopy_extent 00:07:28.513 Test: blob_init ...[2024-07-13 11:19:03.005140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:28.513 passed 00:07:28.513 Test: blob_thin_provision ...passed 00:07:28.513 Test: blob_read_only ...passed 00:07:28.513 Test: bs_load ...[2024-07-13 11:19:03.057905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:28.513 passed 00:07:28.513 Test: bs_load_custom_cluster_size ...passed 00:07:28.513 Test: bs_load_after_failed_grow ...passed 00:07:28.513 Test: bs_cluster_sz ...[2024-07-13 11:19:03.089867] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:28.513 [2024-07-13 11:19:03.090161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:28.513 [2024-07-13 11:19:03.090231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:28.513 passed 00:07:28.513 Test: bs_resize_md ...passed 00:07:28.513 Test: bs_destroy ...passed 00:07:28.513 Test: bs_type ...passed 00:07:28.513 Test: bs_super_block ...passed 00:07:28.513 Test: bs_test_recover_cluster_count ...passed 00:07:28.513 Test: bs_grow_live ...passed 00:07:28.513 Test: bs_grow_live_no_space ...passed 00:07:28.513 Test: bs_test_grow ...passed 00:07:28.514 Test: blob_serialize_test ...passed 00:07:28.514 Test: super_block_crc ...passed 00:07:28.514 Test: blob_thin_prov_write_count_io ...passed 00:07:28.514 Test: blob_thin_prov_unmap_cluster ...passed 00:07:28.772 Test: bs_load_iter_test ...passed 00:07:28.772 Test: blob_relations ...[2024-07-13 11:19:03.270818] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:28.772 [2024-07-13 11:19:03.270943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.772 [2024-07-13 11:19:03.271781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:28.772 [2024-07-13 11:19:03.271837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.772 passed 00:07:28.772 Test: blob_relations2 ...[2024-07-13 11:19:03.284836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:28.772 [2024-07-13 11:19:03.284924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.772 [2024-07-13 11:19:03.284952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:28.772 [2024-07-13 11:19:03.284978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.772 [2024-07-13 
11:19:03.286296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:28.772 [2024-07-13 11:19:03.286362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.772 [2024-07-13 11:19:03.286768] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:28.772 [2024-07-13 11:19:03.286824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:28.772 passed 00:07:28.772 Test: blob_relations3 ...passed 00:07:28.772 Test: blobstore_clean_power_failure ...passed 00:07:28.772 Test: blob_delete_snapshot_power_failure ...[2024-07-13 11:19:03.498497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:29.031 [2024-07-13 11:19:03.516969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:29.031 [2024-07-13 11:19:03.535855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:29.031 [2024-07-13 11:19:03.535965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:29.031 [2024-07-13 11:19:03.536022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.031 [2024-07-13 11:19:03.553513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:29.031 [2024-07-13 11:19:03.553645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:29.031 [2024-07-13 11:19:03.553676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:29.031 [2024-07-13 11:19:03.553714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.031 [2024-07-13 11:19:03.570608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:29.031 [2024-07-13 11:19:03.570718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:29.031 [2024-07-13 11:19:03.570759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:29.031 [2024-07-13 11:19:03.570796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.031 [2024-07-13 11:19:03.587361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:29.031 [2024-07-13 11:19:03.587494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.031 [2024-07-13 11:19:03.604291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:29.031 [2024-07-13 11:19:03.604443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.031 [2024-07-13 11:19:03.622135] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:29.031 [2024-07-13 11:19:03.622247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:29.031 passed 00:07:29.031 Test: blob_create_snapshot_power_failure ...[2024-07-13 11:19:03.673285] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:29.031 [2024-07-13 11:19:03.690582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:29.031 [2024-07-13 11:19:03.722931] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:29.031 [2024-07-13 11:19:03.740147] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:29.290 passed 00:07:29.290 Test: blob_io_unit ...passed 00:07:29.290 Test: blob_io_unit_compatibility ...passed 00:07:29.290 Test: blob_ext_md_pages ...passed 00:07:29.290 Test: blob_esnap_io_4096_4096 ...passed 00:07:29.290 Test: blob_esnap_io_512_512 ...passed 00:07:29.290 Test: blob_esnap_io_4096_512 ...passed 00:07:29.290 Test: blob_esnap_io_512_4096 ...passed 00:07:29.547 Test: blob_esnap_clone_resize ...passed 00:07:29.547 Suite: blob_bs_nocopy_extent 00:07:29.547 Test: blob_open ...passed 00:07:29.547 Test: blob_create ...[2024-07-13 11:19:04.116536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:29.547 passed 00:07:29.547 Test: blob_create_loop ...passed 00:07:29.547 Test: blob_create_fail ...[2024-07-13 11:19:04.252454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:29.547 passed 00:07:29.806 Test: blob_create_internal ...passed 00:07:29.806 Test: blob_create_zero_extent ...passed 00:07:29.806 Test: blob_snapshot ...passed 00:07:29.806 Test: blob_clone ...passed 00:07:29.806 Test: blob_inflate ...[2024-07-13 11:19:04.513545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:07:29.806 passed 00:07:30.064 Test: blob_delete ...passed 00:07:30.064 Test: blob_resize_test ...[2024-07-13 11:19:04.619331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:30.064 passed 00:07:30.064 Test: blob_resize_thin_test ...passed 00:07:30.064 Test: channel_ops ...passed 00:07:30.064 Test: blob_super ...passed 00:07:30.321 Test: blob_rw_verify_iov ...passed 00:07:30.321 Test: blob_unmap ...passed 00:07:30.321 Test: blob_iter ...passed 00:07:30.321 Test: blob_parse_md ...passed 00:07:30.321 Test: bs_load_pending_removal ...passed 00:07:30.579 Test: bs_unload ...[2024-07-13 11:19:05.075471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:30.579 passed 00:07:30.579 Test: bs_usable_clusters ...passed 00:07:30.579 Test: blob_crc ...[2024-07-13 11:19:05.176807] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:30.579 [2024-07-13 11:19:05.176979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:30.579 passed 00:07:30.579 Test: blob_flags ...passed 00:07:30.579 Test: bs_version ...passed 00:07:30.837 Test: blob_set_xattrs_test ...[2024-07-13 11:19:05.325551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:30.837 [2024-07-13 11:19:05.325715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:30.837 passed 00:07:30.837 Test: blob_thin_prov_alloc ...passed 00:07:30.837 Test: blob_insert_cluster_msg_test ...passed 00:07:31.095 Test: blob_thin_prov_rw ...passed 00:07:31.095 Test: blob_thin_prov_rle ...passed 00:07:31.095 Test: blob_thin_prov_rw_iov ...passed 00:07:31.095 Test: blob_snapshot_rw ...passed 00:07:31.095 Test: blob_snapshot_rw_iov ...passed 00:07:31.353 Test: blob_inflate_rw ...passed 00:07:31.353 Test: blob_snapshot_freeze_io ...passed 00:07:31.611 Test: blob_operation_split_rw ...passed 00:07:31.611 Test: blob_operation_split_rw_iov ...passed 00:07:31.611 Test: blob_simultaneous_operations ...[2024-07-13 11:19:06.343166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.611 [2024-07-13 11:19:06.343272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.611 [2024-07-13 11:19:06.344406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.611 [2024-07-13 11:19:06.344453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.868 [2024-07-13 11:19:06.354590] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.868 [2024-07-13 11:19:06.354669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.868 [2024-07-13 11:19:06.354826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:31.868 [2024-07-13 11:19:06.354851] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:31.868 passed 00:07:31.868 Test: blob_persist_test ...passed 00:07:31.868 Test: blob_decouple_snapshot ...passed 00:07:31.868 Test: blob_seek_io_unit ...passed 00:07:31.868 Test: blob_nested_freezes ...passed 00:07:31.868 Test: blob_clone_resize ...passed 00:07:32.126 Test: blob_shallow_copy ...[2024-07-13 11:19:06.633208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:32.126 [2024-07-13 11:19:06.633500] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:32.126 [2024-07-13 11:19:06.633721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:32.126 passed 00:07:32.126 Suite: blob_blob_nocopy_extent 00:07:32.126 Test: blob_write ...passed 00:07:32.126 Test: blob_read ...passed 00:07:32.126 Test: blob_rw_verify ...passed 00:07:32.126 Test: blob_rw_verify_iov_nomem ...passed 00:07:32.126 Test: blob_rw_iov_read_only ...passed 00:07:32.384 Test: blob_xattr ...passed 00:07:32.384 Test: blob_dirty_shutdown ...passed 00:07:32.384 Test: blob_is_degraded ...passed 00:07:32.384 Suite: blob_esnap_bs_nocopy_extent 00:07:32.384 Test: blob_esnap_create ...passed 00:07:32.384 Test: blob_esnap_thread_add_remove ...passed 00:07:32.384 Test: blob_esnap_clone_snapshot ...passed 00:07:32.384 Test: blob_esnap_clone_inflate ...passed 00:07:32.384 Test: blob_esnap_clone_decouple ...passed 00:07:32.642 Test: blob_esnap_clone_reload ...passed 00:07:32.642 Test: blob_esnap_hotplug ...passed 00:07:32.642 Test: blob_set_parent ...[2024-07-13 11:19:07.217536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:32.642 [2024-07-13 11:19:07.217637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:32.642 [2024-07-13 11:19:07.217758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:32.642 [2024-07-13 11:19:07.217794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:32.642 [2024-07-13 11:19:07.218251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:32.642 passed 00:07:32.642 Test: blob_set_external_parent ...[2024-07-13 11:19:07.255797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:32.642 [2024-07-13 11:19:07.255880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:32.642 [2024-07-13 11:19:07.255904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:32.642 [2024-07-13 11:19:07.256318] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:32.642 passed 00:07:32.642 Suite: blob_copy_noextent 00:07:32.642 Test: blob_init ...[2024-07-13 11:19:07.269444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:32.642 passed 00:07:32.642 Test: blob_thin_provision ...passed 00:07:32.642 Test: blob_read_only ...passed 00:07:32.642 Test: bs_load ...[2024-07-13 11:19:07.319912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:32.642 passed 00:07:32.642 Test: bs_load_custom_cluster_size ...passed 00:07:32.642 Test: bs_load_after_failed_grow ...passed 00:07:32.642 Test: bs_cluster_sz ...[2024-07-13 11:19:07.345878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:32.642 [2024-07-13 11:19:07.346101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:07:32.642 [2024-07-13 11:19:07.346159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:32.642 passed 00:07:32.642 Test: bs_resize_md ...passed 00:07:32.900 Test: bs_destroy ...passed 00:07:32.900 Test: bs_type ...passed 00:07:32.900 Test: bs_super_block ...passed 00:07:32.900 Test: bs_test_recover_cluster_count ...passed 00:07:32.900 Test: bs_grow_live ...passed 00:07:32.900 Test: bs_grow_live_no_space ...passed 00:07:32.900 Test: bs_test_grow ...passed 00:07:32.900 Test: blob_serialize_test ...passed 00:07:32.900 Test: super_block_crc ...passed 00:07:32.900 Test: blob_thin_prov_write_count_io ...passed 00:07:32.900 Test: blob_thin_prov_unmap_cluster ...passed 00:07:32.900 Test: bs_load_iter_test ...passed 00:07:32.900 Test: blob_relations ...[2024-07-13 11:19:07.540430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.900 [2024-07-13 11:19:07.540525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.900 [2024-07-13 11:19:07.541060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.900 [2024-07-13 11:19:07.541101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.900 passed 00:07:32.900 Test: blob_relations2 ...[2024-07-13 11:19:07.555141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.900 [2024-07-13 11:19:07.555221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.900 [2024-07-13 11:19:07.555251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.900 [2024-07-13 11:19:07.555266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.900 [2024-07-13 11:19:07.556141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:07:32.900 [2024-07-13 11:19:07.556187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.900 [2024-07-13 11:19:07.556475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:32.900 [2024-07-13 11:19:07.556509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:32.900 passed 00:07:32.900 Test: blob_relations3 ...passed 00:07:33.159 Test: blobstore_clean_power_failure ...passed 00:07:33.159 Test: blob_delete_snapshot_power_failure ...[2024-07-13 11:19:07.724544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:33.159 [2024-07-13 11:19:07.736368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:33.159 [2024-07-13 11:19:07.736452] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:33.159 [2024-07-13 11:19:07.736476] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.159 [2024-07-13 11:19:07.748813] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:33.159 [2024-07-13 11:19:07.748886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:33.159 [2024-07-13 11:19:07.748909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:33.159 [2024-07-13 11:19:07.748938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.159 [2024-07-13 11:19:07.763163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:33.159 [2024-07-13 11:19:07.763285] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.159 [2024-07-13 11:19:07.777600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:33.159 [2024-07-13 11:19:07.777709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.159 [2024-07-13 11:19:07.791712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:33.159 [2024-07-13 11:19:07.791798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:33.159 passed 00:07:33.159 Test: blob_create_snapshot_power_failure ...[2024-07-13 11:19:07.833574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:33.159 [2024-07-13 11:19:07.860295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:07:33.159 [2024-07-13 11:19:07.874496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:33.443 passed 
00:07:33.443 Test: blob_io_unit ...passed 00:07:33.443 Test: blob_io_unit_compatibility ...passed 00:07:33.443 Test: blob_ext_md_pages ...passed 00:07:33.443 Test: blob_esnap_io_4096_4096 ...passed 00:07:33.443 Test: blob_esnap_io_512_512 ...passed 00:07:33.443 Test: blob_esnap_io_4096_512 ...passed 00:07:33.443 Test: blob_esnap_io_512_4096 ...passed 00:07:33.443 Test: blob_esnap_clone_resize ...passed 00:07:33.443 Suite: blob_bs_copy_noextent 00:07:33.443 Test: blob_open ...passed 00:07:33.443 Test: blob_create ...[2024-07-13 11:19:08.137897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:33.443 passed 00:07:33.701 Test: blob_create_loop ...passed 00:07:33.701 Test: blob_create_fail ...[2024-07-13 11:19:08.229572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:33.701 passed 00:07:33.701 Test: blob_create_internal ...passed 00:07:33.701 Test: blob_create_zero_extent ...passed 00:07:33.701 Test: blob_snapshot ...passed 00:07:33.701 Test: blob_clone ...passed 00:07:33.701 Test: blob_inflate ...[2024-07-13 11:19:08.399854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:33.701 passed 00:07:33.701 Test: blob_delete ...passed 00:07:33.958 Test: blob_resize_test ...[2024-07-13 11:19:08.465685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:33.958 passed 00:07:33.958 Test: blob_resize_thin_test ...passed 00:07:33.958 Test: channel_ops ...passed 00:07:33.958 Test: blob_super ...passed 00:07:33.958 Test: blob_rw_verify_iov ...passed 00:07:33.958 Test: blob_unmap ...passed 00:07:33.958 Test: blob_iter ...passed 00:07:34.216 Test: blob_parse_md ...passed 00:07:34.216 Test: bs_load_pending_removal ...passed 00:07:34.216 Test: bs_unload ...[2024-07-13 11:19:08.782630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:34.216 passed 00:07:34.216 Test: bs_usable_clusters ...passed 00:07:34.216 Test: blob_crc ...[2024-07-13 11:19:08.860136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:34.216 [2024-07-13 11:19:08.860233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:34.216 passed 00:07:34.216 Test: blob_flags ...passed 00:07:34.216 Test: bs_version ...passed 00:07:34.473 Test: blob_set_xattrs_test ...[2024-07-13 11:19:08.969305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:34.473 [2024-07-13 11:19:08.969428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:34.473 passed 00:07:34.473 Test: blob_thin_prov_alloc ...passed 00:07:34.473 Test: blob_insert_cluster_msg_test ...passed 00:07:34.473 Test: blob_thin_prov_rw ...passed 00:07:34.730 Test: blob_thin_prov_rle ...passed 00:07:34.730 Test: blob_thin_prov_rw_iov ...passed 00:07:34.730 Test: blob_snapshot_rw ...passed 00:07:34.730 Test: blob_snapshot_rw_iov ...passed 00:07:34.987 Test: 
blob_inflate_rw ...passed 00:07:34.987 Test: blob_snapshot_freeze_io ...passed 00:07:34.987 Test: blob_operation_split_rw ...passed 00:07:35.244 Test: blob_operation_split_rw_iov ...passed 00:07:35.244 Test: blob_simultaneous_operations ...[2024-07-13 11:19:09.861656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:35.244 [2024-07-13 11:19:09.861748] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:35.244 [2024-07-13 11:19:09.862152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:35.244 [2024-07-13 11:19:09.862209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:35.244 [2024-07-13 11:19:09.864672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:35.244 [2024-07-13 11:19:09.864719] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:35.244 [2024-07-13 11:19:09.864809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:35.244 [2024-07-13 11:19:09.864829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:35.244 passed 00:07:35.244 Test: blob_persist_test ...passed 00:07:35.244 Test: blob_decouple_snapshot ...passed 00:07:35.502 Test: blob_seek_io_unit ...passed 00:07:35.502 Test: blob_nested_freezes ...passed 00:07:35.502 Test: blob_clone_resize ...passed 00:07:35.502 Test: blob_shallow_copy ...[2024-07-13 11:19:10.107873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:35.502 [2024-07-13 11:19:10.108139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:35.502 [2024-07-13 11:19:10.108357] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:35.502 passed 00:07:35.502 Suite: blob_blob_copy_noextent 00:07:35.502 Test: blob_write ...passed 00:07:35.502 Test: blob_read ...passed 00:07:35.502 Test: blob_rw_verify ...passed 00:07:35.760 Test: blob_rw_verify_iov_nomem ...passed 00:07:35.760 Test: blob_rw_iov_read_only ...passed 00:07:35.760 Test: blob_xattr ...passed 00:07:35.760 Test: blob_dirty_shutdown ...passed 00:07:35.760 Test: blob_is_degraded ...passed 00:07:35.760 Suite: blob_esnap_bs_copy_noextent 00:07:35.760 Test: blob_esnap_create ...passed 00:07:35.760 Test: blob_esnap_thread_add_remove ...passed 00:07:36.018 Test: blob_esnap_clone_snapshot ...passed 00:07:36.018 Test: blob_esnap_clone_inflate ...passed 00:07:36.018 Test: blob_esnap_clone_decouple ...passed 00:07:36.018 Test: blob_esnap_clone_reload ...passed 00:07:36.018 Test: blob_esnap_hotplug ...passed 00:07:36.018 Test: blob_set_parent ...[2024-07-13 11:19:10.754019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:36.018 [2024-07-13 11:19:10.754127] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:36.018 [2024-07-13 11:19:10.754262] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:36.018 [2024-07-13 11:19:10.754310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:36.018 [2024-07-13 11:19:10.754834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:36.276 passed 00:07:36.276 Test: blob_set_external_parent ...[2024-07-13 11:19:10.803007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:36.276 [2024-07-13 11:19:10.803128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:36.276 [2024-07-13 11:19:10.803189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:36.276 [2024-07-13 11:19:10.803560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:36.276 passed 00:07:36.276 Suite: blob_copy_extent 00:07:36.276 Test: blob_init ...[2024-07-13 11:19:10.819143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:07:36.276 passed 00:07:36.276 Test: blob_thin_provision ...passed 00:07:36.276 Test: blob_read_only ...passed 00:07:36.276 Test: bs_load ...[2024-07-13 11:19:10.881140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:07:36.276 passed 00:07:36.276 Test: bs_load_custom_cluster_size ...passed 00:07:36.276 Test: bs_load_after_failed_grow ...passed 00:07:36.276 Test: bs_cluster_sz ...[2024-07-13 11:19:10.914055] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:07:36.276 [2024-07-13 11:19:10.914284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:07:36.276 [2024-07-13 11:19:10.914329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:07:36.276 passed 00:07:36.276 Test: bs_resize_md ...passed 00:07:36.276 Test: bs_destroy ...passed 00:07:36.276 Test: bs_type ...passed 00:07:36.276 Test: bs_super_block ...passed 00:07:36.276 Test: bs_test_recover_cluster_count ...passed 00:07:36.276 Test: bs_grow_live ...passed 00:07:36.276 Test: bs_grow_live_no_space ...passed 00:07:36.535 Test: bs_test_grow ...passed 00:07:36.535 Test: blob_serialize_test ...passed 00:07:36.535 Test: super_block_crc ...passed 00:07:36.535 Test: blob_thin_prov_write_count_io ...passed 00:07:36.535 Test: blob_thin_prov_unmap_cluster ...passed 00:07:36.535 Test: bs_load_iter_test ...passed 00:07:36.535 Test: blob_relations ...[2024-07-13 11:19:11.138117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:36.535 [2024-07-13 11:19:11.138253] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.535 [2024-07-13 11:19:11.139017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:36.535 [2024-07-13 11:19:11.139090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.535 passed 00:07:36.535 Test: blob_relations2 ...[2024-07-13 11:19:11.156630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:36.535 [2024-07-13 11:19:11.156709] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.535 [2024-07-13 11:19:11.156761] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:36.535 [2024-07-13 11:19:11.156781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.535 [2024-07-13 11:19:11.157791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:36.535 [2024-07-13 11:19:11.157837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.535 [2024-07-13 11:19:11.158175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:07:36.535 [2024-07-13 11:19:11.158214] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.535 passed 00:07:36.535 Test: blob_relations3 ...passed 00:07:36.794 Test: blobstore_clean_power_failure ...passed 00:07:36.794 Test: blob_delete_snapshot_power_failure ...[2024-07-13 11:19:11.377899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:36.794 [2024-07-13 11:19:11.395880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:36.794 [2024-07-13 11:19:11.412777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:36.794 [2024-07-13 11:19:11.412877] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:36.794 [2024-07-13 11:19:11.412923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.794 [2024-07-13 11:19:11.429373] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:36.794 [2024-07-13 11:19:11.429468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:36.794 [2024-07-13 11:19:11.429510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:36.794 [2024-07-13 11:19:11.429539] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.794 [2024-07-13 11:19:11.445996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:36.794 [2024-07-13 11:19:11.449481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:07:36.794 [2024-07-13 11:19:11.449538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:07:36.794 [2024-07-13 11:19:11.449570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.794 [2024-07-13 11:19:11.467029] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:07:36.794 [2024-07-13 11:19:11.467175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.794 [2024-07-13 11:19:11.484000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:07:36.794 [2024-07-13 11:19:11.484144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.794 [2024-07-13 11:19:11.500998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:07:36.794 [2024-07-13 11:19:11.501117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:36.794 passed 00:07:37.053 Test: blob_create_snapshot_power_failure ...[2024-07-13 11:19:11.550945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:07:37.053 [2024-07-13 11:19:11.567162] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:07:37.053 [2024-07-13 11:19:11.598999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:07:37.053 [2024-07-13 11:19:11.615193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:07:37.053 passed 00:07:37.053 Test: blob_io_unit ...passed 00:07:37.053 Test: blob_io_unit_compatibility ...passed 00:07:37.053 Test: blob_ext_md_pages ...passed 00:07:37.053 Test: blob_esnap_io_4096_4096 ...passed 00:07:37.053 Test: blob_esnap_io_512_512 ...passed 00:07:37.311 Test: blob_esnap_io_4096_512 ...passed 00:07:37.311 Test: 
blob_esnap_io_512_4096 ...passed 00:07:37.311 Test: blob_esnap_clone_resize ...passed 00:07:37.311 Suite: blob_bs_copy_extent 00:07:37.311 Test: blob_open ...passed 00:07:37.311 Test: blob_create ...[2024-07-13 11:19:11.969449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:07:37.311 passed 00:07:37.569 Test: blob_create_loop ...passed 00:07:37.569 Test: blob_create_fail ...[2024-07-13 11:19:12.098572] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:37.569 passed 00:07:37.569 Test: blob_create_internal ...passed 00:07:37.569 Test: blob_create_zero_extent ...passed 00:07:37.569 Test: blob_snapshot ...passed 00:07:37.569 Test: blob_clone ...passed 00:07:37.827 Test: blob_inflate ...[2024-07-13 11:19:12.334870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:07:37.827 passed 00:07:37.827 Test: blob_delete ...passed 00:07:37.827 Test: blob_resize_test ...[2024-07-13 11:19:12.418328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:07:37.827 passed 00:07:37.827 Test: blob_resize_thin_test ...passed 00:07:37.827 Test: channel_ops ...passed 00:07:38.086 Test: blob_super ...passed 00:07:38.086 Test: blob_rw_verify_iov ...passed 00:07:38.086 Test: blob_unmap ...passed 00:07:38.086 Test: blob_iter ...passed 00:07:38.086 Test: blob_parse_md ...passed 00:07:38.086 Test: bs_load_pending_removal ...passed 00:07:38.343 Test: bs_unload ...[2024-07-13 11:19:12.853866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:07:38.343 passed 00:07:38.343 Test: bs_usable_clusters ...passed 00:07:38.343 Test: blob_crc ...[2024-07-13 11:19:12.943496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:38.343 [2024-07-13 11:19:12.943657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:07:38.343 passed 00:07:38.343 Test: blob_flags ...passed 00:07:38.343 Test: bs_version ...passed 00:07:38.343 Test: blob_set_xattrs_test ...[2024-07-13 11:19:13.080765] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:38.343 [2024-07-13 11:19:13.080913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:07:38.601 passed 00:07:38.601 Test: blob_thin_prov_alloc ...passed 00:07:38.601 Test: blob_insert_cluster_msg_test ...passed 00:07:38.601 Test: blob_thin_prov_rw ...passed 00:07:38.859 Test: blob_thin_prov_rle ...passed 00:07:38.859 Test: blob_thin_prov_rw_iov ...passed 00:07:38.859 Test: blob_snapshot_rw ...passed 00:07:38.859 Test: blob_snapshot_rw_iov ...passed 00:07:39.117 Test: blob_inflate_rw ...passed 00:07:39.117 Test: blob_snapshot_freeze_io ...passed 00:07:39.374 Test: blob_operation_split_rw ...passed 00:07:39.375 Test: blob_operation_split_rw_iov ...passed 00:07:39.375 Test: blob_simultaneous_operations ...[2024-07-13 11:19:14.082981] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:39.375 [2024-07-13 11:19:14.083122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.375 [2024-07-13 11:19:14.083669] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:39.375 [2024-07-13 11:19:14.083714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.375 [2024-07-13 11:19:14.086434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:39.375 [2024-07-13 11:19:14.086498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.375 [2024-07-13 11:19:14.086610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:07:39.375 [2024-07-13 11:19:14.086633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:07:39.375 passed 00:07:39.633 Test: blob_persist_test ...passed 00:07:39.633 Test: blob_decouple_snapshot ...passed 00:07:39.633 Test: blob_seek_io_unit ...passed 00:07:39.633 Test: blob_nested_freezes ...passed 00:07:39.633 Test: blob_clone_resize ...passed 00:07:39.891 Test: blob_shallow_copy ...[2024-07-13 11:19:14.399284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:07:39.891 [2024-07-13 11:19:14.399628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:07:39.892 [2024-07-13 11:19:14.399885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:07:39.892 passed 00:07:39.892 Suite: blob_blob_copy_extent 00:07:39.892 Test: blob_write ...passed 00:07:39.892 Test: blob_read ...passed 00:07:39.892 Test: blob_rw_verify ...passed 00:07:39.892 Test: blob_rw_verify_iov_nomem ...passed 00:07:40.150 Test: blob_rw_iov_read_only ...passed 00:07:40.150 Test: blob_xattr ...passed 00:07:40.150 Test: blob_dirty_shutdown ...passed 00:07:40.150 Test: blob_is_degraded ...passed 00:07:40.150 Suite: blob_esnap_bs_copy_extent 00:07:40.150 Test: blob_esnap_create ...passed 00:07:40.150 Test: blob_esnap_thread_add_remove ...passed 00:07:40.408 Test: blob_esnap_clone_snapshot ...passed 00:07:40.408 Test: blob_esnap_clone_inflate ...passed 00:07:40.408 Test: blob_esnap_clone_decouple ...passed 00:07:40.408 Test: blob_esnap_clone_reload ...passed 00:07:40.408 Test: blob_esnap_hotplug ...passed 00:07:40.408 Test: blob_set_parent ...[2024-07-13 11:19:15.145365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:07:40.408 [2024-07-13 11:19:15.145490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:07:40.408 [2024-07-13 11:19:15.145654] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:07:40.408 
[2024-07-13 11:19:15.145717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:07:40.408 [2024-07-13 11:19:15.146339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:40.667 passed 00:07:40.667 Test: blob_set_external_parent ...[2024-07-13 11:19:15.196232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:07:40.667 [2024-07-13 11:19:15.196400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:07:40.667 [2024-07-13 11:19:15.196450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:07:40.667 [2024-07-13 11:19:15.196944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:07:40.667 passed 00:07:40.667 00:07:40.667 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.667 suites 16 16 n/a 0 0 00:07:40.667 tests 376 376 376 0 0 00:07:40.667 asserts 143965 143965 143965 0 n/a 00:07:40.667 00:07:40.667 Elapsed time = 16.243 seconds 00:07:40.667 11:19:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:07:40.667 00:07:40.667 00:07:40.667 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.667 http://cunit.sourceforge.net/ 00:07:40.667 00:07:40.667 00:07:40.667 Suite: blob_bdev 00:07:40.667 Test: create_bs_dev ...passed 00:07:40.667 Test: create_bs_dev_ro ...[2024-07-13 11:19:15.321663] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:07:40.667 passed 00:07:40.667 Test: create_bs_dev_rw ...passed 00:07:40.667 Test: claim_bs_dev ...[2024-07-13 11:19:15.322183] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:07:40.667 passed 00:07:40.667 Test: claim_bs_dev_ro ...passed 00:07:40.667 Test: deferred_destroy_refs ...passed 00:07:40.667 Test: deferred_destroy_channels ...passed 00:07:40.667 Test: deferred_destroy_threads ...passed 00:07:40.667 00:07:40.667 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.667 suites 1 1 n/a 0 0 00:07:40.667 tests 8 8 8 0 0 00:07:40.667 asserts 119 119 119 0 n/a 00:07:40.667 00:07:40.667 Elapsed time = 0.001 seconds 00:07:40.667 11:19:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:07:40.667 00:07:40.667 00:07:40.667 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.667 http://cunit.sourceforge.net/ 00:07:40.667 00:07:40.667 00:07:40.667 Suite: tree 00:07:40.667 Test: blobfs_tree_op_test ...passed 00:07:40.667 00:07:40.667 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.667 suites 1 1 n/a 0 0 00:07:40.667 tests 1 1 1 0 0 00:07:40.667 asserts 27 27 27 0 n/a 00:07:40.667 00:07:40.667 Elapsed time = 0.000 seconds 00:07:40.667 11:19:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:07:40.667 00:07:40.667 00:07:40.667 
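The per-binary banners and "Run Summary" tables in this log are standard CUnit output. As a rough sketch only (not code from the SPDK tree; the suite and test names below are hypothetical), a test binary that emits this kind of output registers and runs its suite roughly as follows:

#include <CUnit/Basic.h>

/* Trivial test case; suite and test names here are hypothetical. */
static void
test_example(void)
{
    CU_ASSERT_EQUAL(2 + 2, 4);
}

int
main(void)
{
    CU_pSuite suite;
    unsigned int num_failures;

    if (CU_initialize_registry() != CUE_SUCCESS) {
        return CU_get_error();
    }

    suite = CU_add_suite("example_suite", NULL, NULL);
    if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    /* CU_BRM_VERBOSE produces the per-test "...passed" lines and the
     * "Run Summary" table seen throughout this log. */
    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();

    /* Capture failures before tearing down the registry so the shell
     * harness can detect a failing suite from the exit status. */
    num_failures = CU_get_number_of_failures();
    CU_cleanup_registry();
    return num_failures;
}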
CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.667 http://cunit.sourceforge.net/ 00:07:40.668 00:07:40.668 00:07:40.668 Suite: blobfs_async_ut 00:07:40.926 Test: fs_init ...passed 00:07:40.926 Test: fs_open ...passed 00:07:40.926 Test: fs_create ...passed 00:07:40.926 Test: fs_truncate ...passed 00:07:40.926 Test: fs_rename ...[2024-07-13 11:19:15.526548] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:07:40.926 passed 00:07:40.926 Test: fs_rw_async ...passed 00:07:40.926 Test: fs_writev_readv_async ...passed 00:07:40.926 Test: tree_find_buffer_ut ...passed 00:07:40.926 Test: channel_ops ...passed 00:07:40.926 Test: channel_ops_sync ...passed 00:07:40.926 00:07:40.926 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.926 suites 1 1 n/a 0 0 00:07:40.926 tests 10 10 10 0 0 00:07:40.926 asserts 292 292 292 0 n/a 00:07:40.926 00:07:40.926 Elapsed time = 0.187 seconds 00:07:40.926 11:19:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:07:40.926 00:07:40.926 00:07:40.926 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.926 http://cunit.sourceforge.net/ 00:07:40.926 00:07:40.926 00:07:40.926 Suite: blobfs_sync_ut 00:07:41.185 Test: cache_read_after_write ...[2024-07-13 11:19:15.707119] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:07:41.185 passed 00:07:41.185 Test: file_length ...passed 00:07:41.185 Test: append_write_to_extend_blob ...passed 00:07:41.185 Test: partial_buffer ...passed 00:07:41.185 Test: cache_write_null_buffer ...passed 00:07:41.185 Test: fs_create_sync ...passed 00:07:41.185 Test: fs_rename_sync ...passed 00:07:41.185 Test: cache_append_no_cache ...passed 00:07:41.185 Test: fs_delete_file_without_close ...passed 00:07:41.185 00:07:41.185 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.185 suites 1 1 n/a 0 0 00:07:41.185 tests 9 9 9 0 0 00:07:41.185 asserts 345 345 345 0 n/a 00:07:41.185 00:07:41.185 Elapsed time = 0.454 seconds 00:07:41.185 11:19:15 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:07:41.444 00:07:41.444 00:07:41.444 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.444 http://cunit.sourceforge.net/ 00:07:41.444 00:07:41.444 00:07:41.444 Suite: blobfs_bdev_ut 00:07:41.444 Test: spdk_blobfs_bdev_detect_test ...[2024-07-13 11:19:15.932768] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:41.444 passed 00:07:41.444 Test: spdk_blobfs_bdev_create_test ...[2024-07-13 11:19:15.933146] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:07:41.444 passed 00:07:41.444 Test: spdk_blobfs_bdev_mount_test ...passed 00:07:41.444 00:07:41.444 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.444 suites 1 1 n/a 0 0 00:07:41.444 tests 3 3 3 0 0 00:07:41.444 asserts 9 9 9 0 n/a 00:07:41.444 00:07:41.444 Elapsed time = 0.001 seconds 00:07:41.444 00:07:41.444 real 0m17.034s 00:07:41.444 user 0m16.447s 00:07:41.444 sys 0m0.805s 00:07:41.444 11:19:15 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.444 11:19:15 
unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.444 ************************************ 00:07:41.444 END TEST unittest_blob_blobfs 00:07:41.444 ************************************ 00:07:41.444 11:19:15 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:41.444 11:19:15 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:07:41.444 11:19:15 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.444 11:19:15 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.444 11:19:15 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:41.444 ************************************ 00:07:41.444 START TEST unittest_event 00:07:41.444 ************************************ 00:07:41.444 11:19:16 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:07:41.444 11:19:16 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:07:41.444 00:07:41.444 00:07:41.444 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.444 http://cunit.sourceforge.net/ 00:07:41.444 00:07:41.444 00:07:41.444 Suite: app_suite 00:07:41.444 Test: test_spdk_app_parse_args ...app_ut [options] 00:07:41.444 00:07:41.444 CPU options: 00:07:41.444 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:41.444 (like [0,1,10]) 00:07:41.444 --lcores lcore to CPU mapping list. The list is in the format: 00:07:41.444 [<,lcores[@CPUs]>...] 00:07:41.444 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:41.444 Within the group, '-' is used for range separator, 00:07:41.444 ',' is used for single number separator. 00:07:41.444 '( )' can be omitted for single element group, 00:07:41.444 '@' can be omitted if cpus and lcores have the same value 00:07:41.444 --disable-cpumask-locks Disable CPU core lock files. 00:07:41.444 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:41.444 pollers in the app support interrupt mode) 00:07:41.444 -p, --main-core main (primary) core for DPDK 00:07:41.444 00:07:41.444 Configuration options: 00:07:41.444 -c, --config, --json JSON config file 00:07:41.444 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:41.444 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:41.444 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:41.444 --rpcs-allowed comma-separated list of permitted RPCS 00:07:41.444 --json-ignore-init-errors don't exit on invalid config entry 00:07:41.444 00:07:41.444 Memory options: 00:07:41.444 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:41.444 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000)app_ut: invalid option -- 'z' 00:07:41.444 00:07:41.444 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:41.444 -R, --huge-unlink unlink huge files after initialization 00:07:41.444 -n, --mem-channels number of memory channels used for DPDK 00:07:41.444 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:41.444 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:41.444 --no-huge run without using hugepages 00:07:41.445 -i, --shm-id shared memory ID (optional) 00:07:41.445 -g, --single-file-segments force creating just one hugetlbfs file 00:07:41.445 00:07:41.445 PCI options: 00:07:41.445 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:41.445 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:41.445 -u, --no-pci disable PCI access 00:07:41.445 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:41.445 00:07:41.445 Log options: 00:07:41.445 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:41.445 --silence-noticelog disable notice level logging to stderr 00:07:41.445 00:07:41.445 Trace options: 00:07:41.445 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:41.445 setting 0 to disable trace (default 32768) 00:07:41.445 Tracepoints vary in size and can use more than one trace entry. 00:07:41.445 -e, --tpoint-group [:] 00:07:41.445 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:41.445 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:41.445 a tracepoint group. First tpoint inside a group can be enabled by 00:07:41.445 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:41.445 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:41.445 in /include/spdk_internal/trace_defs.h 00:07:41.445 00:07:41.445 Other options: 00:07:41.445 -h, --help show this usage 00:07:41.445 -v, --version print SPDK version 00:07:41.445 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:41.445 --env-context Opaque context for use of the env implementation 00:07:41.445 app_ut [options] 00:07:41.445 00:07:41.445 CPU options: 00:07:41.445 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:41.445 (like [0,1,10]) 00:07:41.445 --lcores lcore to CPU mapping list. The list is in the format: 00:07:41.445 [<,lcores[@CPUs]>...] 00:07:41.445 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:41.445 Within the group, '-' is used for range separator, 00:07:41.445 ',' is used for single number separator. 00:07:41.445 '( )' can be omitted for single element group, 00:07:41.445 '@' can be omitted if cpus and lcores have the same value 00:07:41.445 --disable-cpumask-locks Disable CPU core lock files. 
00:07:41.445 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:41.445 pollers in the app support interrupt mode) 00:07:41.445 -p, --main-core main (primary) core for DPDK 00:07:41.445 00:07:41.445 Configuration options: 00:07:41.445 -c, --config, --json JSON config file 00:07:41.445 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:41.445 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:41.445 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:41.445 --rpcs-allowed comma-separated list of permitted RPCS 00:07:41.445 --json-ignore-init-errors don't exit on invalid config entry 00:07:41.445 00:07:41.445 Memory options: 00:07:41.445 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:41.445 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:41.445 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:41.445 -R, --huge-unlink unlink huge files after initialization 00:07:41.445 -n, --mem-channels number of memory channels used for DPDK 00:07:41.445 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:41.445 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:41.445 --no-huge run without using hugepages 00:07:41.445 -i, --shm-id shared memory ID (optional) 00:07:41.445 -g, --single-file-segments force creating just one hugetlbfs file 00:07:41.445 00:07:41.445 PCI options: 00:07:41.445 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:41.445 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:41.445 -u, --no-pci disable PCI access 00:07:41.445 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:41.445 00:07:41.445 Log options: 00:07:41.445 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:41.445 --silence-noticelog disable notice level logging to stderr 00:07:41.445 00:07:41.445 Trace options: 00:07:41.445 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:41.445 setting 0 to disable trace (default 32768) 00:07:41.445 Tracepoints vary in size and can use more than one trace entry. 00:07:41.445 -e, --tpoint-group [:] 00:07:41.445 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:41.445 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:41.445 a tracepoint group. First tpoint inside a group can be enabled by 00:07:41.445 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:41.445 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:41.445 in /include/spdk_internal/trace_defs.h 00:07:41.445 00:07:41.445 Other options: 00:07:41.445 -h, --help show this usage 00:07:41.445 -v, --version print SPDK version 00:07:41.445 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:41.445 --env-context Opaque context for use of the env implementation 00:07:41.445 app_ut: unrecognized option '--test-long-opt' 00:07:41.445 [2024-07-13 11:19:16.020028] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1191:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
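The usage dumps and option-parsing errors around this point come from app_ut driving spdk_app_parse_args() with deliberately bad command lines. Below is a minimal, hedged sketch of how an application normally wires up that call; it assumes the public spdk/event.h API (two-argument spdk_app_opts_init(), the SPDK_APP_PARSE_ARGS_SUCCESS return code), and the application name and callbacks are illustrative only, not taken from this test:

#include <errno.h>
#include <stdio.h>

#include "spdk/event.h"

/* Hypothetical app-specific option parser; with an empty getopt string
 * below it is never invoked. */
static int
hello_parse_arg(int ch, char *arg)
{
    (void)ch;
    (void)arg;
    return -EINVAL;
}

/* Hypothetical app-specific usage callback, appended to the generic usage text. */
static void
hello_usage(void)
{
    printf(" (no app-specific options)\n");
}

/* Hypothetical start callback: runs once the framework is up. */
static void
hello_start(void *ctx)
{
    (void)ctx;
    printf("app started\n");
    spdk_app_stop(0);
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};
    int rc;

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "hello_app"; /* hypothetical application name */

    /* Generic options (-c, -m, -B, ...) are parsed here; an app-specific
     * getopt string that reuses one of them is what triggers the
     * "Duplicated option" error asserted above. */
    rc = spdk_app_parse_args(argc, argv, &opts, "", NULL, hello_parse_arg, hello_usage);
    if (rc != SPDK_APP_PARSE_ARGS_SUCCESS) {
        return rc;
    }

    rc = spdk_app_start(&opts, hello_start, NULL);
    spdk_app_fini();
    return rc;
}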
00:07:41.445 [2024-07-13 11:19:16.020290] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1372:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:07:41.445 app_ut [options] 00:07:41.445 00:07:41.445 CPU options: 00:07:41.445 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:41.445 (like [0,1,10]) 00:07:41.445 --lcores lcore to CPU mapping list. The list is in the format: 00:07:41.445 [<,lcores[@CPUs]>...] 00:07:41.445 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:41.445 Within the group, '-' is used for range separator, 00:07:41.445 ',' is used for single number separator. 00:07:41.445 '( )' can be omitted for single element group, 00:07:41.445 '@' can be omitted if cpus and lcores have the same value 00:07:41.445 --disable-cpumask-locks Disable CPU core lock files. 00:07:41.445 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:41.445 pollers in the app support interrupt mode) 00:07:41.445 -p, --main-core main (primary) core for DPDK 00:07:41.445 00:07:41.445 Configuration options: 00:07:41.445 -c, --config, --json JSON config file 00:07:41.445 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:41.445 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:07:41.445 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:41.445 --rpcs-allowed comma-separated list of permitted RPCS 00:07:41.445 --json-ignore-init-errors don't exit on invalid config entry 00:07:41.445 00:07:41.445 Memory options: 00:07:41.445 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:41.445 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:41.445 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:41.445 -R, --huge-unlink unlink huge files after initialization 00:07:41.445 -n, --mem-channels number of memory channels used for DPDK 00:07:41.445 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:41.445 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:41.445 --no-huge run without using hugepages 00:07:41.445 -i, --shm-id shared memory ID (optional) 00:07:41.445 -g, --single-file-segments force creating just one hugetlbfs file 00:07:41.445 00:07:41.445 PCI options: 00:07:41.445 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:41.445 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:41.445 -u, --no-pci disable PCI access 00:07:41.445 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:41.445 00:07:41.445 Log options: 00:07:41.445 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:07:41.445 --silence-noticelog disable notice level logging to stderr 00:07:41.445 00:07:41.445 Trace options: 00:07:41.445 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:41.445 setting 0 to disable trace (default 32768) 00:07:41.445 Tracepoints vary in size and can use more than one trace entry. 00:07:41.445 -e, --tpoint-group [:] 00:07:41.445 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:07:41.445 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:41.445 a tracepoint group. First tpoint inside a group can be enabled by 00:07:41.445 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:07:41.445 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:07:41.445 in /include/spdk_internal/trace_defs.h 00:07:41.445 00:07:41.445 Other options: 00:07:41.445 -h, --help show this usage 00:07:41.445 -v, --version print SPDK version 00:07:41.445 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:41.445 --env-context Opaque context for use of the env implementation 00:07:41.445 passed 00:07:41.445 00:07:41.445 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.445 suites 1 1 n/a 0 0 00:07:41.445 tests 1 1 1 0 0 00:07:41.445 asserts 8 8 8 0 n/a 00:07:41.445 00:07:41.445 Elapsed time = 0.001 seconds 00:07:41.445 [2024-07-13 11:19:16.020499] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1277:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:07:41.446 11:19:16 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:07:41.446 00:07:41.446 00:07:41.446 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.446 http://cunit.sourceforge.net/ 00:07:41.446 00:07:41.446 00:07:41.446 Suite: app_suite 00:07:41.446 Test: test_create_reactor ...passed 00:07:41.446 Test: test_init_reactors ...passed 00:07:41.446 Test: test_event_call ...passed 00:07:41.446 Test: test_schedule_thread ...passed 00:07:41.446 Test: test_reschedule_thread ...passed 00:07:41.446 Test: test_bind_thread ...passed 00:07:41.446 Test: test_for_each_reactor ...passed 00:07:41.446 Test: test_reactor_stats ...passed 00:07:41.446 Test: test_scheduler ...passed 00:07:41.446 Test: test_governor ...passed 00:07:41.446 00:07:41.446 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.446 suites 1 1 n/a 0 0 00:07:41.446 tests 10 10 10 0 0 00:07:41.446 asserts 344 344 344 0 n/a 00:07:41.446 00:07:41.446 Elapsed time = 0.014 seconds 00:07:41.446 00:07:41.446 real 0m0.085s 00:07:41.446 user 0m0.039s 00:07:41.446 sys 0m0.046s 00:07:41.446 11:19:16 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.446 11:19:16 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:07:41.446 ************************************ 00:07:41.446 END TEST unittest_event 00:07:41.446 ************************************ 00:07:41.446 11:19:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:41.446 11:19:16 unittest -- unit/unittest.sh@235 -- # uname -s 00:07:41.446 11:19:16 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:07:41.446 11:19:16 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:07:41.446 11:19:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:41.446 11:19:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.446 11:19:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:41.446 ************************************ 00:07:41.446 START TEST unittest_ftl 00:07:41.446 ************************************ 00:07:41.446 11:19:16 unittest.unittest_ftl -- common/autotest_common.sh@1123 -- # unittest_ftl 00:07:41.446 11:19:16 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:07:41.446 00:07:41.446 00:07:41.446 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.446 http://cunit.sourceforge.net/ 00:07:41.446 00:07:41.446 00:07:41.446 Suite: ftl_band_suite 00:07:41.704 Test: test_band_block_offset_from_addr_base ...passed 00:07:41.704 Test: 
test_band_block_offset_from_addr_offset ...passed 00:07:41.704 Test: test_band_addr_from_block_offset ...passed 00:07:41.704 Test: test_band_set_addr ...passed 00:07:41.704 Test: test_invalidate_addr ...passed 00:07:41.704 Test: test_next_xfer_addr ...passed 00:07:41.704 00:07:41.704 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.704 suites 1 1 n/a 0 0 00:07:41.704 tests 6 6 6 0 0 00:07:41.704 asserts 30356 30356 30356 0 n/a 00:07:41.704 00:07:41.704 Elapsed time = 0.179 seconds 00:07:41.704 11:19:16 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:07:41.704 00:07:41.704 00:07:41.704 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.704 http://cunit.sourceforge.net/ 00:07:41.704 00:07:41.704 00:07:41.704 Suite: ftl_bitmap 00:07:41.704 Test: test_ftl_bitmap_create ...[2024-07-13 11:19:16.420542] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:07:41.704 passed 00:07:41.704 Test: test_ftl_bitmap_get ...[2024-07-13 11:19:16.420764] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:07:41.704 passed 00:07:41.704 Test: test_ftl_bitmap_set ...passed 00:07:41.704 Test: test_ftl_bitmap_clear ...passed 00:07:41.704 Test: test_ftl_bitmap_find_first_set ...passed 00:07:41.704 Test: test_ftl_bitmap_find_first_clear ...passed 00:07:41.704 Test: test_ftl_bitmap_count_set ...passed 00:07:41.704 00:07:41.704 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.704 suites 1 1 n/a 0 0 00:07:41.704 tests 7 7 7 0 0 00:07:41.704 asserts 137 137 137 0 n/a 00:07:41.704 00:07:41.704 Elapsed time = 0.001 seconds 00:07:41.704 11:19:16 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:07:41.963 00:07:41.963 00:07:41.963 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.963 http://cunit.sourceforge.net/ 00:07:41.963 00:07:41.963 00:07:41.963 Suite: ftl_io_suite 00:07:41.963 Test: test_completion ...passed 00:07:41.963 Test: test_multiple_ios ...passed 00:07:41.963 00:07:41.963 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.963 suites 1 1 n/a 0 0 00:07:41.963 tests 2 2 2 0 0 00:07:41.963 asserts 47 47 47 0 n/a 00:07:41.963 00:07:41.963 Elapsed time = 0.003 seconds 00:07:41.963 11:19:16 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:07:41.963 00:07:41.963 00:07:41.963 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.963 http://cunit.sourceforge.net/ 00:07:41.963 00:07:41.963 00:07:41.963 Suite: ftl_mngt 00:07:41.963 Test: test_next_step ...passed 00:07:41.963 Test: test_continue_step ...passed 00:07:41.963 Test: test_get_func_and_step_cntx_alloc ...passed 00:07:41.963 Test: test_fail_step ...passed 00:07:41.963 Test: test_mngt_call_and_call_rollback ...passed 00:07:41.963 Test: test_nested_process_failure ...passed 00:07:41.963 Test: test_call_init_success ...passed 00:07:41.963 Test: test_call_init_failure ...passed 00:07:41.963 00:07:41.963 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.963 suites 1 1 n/a 0 0 00:07:41.963 tests 8 8 8 0 0 00:07:41.963 asserts 196 196 196 0 n/a 00:07:41.963 00:07:41.963 Elapsed time = 0.001 seconds 00:07:41.963 11:19:16 unittest.unittest_ftl -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:07:41.963 00:07:41.963 00:07:41.963 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.963 http://cunit.sourceforge.net/ 00:07:41.963 00:07:41.963 00:07:41.963 Suite: ftl_mempool 00:07:41.963 Test: test_ftl_mempool_create ...passed 00:07:41.963 Test: test_ftl_mempool_get_put ...passed 00:07:41.963 00:07:41.963 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.963 suites 1 1 n/a 0 0 00:07:41.963 tests 2 2 2 0 0 00:07:41.963 asserts 36 36 36 0 n/a 00:07:41.963 00:07:41.963 Elapsed time = 0.000 seconds 00:07:41.963 11:19:16 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:07:41.963 00:07:41.963 00:07:41.963 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.963 http://cunit.sourceforge.net/ 00:07:41.963 00:07:41.963 00:07:41.963 Suite: ftl_addr64_suite 00:07:41.963 Test: test_addr_cached ...passed 00:07:41.963 00:07:41.963 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.963 suites 1 1 n/a 0 0 00:07:41.963 tests 1 1 1 0 0 00:07:41.963 asserts 1536 1536 1536 0 n/a 00:07:41.963 00:07:41.963 Elapsed time = 0.000 seconds 00:07:41.963 11:19:16 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:07:41.963 00:07:41.963 00:07:41.963 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.963 http://cunit.sourceforge.net/ 00:07:41.963 00:07:41.963 00:07:41.963 Suite: ftl_sb 00:07:41.963 Test: test_sb_crc_v2 ...passed 00:07:41.963 Test: test_sb_crc_v3 ...passed 00:07:41.963 Test: test_sb_v3_md_layout ...[2024-07-13 11:19:16.581299] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:07:41.963 [2024-07-13 11:19:16.581652] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:41.963 [2024-07-13 11:19:16.581709] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:41.963 [2024-07-13 11:19:16.581742] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:07:41.963 [2024-07-13 11:19:16.581768] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:41.963 [2024-07-13 11:19:16.581848] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:07:41.963 [2024-07-13 11:19:16.581883] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:41.963 [2024-07-13 11:19:16.581927] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:07:41.964 [2024-07-13 11:19:16.582003] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:07:41.964 [2024-07-13 11:19:16.582036] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 
00:07:41.964 [2024-07-13 11:19:16.582073] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:07:41.964 passed 00:07:41.964 Test: test_sb_v5_md_layout ...passed 00:07:41.964 00:07:41.964 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.964 suites 1 1 n/a 0 0 00:07:41.964 tests 4 4 4 0 0 00:07:41.964 asserts 160 160 160 0 n/a 00:07:41.964 00:07:41.964 Elapsed time = 0.002 seconds 00:07:41.964 11:19:16 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:07:41.964 00:07:41.964 00:07:41.964 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.964 http://cunit.sourceforge.net/ 00:07:41.964 00:07:41.964 00:07:41.964 Suite: ftl_layout_upgrade 00:07:41.964 Test: test_l2p_upgrade ...passed 00:07:41.964 00:07:41.964 Run Summary: Type Total Ran Passed Failed Inactive 00:07:41.964 suites 1 1 n/a 0 0 00:07:41.964 tests 1 1 1 0 0 00:07:41.964 asserts 152 152 152 0 n/a 00:07:41.964 00:07:41.964 Elapsed time = 0.000 seconds 00:07:41.964 11:19:16 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:07:41.964 00:07:41.964 00:07:41.964 CUnit - A unit testing framework for C - Version 2.1-3 00:07:41.964 http://cunit.sourceforge.net/ 00:07:41.964 00:07:41.964 00:07:41.964 Suite: ftl_p2l_suite 00:07:41.964 Test: test_p2l_num_pages ...passed 00:07:42.530 Test: test_ckpt_issue ...passed 00:07:42.789 Test: test_persist_band_p2l ...passed 00:07:43.356 Test: test_clean_restore_p2l ...passed 00:07:44.292 Test: test_dirty_restore_p2l ...passed 00:07:44.292 00:07:44.292 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.292 suites 1 1 n/a 0 0 00:07:44.292 tests 5 5 5 0 0 00:07:44.292 asserts 10020 10020 10020 0 n/a 00:07:44.292 00:07:44.292 Elapsed time = 2.140 seconds 00:07:44.292 00:07:44.292 real 0m2.666s 00:07:44.292 user 0m0.921s 00:07:44.292 sys 0m1.746s 00:07:44.292 ************************************ 00:07:44.292 END TEST unittest_ftl 00:07:44.292 ************************************ 00:07:44.292 11:19:18 unittest.unittest_ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.292 11:19:18 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:07:44.292 11:19:18 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:44.292 11:19:18 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:44.292 11:19:18 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.292 11:19:18 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.292 11:19:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:44.292 ************************************ 00:07:44.292 START TEST unittest_accel 00:07:44.292 ************************************ 00:07:44.292 11:19:18 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:07:44.292 00:07:44.292 00:07:44.292 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.292 http://cunit.sourceforge.net/ 00:07:44.292 00:07:44.292 00:07:44.292 Suite: accel_sequence 00:07:44.292 Test: test_sequence_fill_copy ...passed 00:07:44.292 Test: test_sequence_abort ...passed 00:07:44.292 Test: test_sequence_append_error ...passed 00:07:44.292 Test: test_sequence_completion_error 
...[2024-07-13 11:19:18.893400] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1945:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fef671427c0 00:07:44.292 [2024-07-13 11:19:18.893847] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1945:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fef671427c0 00:07:44.292 [2024-07-13 11:19:18.893958] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1855:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fef671427c0 00:07:44.292 [2024-07-13 11:19:18.894035] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1855:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fef671427c0 00:07:44.292 passed 00:07:44.292 Test: test_sequence_decompress ...passed 00:07:44.292 Test: test_sequence_reverse ...passed 00:07:44.292 Test: test_sequence_copy_elision ...passed 00:07:44.292 Test: test_sequence_accel_buffers ...passed 00:07:44.292 Test: test_sequence_memory_domain ...[2024-07-13 11:19:18.907111] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1747:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:07:44.292 [2024-07-13 11:19:18.907341] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1786:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:07:44.292 passed 00:07:44.292 Test: test_sequence_module_memory_domain ...passed 00:07:44.292 Test: test_sequence_crypto ...passed 00:07:44.292 Test: test_sequence_driver ...[2024-07-13 11:19:18.914748] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1894:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fef6622f7c0 using driver: ut 00:07:44.292 [2024-07-13 11:19:18.914903] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1958:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fef6622f7c0 through driver: ut 00:07:44.292 passed 00:07:44.292 Test: test_sequence_same_iovs ...passed 00:07:44.292 Test: test_sequence_crc32 ...passed 00:07:44.292 Suite: accel 00:07:44.292 Test: test_spdk_accel_task_complete ...passed 00:07:44.292 Test: test_get_task ...passed 00:07:44.292 Test: test_spdk_accel_submit_copy ...passed 00:07:44.292 Test: test_spdk_accel_submit_dualcast ...[2024-07-13 11:19:18.920826] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:44.292 passed 00:07:44.292 Test: test_spdk_accel_submit_compare ...passed 00:07:44.292 Test: test_spdk_accel_submit_fill ...passed 00:07:44.292 Test: test_spdk_accel_submit_crc32c ...passed 00:07:44.292 Test: test_spdk_accel_submit_crc32cv ...[2024-07-13 11:19:18.920891] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:07:44.292 passed 00:07:44.292 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:07:44.292 Test: test_spdk_accel_submit_xor ...passed 00:07:44.292 Test: test_spdk_accel_module_find_by_name ...passed 00:07:44.292 Test: test_spdk_accel_module_register ...passed 00:07:44.292 00:07:44.292 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.292 suites 2 2 n/a 0 0 00:07:44.292 tests 26 26 26 0 0 00:07:44.292 asserts 830 830 830 0 n/a 00:07:44.292 00:07:44.292 Elapsed time = 0.040 seconds 00:07:44.292 00:07:44.292 real 0m0.085s 00:07:44.292 user 0m0.040s 00:07:44.292 sys 0m0.045s 00:07:44.292 11:19:18 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:07:44.292 11:19:18 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:07:44.292 ************************************ 00:07:44.292 END TEST unittest_accel 00:07:44.292 ************************************ 00:07:44.292 11:19:18 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:44.292 11:19:18 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:44.292 11:19:18 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.292 11:19:18 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.292 11:19:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:44.292 ************************************ 00:07:44.292 START TEST unittest_ioat 00:07:44.292 ************************************ 00:07:44.292 11:19:18 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:07:44.292 00:07:44.292 00:07:44.292 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.292 http://cunit.sourceforge.net/ 00:07:44.292 00:07:44.292 00:07:44.292 Suite: ioat 00:07:44.292 Test: ioat_state_check ...passed 00:07:44.292 00:07:44.292 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.292 suites 1 1 n/a 0 0 00:07:44.292 tests 1 1 1 0 0 00:07:44.292 asserts 32 32 32 0 n/a 00:07:44.292 00:07:44.292 Elapsed time = 0.000 seconds 00:07:44.292 00:07:44.292 real 0m0.032s 00:07:44.292 user 0m0.016s 00:07:44.292 sys 0m0.016s 00:07:44.292 11:19:19 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.292 11:19:19 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:07:44.292 ************************************ 00:07:44.292 END TEST unittest_ioat 00:07:44.292 ************************************ 00:07:44.551 11:19:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:44.551 11:19:19 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:44.551 11:19:19 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:44.551 11:19:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.551 11:19:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.551 11:19:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:44.551 ************************************ 00:07:44.551 START TEST unittest_idxd_user 00:07:44.551 ************************************ 00:07:44.551 11:19:19 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:07:44.551 00:07:44.551 00:07:44.551 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.551 http://cunit.sourceforge.net/ 00:07:44.551 00:07:44.551 00:07:44.551 Suite: idxd_user 00:07:44.551 Test: test_idxd_wait_cmd ...[2024-07-13 11:19:19.095829] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:07:44.551 [2024-07-13 11:19:19.096730] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:07:44.551 passed 00:07:44.551 Test: test_idxd_reset_dev ...[2024-07-13 11:19:19.097033] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 
00:07:44.551 [2024-07-13 11:19:19.097204] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:07:44.551 passed 00:07:44.551 Test: test_idxd_group_config ...passed 00:07:44.551 Test: test_idxd_wq_config ...passed 00:07:44.551 00:07:44.551 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.551 suites 1 1 n/a 0 0 00:07:44.551 tests 4 4 4 0 0 00:07:44.551 asserts 20 20 20 0 n/a 00:07:44.551 00:07:44.551 Elapsed time = 0.001 seconds 00:07:44.551 00:07:44.551 real 0m0.031s 00:07:44.551 user 0m0.020s 00:07:44.551 sys 0m0.010s 00:07:44.551 11:19:19 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.551 ************************************ 00:07:44.552 END TEST unittest_idxd_user 00:07:44.552 ************************************ 00:07:44.552 11:19:19 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:07:44.552 11:19:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:44.552 11:19:19 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:07:44.552 11:19:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.552 11:19:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.552 11:19:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:44.552 ************************************ 00:07:44.552 START TEST unittest_iscsi 00:07:44.552 ************************************ 00:07:44.552 11:19:19 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:07:44.552 11:19:19 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:07:44.552 00:07:44.552 00:07:44.552 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.552 http://cunit.sourceforge.net/ 00:07:44.552 00:07:44.552 00:07:44.552 Suite: conn_suite 00:07:44.552 Test: read_task_split_in_order_case ...passed 00:07:44.552 Test: read_task_split_reverse_order_case ...passed 00:07:44.552 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:07:44.552 Test: process_non_read_task_completion_test ...passed 00:07:44.552 Test: free_tasks_on_connection ...passed 00:07:44.552 Test: free_tasks_with_queued_datain ...passed 00:07:44.552 Test: abort_queued_datain_task_test ...passed 00:07:44.552 Test: abort_queued_datain_tasks_test ...passed 00:07:44.552 00:07:44.552 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.552 suites 1 1 n/a 0 0 00:07:44.552 tests 8 8 8 0 0 00:07:44.552 asserts 230 230 230 0 n/a 00:07:44.552 00:07:44.552 Elapsed time = 0.000 seconds 00:07:44.552 11:19:19 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:07:44.552 00:07:44.552 00:07:44.552 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.552 http://cunit.sourceforge.net/ 00:07:44.552 00:07:44.552 00:07:44.552 Suite: iscsi_suite 00:07:44.552 Test: param_negotiation_test ...passed 00:07:44.552 Test: list_negotiation_test ...passed 00:07:44.552 Test: parse_valid_test ...passed 00:07:44.552 Test: parse_invalid_test ...[2024-07-13 11:19:19.223135] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:44.552 [2024-07-13 11:19:19.223471] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:07:44.552 [2024-07-13 11:19:19.223524] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 
207:iscsi_parse_param: *ERROR*: Empty key 00:07:44.552 [2024-07-13 11:19:19.223609] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:07:44.552 [2024-07-13 11:19:19.223775] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:07:44.552 [2024-07-13 11:19:19.223834] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:07:44.552 passed 00:07:44.552 00:07:44.552 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.552 suites 1 1 n/a 0 0 00:07:44.552 tests 4 4 4 0 0 00:07:44.552 asserts 161 161 161 0 n/a 00:07:44.552 00:07:44.552 Elapsed time = 0.005 seconds 00:07:44.552 [2024-07-13 11:19:19.223982] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:07:44.552 11:19:19 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:07:44.552 00:07:44.552 00:07:44.552 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.552 http://cunit.sourceforge.net/ 00:07:44.552 00:07:44.552 00:07:44.552 Suite: iscsi_target_node_suite 00:07:44.552 Test: add_lun_test_cases ...[2024-07-13 11:19:19.258128] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:07:44.552 [2024-07-13 11:19:19.258499] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:07:44.552 [2024-07-13 11:19:19.258609] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:44.552 [2024-07-13 11:19:19.258650] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:07:44.552 [2024-07-13 11:19:19.258677] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:07:44.552 passed 00:07:44.552 Test: allow_any_allowed ...passed 00:07:44.552 Test: allow_ipv6_allowed ...passed 00:07:44.552 Test: allow_ipv6_denied ...passed 00:07:44.552 Test: allow_ipv6_invalid ...passed 00:07:44.552 Test: allow_ipv4_allowed ...passed 00:07:44.552 Test: allow_ipv4_denied ...passed 00:07:44.552 Test: allow_ipv4_invalid ...passed 00:07:44.552 Test: node_access_allowed ...passed 00:07:44.552 Test: node_access_denied_by_empty_netmask ...passed 00:07:44.552 Test: node_access_multi_initiator_groups_cases ...passed 00:07:44.552 Test: allow_iscsi_name_multi_maps_case ...passed 00:07:44.552 Test: chap_param_test_cases ...[2024-07-13 11:19:19.259210] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:07:44.552 [2024-07-13 11:19:19.259262] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:07:44.552 passed 00:07:44.552 00:07:44.552 [2024-07-13 11:19:19.259353] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:07:44.552 [2024-07-13 11:19:19.259381] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:07:44.552 [2024-07-13 11:19:19.259412] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:07:44.552 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.552 suites 1 1 n/a 0 0 00:07:44.552 tests 13 13 13 0 0 00:07:44.552 asserts 50 50 50 0 n/a 00:07:44.552 00:07:44.552 Elapsed time = 0.001 seconds 00:07:44.552 11:19:19 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:07:44.811 00:07:44.811 00:07:44.811 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.811 http://cunit.sourceforge.net/ 00:07:44.811 00:07:44.811 00:07:44.811 Suite: iscsi_suite 00:07:44.811 Test: op_login_check_target_test ...[2024-07-13 11:19:19.295328] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:07:44.811 passed 00:07:44.811 Test: op_login_session_normal_test ...[2024-07-13 11:19:19.295881] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:44.811 [2024-07-13 11:19:19.295951] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:44.811 [2024-07-13 11:19:19.295993] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:07:44.811 [2024-07-13 11:19:19.296047] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:07:44.811 [2024-07-13 11:19:19.296163] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:44.811 [2024-07-13 11:19:19.296270] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:07:44.811 passed 00:07:44.811 Test: maxburstlength_test ...[2024-07-13 11:19:19.296338] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:07:44.811 [2024-07-13 11:19:19.296644] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:44.811 passed 00:07:44.811 Test: underflow_for_read_transfer_test ...[2024-07-13 11:19:19.296716] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:07:44.811 passed 00:07:44.811 Test: underflow_for_zero_read_transfer_test ...passed 00:07:44.811 Test: underflow_for_request_sense_test ...passed 00:07:44.811 Test: underflow_for_check_condition_test ...passed 00:07:44.811 Test: add_transfer_task_test ...passed 00:07:44.811 Test: get_transfer_task_test ...passed 00:07:44.811 Test: del_transfer_task_test ...passed 00:07:44.811 Test: clear_all_transfer_tasks_test ...passed 00:07:44.811 Test: build_iovs_test ...passed 00:07:44.811 Test: build_iovs_with_md_test ...passed 00:07:44.811 Test: pdu_hdr_op_login_test ...[2024-07-13 11:19:19.298428] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:07:44.812 [2024-07-13 11:19:19.298577] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:07:44.812 [2024-07-13 11:19:19.298668] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:07:44.812 passed 00:07:44.812 Test: pdu_hdr_op_text_test ...[2024-07-13 11:19:19.298789] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:44.812 [2024-07-13 11:19:19.298901] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:07:44.812 [2024-07-13 11:19:19.298953] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:07:44.812 passed 00:07:44.812 Test: pdu_hdr_op_logout_test ...[2024-07-13 11:19:19.299091] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:07:44.812 passed 00:07:44.812 Test: pdu_hdr_op_scsi_test ...[2024-07-13 11:19:19.299296] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:44.812 [2024-07-13 11:19:19.299341] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:07:44.812 [2024-07-13 11:19:19.299388] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:07:44.812 [2024-07-13 11:19:19.299503] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:07:44.812 [2024-07-13 11:19:19.299588] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:07:44.812 [2024-07-13 11:19:19.299765] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:44.812 passed 00:07:44.812 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-13 11:19:19.299872] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:07:44.812 [2024-07-13 11:19:19.299948] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:07:44.812 passed 00:07:44.812 Test: pdu_hdr_op_nopout_test ...[2024-07-13 11:19:19.300201] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:07:44.812 [2024-07-13 11:19:19.300289] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:44.812 passed 00:07:44.812 Test: pdu_hdr_op_data_test ...[2024-07-13 11:19:19.300316] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:07:44.812 [2024-07-13 11:19:19.300346] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:07:44.812 [2024-07-13 11:19:19.300382] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:07:44.812 [2024-07-13 11:19:19.300440] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:44.812 
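The iSCSI suites in this run are largely negative tests: each feeds a deliberately malformed parameter or PDU and asserts that the parser reports the *ERROR* lines captured here instead of crashing. A hedged sketch of that general pattern follows; parse_key_value() and its 63-byte key limit are hypothetical stand-ins rather than SPDK functions, and the test function would be registered in a CUnit suite exactly like the earlier sketch:

#include <string.h>

#include <CUnit/CUnit.h>

#define MAX_KEY_LEN 63  /* hypothetical limit, mirroring the 63-byte key check exercised above */

/* Hypothetical stand-in for a parser under test (not an SPDK function):
 * rejects NULL input and over-long keys, returning 0 on success, -1 on error. */
static int
parse_key_value(const char *key, const char *value)
{
    if (key == NULL || value == NULL || strlen(key) > MAX_KEY_LEN) {
        return -1;
    }
    return 0;
}

/* Negative test: feed deliberately bad input and assert the error path. */
static void
test_reject_oversized_key(void)
{
    char key[MAX_KEY_LEN + 2];

    memset(key, 'A', sizeof(key) - 1);
    key[sizeof(key) - 1] = '\0';  /* 64-character key, one byte over the limit */

    CU_ASSERT_EQUAL(parse_key_value(key, "value"), -1);   /* bad input is rejected */
    CU_ASSERT_EQUAL(parse_key_value("Key", "value"), 0);  /* sane input still succeeds */
}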
[2024-07-13 11:19:19.300522] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:44.812 [2024-07-13 11:19:19.300576] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:07:44.812 [2024-07-13 11:19:19.300632] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:07:44.812 [2024-07-13 11:19:19.300724] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:07:44.812 [2024-07-13 11:19:19.300756] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:07:44.812 passed 00:07:44.812 Test: empty_text_with_cbit_test ...passed 00:07:44.812 Test: pdu_payload_read_test ...[2024-07-13 11:19:19.302957] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:07:44.812 passed 00:07:44.812 Test: data_out_pdu_sequence_test ...passed 00:07:44.812 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:07:44.812 00:07:44.812 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.812 suites 1 1 n/a 0 0 00:07:44.812 tests 24 24 24 0 0 00:07:44.812 asserts 150253 150253 150253 0 n/a 00:07:44.812 00:07:44.812 Elapsed time = 0.018 seconds 00:07:44.812 11:19:19 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:07:44.812 00:07:44.812 00:07:44.812 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.812 http://cunit.sourceforge.net/ 00:07:44.812 00:07:44.812 00:07:44.812 Suite: init_grp_suite 00:07:44.812 Test: create_initiator_group_success_case ...passed 00:07:44.812 Test: find_initiator_group_success_case ...passed 00:07:44.812 Test: register_initiator_group_twice_case ...passed 00:07:44.812 Test: add_initiator_name_success_case ...passed 00:07:44.812 Test: add_initiator_name_fail_case ...[2024-07-13 11:19:19.352127] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:07:44.812 passed 00:07:44.812 Test: delete_all_initiator_names_success_case ...passed 00:07:44.812 Test: add_netmask_success_case ...passed 00:07:44.812 Test: add_netmask_fail_case ...[2024-07-13 11:19:19.352554] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:07:44.812 passed 00:07:44.812 Test: delete_all_netmasks_success_case ...passed 00:07:44.812 Test: initiator_name_overwrite_all_to_any_case ...passed 00:07:44.812 Test: netmask_overwrite_all_to_any_case ...passed 00:07:44.812 Test: add_delete_initiator_names_case ...passed 00:07:44.812 Test: add_duplicated_initiator_names_case ...passed 00:07:44.812 Test: delete_nonexisting_initiator_names_case ...passed 00:07:44.812 Test: add_delete_netmasks_case ...passed 00:07:44.812 Test: add_duplicated_netmasks_case ...passed 00:07:44.812 Test: delete_nonexisting_netmasks_case ...passed 00:07:44.812 00:07:44.812 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.812 suites 1 1 n/a 0 0 00:07:44.812 tests 17 17 17 0 0 00:07:44.812 asserts 108 108 108 0 n/a 00:07:44.812 00:07:44.812 Elapsed time = 0.001 seconds 00:07:44.812 11:19:19 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:07:44.812 00:07:44.812 00:07:44.812 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.812 http://cunit.sourceforge.net/ 00:07:44.812 00:07:44.812 00:07:44.812 Suite: portal_grp_suite 00:07:44.812 Test: portal_create_ipv4_normal_case ...passed 00:07:44.812 Test: portal_create_ipv6_normal_case ...passed 00:07:44.812 Test: portal_create_ipv4_wildcard_case ...passed 00:07:44.812 Test: portal_create_ipv6_wildcard_case ...passed 00:07:44.812 Test: portal_create_twice_case ...[2024-07-13 11:19:19.389075] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:07:44.812 passed 00:07:44.812 Test: portal_grp_register_unregister_case ...passed 00:07:44.812 Test: portal_grp_register_twice_case ...passed 00:07:44.812 Test: portal_grp_add_delete_case ...passed 00:07:44.812 Test: portal_grp_add_delete_twice_case ...passed 00:07:44.812 00:07:44.812 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.812 suites 1 1 n/a 0 0 00:07:44.812 tests 9 9 9 0 0 00:07:44.812 asserts 44 44 44 0 n/a 00:07:44.812 00:07:44.812 Elapsed time = 0.003 seconds 00:07:44.812 00:07:44.812 real 0m0.235s 00:07:44.812 user 0m0.133s 00:07:44.812 sys 0m0.104s 00:07:44.812 11:19:19 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.812 11:19:19 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:07:44.812 ************************************ 00:07:44.812 END TEST unittest_iscsi 00:07:44.812 ************************************ 00:07:44.812 11:19:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:44.812 11:19:19 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:07:44.812 11:19:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:44.813 11:19:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.813 11:19:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:44.813 ************************************ 00:07:44.813 START TEST unittest_json 00:07:44.813 ************************************ 00:07:44.813 11:19:19 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:07:44.813 11:19:19 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:07:44.813 00:07:44.813 00:07:44.813 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.813 http://cunit.sourceforge.net/ 00:07:44.813 00:07:44.813 00:07:44.813 Suite: json 00:07:44.813 Test: test_parse_literal ...passed 00:07:44.813 Test: test_parse_string_simple ...passed 00:07:44.813 Test: test_parse_string_control_chars ...passed 00:07:44.813 Test: test_parse_string_utf8 ...passed 00:07:44.813 Test: test_parse_string_escapes_twochar ...passed 00:07:44.813 Test: test_parse_string_escapes_unicode ...passed 00:07:44.813 Test: test_parse_number ...passed 00:07:44.813 Test: test_parse_array ...passed 00:07:44.813 Test: test_parse_object ...passed 00:07:44.813 Test: test_parse_nesting ...passed 00:07:44.813 Test: test_parse_comment ...passed 00:07:44.813 00:07:44.813 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.813 suites 1 1 n/a 0 0 00:07:44.813 tests 11 11 11 0 0 00:07:44.813 asserts 1516 1516 1516 0 n/a 00:07:44.813 00:07:44.813 Elapsed time = 0.002 seconds 00:07:44.813 11:19:19 unittest.unittest_json -- unit/unittest.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:07:44.813 00:07:44.813 00:07:44.813 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.813 http://cunit.sourceforge.net/ 00:07:44.813 00:07:44.813 00:07:44.813 Suite: json 00:07:44.813 Test: test_strequal ...passed 00:07:44.813 Test: test_num_to_uint16 ...passed 00:07:44.813 Test: test_num_to_int32 ...passed 00:07:44.813 Test: test_num_to_uint64 ...passed 00:07:44.813 Test: test_decode_object ...passed 00:07:44.813 Test: test_decode_array ...passed 00:07:44.813 Test: test_decode_bool ...passed 00:07:44.813 Test: test_decode_uint16 ...passed 00:07:44.813 Test: test_decode_int32 ...passed 00:07:44.813 Test: test_decode_uint32 ...passed 00:07:44.813 Test: test_decode_uint64 ...passed 00:07:44.813 Test: test_decode_string ...passed 00:07:44.813 Test: test_decode_uuid ...passed 00:07:44.813 Test: test_find ...passed 00:07:44.813 Test: test_find_array ...passed 00:07:44.813 Test: test_iterating ...passed 00:07:44.813 Test: test_free_object ...passed 00:07:44.813 00:07:44.813 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.813 suites 1 1 n/a 0 0 00:07:44.813 tests 17 17 17 0 0 00:07:44.813 asserts 236 236 236 0 n/a 00:07:44.813 00:07:44.813 Elapsed time = 0.001 seconds 00:07:44.813 11:19:19 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:07:44.813 00:07:44.813 00:07:44.813 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.813 http://cunit.sourceforge.net/ 00:07:44.813 00:07:44.813 00:07:44.813 Suite: json 00:07:44.813 Test: test_write_literal ...passed 00:07:44.813 Test: test_write_string_simple ...passed 00:07:44.813 Test: test_write_string_escapes ...passed 00:07:44.813 Test: test_write_string_utf16le ...passed 00:07:44.813 Test: test_write_number_int32 ...passed 00:07:44.813 Test: test_write_number_uint32 ...passed 00:07:44.813 Test: test_write_number_uint128 ...passed 00:07:44.813 Test: test_write_string_number_uint128 ...passed 00:07:44.813 Test: test_write_number_int64 ...passed 00:07:44.813 Test: test_write_number_uint64 ...passed 00:07:44.813 Test: test_write_number_double ...passed 00:07:44.813 Test: test_write_uuid ...passed 00:07:44.813 Test: test_write_array ...passed 00:07:44.813 Test: test_write_object ...passed 00:07:44.813 Test: test_write_nesting ...passed 00:07:44.813 Test: test_write_val ...passed 00:07:44.813 00:07:44.813 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.813 suites 1 1 n/a 0 0 00:07:44.813 tests 16 16 16 0 0 00:07:44.813 asserts 918 918 918 0 n/a 00:07:44.813 00:07:44.813 Elapsed time = 0.004 seconds 00:07:45.072 11:19:19 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:07:45.072 00:07:45.072 00:07:45.072 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.072 http://cunit.sourceforge.net/ 00:07:45.072 00:07:45.072 00:07:45.072 Suite: jsonrpc 00:07:45.072 Test: test_parse_request ...passed 00:07:45.072 Test: test_parse_request_streaming ...passed 00:07:45.072 00:07:45.072 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.073 suites 1 1 n/a 0 0 00:07:45.073 tests 2 2 2 0 0 00:07:45.073 asserts 289 289 289 0 n/a 00:07:45.073 00:07:45.073 Elapsed time = 0.004 seconds 00:07:45.073 00:07:45.073 real 0m0.142s 00:07:45.073 user 0m0.067s 00:07:45.073 sys 0m0.077s 00:07:45.073 11:19:19 unittest.unittest_json -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:07:45.073 ************************************ 00:07:45.073 END TEST unittest_json 00:07:45.073 ************************************ 00:07:45.073 11:19:19 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:45.073 11:19:19 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:45.073 ************************************ 00:07:45.073 START TEST unittest_rpc 00:07:45.073 ************************************ 00:07:45.073 11:19:19 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:07:45.073 11:19:19 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:07:45.073 00:07:45.073 00:07:45.073 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.073 http://cunit.sourceforge.net/ 00:07:45.073 00:07:45.073 00:07:45.073 Suite: rpc 00:07:45.073 Test: test_jsonrpc_handler ...passed 00:07:45.073 Test: test_spdk_rpc_is_method_allowed ...passed 00:07:45.073 Test: test_rpc_get_methods ...passed 00:07:45.073 Test: test_rpc_spdk_get_version ...passed 00:07:45.073 Test: test_spdk_rpc_listen_close ...passed 00:07:45.073 Test: test_rpc_run_multiple_servers ...[2024-07-13 11:19:19.661304] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:07:45.073 passed 00:07:45.073 00:07:45.073 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.073 suites 1 1 n/a 0 0 00:07:45.073 tests 6 6 6 0 0 00:07:45.073 asserts 23 23 23 0 n/a 00:07:45.073 00:07:45.073 Elapsed time = 0.000 seconds 00:07:45.073 00:07:45.073 real 0m0.028s 00:07:45.073 user 0m0.016s 00:07:45.073 sys 0m0.012s 00:07:45.073 11:19:19 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.073 11:19:19 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:45.073 ************************************ 00:07:45.073 END TEST unittest_rpc 00:07:45.073 ************************************ 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:45.073 11:19:19 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:45.073 ************************************ 00:07:45.073 START TEST unittest_notify 00:07:45.073 ************************************ 00:07:45.073 11:19:19 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:07:45.073 00:07:45.073 00:07:45.073 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.073 http://cunit.sourceforge.net/ 00:07:45.073 00:07:45.073 00:07:45.073 Suite: app_suite 00:07:45.073 Test: notify ...passed 00:07:45.073 00:07:45.073 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.073 suites 1 1 n/a 0 0 00:07:45.073 tests 1 1 1 0 0 00:07:45.073 asserts 13 13 13 0 n/a 00:07:45.073 00:07:45.073 Elapsed time = 
0.000 seconds 00:07:45.073 00:07:45.073 real 0m0.031s 00:07:45.073 user 0m0.018s 00:07:45.073 sys 0m0.012s 00:07:45.073 11:19:19 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.073 ************************************ 00:07:45.073 11:19:19 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:07:45.073 END TEST unittest_notify 00:07:45.073 ************************************ 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:45.073 11:19:19 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.073 11:19:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:45.073 ************************************ 00:07:45.073 START TEST unittest_nvme 00:07:45.073 ************************************ 00:07:45.073 11:19:19 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:07:45.073 11:19:19 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:07:45.333 00:07:45.333 00:07:45.333 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.333 http://cunit.sourceforge.net/ 00:07:45.333 00:07:45.333 00:07:45.333 Suite: nvme 00:07:45.333 Test: test_opc_data_transfer ...passed 00:07:45.333 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:07:45.333 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:07:45.333 Test: test_trid_parse_and_compare ...[2024-07-13 11:19:19.824570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:07:45.333 [2024-07-13 11:19:19.825000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:45.333 [2024-07-13 11:19:19.825142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:07:45.333 [2024-07-13 11:19:19.825182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:45.333 [2024-07-13 11:19:19.825251] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:07:45.333 [2024-07-13 11:19:19.825360] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:07:45.333 passed 00:07:45.333 Test: test_trid_trtype_str ...passed 00:07:45.333 Test: test_trid_adrfam_str ...passed 00:07:45.333 Test: test_nvme_ctrlr_probe ...[2024-07-13 11:19:19.825692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:45.333 passed 00:07:45.333 Test: test_spdk_nvme_probe ...[2024-07-13 11:19:19.825817] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:45.333 [2024-07-13 11:19:19.825850] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:45.333 [2024-07-13 11:19:19.825961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:07:45.333 passed 00:07:45.333 Test: test_spdk_nvme_connect ...[2024-07-13 11:19:19.826007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 
913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:07:45.333 [2024-07-13 11:19:19.826114] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:07:45.333 [2024-07-13 11:19:19.826575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:45.333 passed 00:07:45.333 Test: test_nvme_ctrlr_probe_internal ...[2024-07-13 11:19:19.826767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:07:45.333 passed 00:07:45.333 Test: test_nvme_init_controllers ...[2024-07-13 11:19:19.826814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:07:45.333 [2024-07-13 11:19:19.826940] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:07:45.333 passed 00:07:45.333 Test: test_nvme_driver_init ...[2024-07-13 11:19:19.827082] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:07:45.333 [2024-07-13 11:19:19.827124] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:07:45.333 [2024-07-13 11:19:19.940833] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:07:45.333 passed 00:07:45.333 Test: test_spdk_nvme_detach ...[2024-07-13 11:19:19.941037] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:07:45.333 passed 00:07:45.333 Test: test_nvme_completion_poll_cb ...passed 00:07:45.333 Test: test_nvme_user_copy_cmd_complete ...passed 00:07:45.333 Test: test_nvme_allocate_request_null ...passed 00:07:45.333 Test: test_nvme_allocate_request ...passed 00:07:45.333 Test: test_nvme_free_request ...passed 00:07:45.333 Test: test_nvme_allocate_request_user_copy ...passed 00:07:45.333 Test: test_nvme_robust_mutex_init_shared ...passed 00:07:45.333 Test: test_nvme_request_check_timeout ...passed 00:07:45.333 Test: test_nvme_wait_for_completion ...passed 00:07:45.333 Test: test_spdk_nvme_parse_func ...passed 00:07:45.333 Test: test_spdk_nvme_detach_async ...passed 00:07:45.333 Test: test_nvme_parse_addr ...[2024-07-13 11:19:19.941905] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:07:45.333 passed 00:07:45.333 00:07:45.333 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.333 suites 1 1 n/a 0 0 00:07:45.333 tests 25 25 25 0 0 00:07:45.333 asserts 326 326 326 0 n/a 00:07:45.333 00:07:45.333 Elapsed time = 0.007 seconds 00:07:45.333 11:19:19 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:07:45.333 00:07:45.333 00:07:45.333 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.333 http://cunit.sourceforge.net/ 00:07:45.333 00:07:45.333 00:07:45.333 Suite: nvme_ctrlr 00:07:45.333 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-13 11:19:19.977077] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.333 passed 00:07:45.333 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-13 11:19:19.978887] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: 
[] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.333 passed 00:07:45.333 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-13 11:19:19.980224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.333 passed 00:07:45.333 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-13 11:19:19.981529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.333 passed 00:07:45.333 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-13 11:19:19.983014] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.333 [2024-07-13 11:19:19.984285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 11:19:19.985543] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 11:19:19.986782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:45.333 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-13 11:19:19.989333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.333 [2024-07-13 11:19:19.991689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 11:19:19.992909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:45.333 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-13 11:19:19.995452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.334 [2024-07-13 11:19:19.996738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-13 11:19:19.999218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:07:45.334 Test: test_nvme_ctrlr_init_delay ...[2024-07-13 11:19:20.001892] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.334 passed 00:07:45.334 Test: test_alloc_io_qpair_rr_1 ...[2024-07-13 11:19:20.003266] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.334 [2024-07-13 11:19:20.003546] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:45.334 [2024-07-13 11:19:20.003756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:45.334 passed 00:07:45.334 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-07-13 11:19:20.003840] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:45.334 [2024-07-13 11:19:20.003879] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:07:45.334 passed 00:07:45.334 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:07:45.334 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-13 11:19:20.004035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.334 passed 00:07:45.334 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-13 11:19:20.004272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.334 [2024-07-13 11:19:20.004415] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:07:45.334 passed 00:07:45.334 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-13 11:19:20.004773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:07:45.334 [2024-07-13 11:19:20.004960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:45.334 [2024-07-13 11:19:20.005072] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:07:45.334 passed 00:07:45.334 Test: test_nvme_ctrlr_fail ...passed 00:07:45.334 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...[2024-07-13 11:19:20.005156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:07:45.334 [2024-07-13 11:19:20.005226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:07:45.334 passed 00:07:45.334 Test: test_nvme_ctrlr_set_supported_features ...passed 00:07:45.334 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-13 11:19:20.005418] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.334 passed 00:07:45.334 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:07:45.334 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-13 11:19:20.007029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:07:45.903 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:07:45.903 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:07:45.903 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-13 11:19:20.336722] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-13 11:19:20.344284] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-13 11:19:20.345593] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 [2024-07-13 11:19:20.345732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3002:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:07:45.903 passed 00:07:45.903 Test: test_alloc_io_qpair_fail ...[2024-07-13 11:19:20.346962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_add_remove_process ...passed 00:07:45.903 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:07:45.903 Test: test_nvme_ctrlr_set_state ...[2024-07-13 11:19:20.347080] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:07:45.903 [2024-07-13 11:19:20.347232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1546:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-13 11:19:20.347285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-13 11:19:20.370295] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-13 11:19:20.414774] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_reset ...[2024-07-13 11:19:20.416493] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_aer_callback ...[2024-07-13 11:19:20.416921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-13 11:19:20.418449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:07:45.903 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:07:45.903 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-13 11:19:20.420402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:07:45.903 Test: test_nvme_ctrlr_ana_resize ...[2024-07-13 11:19:20.421872] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:07:45.903 Test: test_nvme_transport_ctrlr_ready ...[2024-07-13 11:19:20.423584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:07:45.903 [2024-07-13 11:19:20.423642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4204:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:07:45.903 passed 00:07:45.903 Test: test_nvme_ctrlr_disable ...[2024-07-13 11:19:20.423692] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:07:45.903 passed 00:07:45.903 00:07:45.903 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.903 suites 1 1 n/a 0 0 00:07:45.903 tests 44 44 44 0 0 00:07:45.903 asserts 10434 10434 10434 0 n/a 00:07:45.903 00:07:45.903 Elapsed time = 0.405 seconds 00:07:45.903 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 
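The per-suite banners and "Run Summary" tables in this output come from CUnit's basic run mode: each standalone *_ut binary registers a suite, adds its tests, and runs them verbosely, and each CU_ASSERT contributes to the "asserts" row. The following is only a rough sketch of that shape, assuming nothing beyond the standard CUnit 2.1 API; the suite and test names are placeholders, not SPDK source code.

#include <CUnit/Basic.h>

/* Placeholder test: each CU_ASSERT here adds to the "asserts" row
 * of the Run Summary printed by the test binary. */
static void test_example_passes(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int main(void)
{
	CU_pSuite suite;
	unsigned int failures;

	if (CU_initialize_registry() != CUE_SUCCESS) {
		return CU_get_error();
	}

	/* One registered suite -> "suites 1 1" in the Run Summary. */
	suite = CU_add_suite("example_suite", NULL, NULL);
	if (suite == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	/* Each added test -> one entry in the "tests" row. */
	CU_add_test(suite, "test_example_passes", test_example_passes);

	/* Verbose basic mode prints the CUnit banner and Run Summary
	 * seen throughout this log. */
	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();
	failures = CU_get_number_of_failures();
	CU_cleanup_registry();

	return (int)failures;
}

Note that the *ERROR* lines interleaved in the output are expected: these tests deliberately drive error paths, so a fully passing run still prints them.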
00:07:45.903 00:07:45.903 00:07:45.903 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.903 http://cunit.sourceforge.net/ 00:07:45.903 00:07:45.903 00:07:45.903 Suite: nvme_ctrlr_cmd 00:07:45.903 Test: test_get_log_pages ...passed 00:07:45.903 Test: test_set_feature_cmd ...passed 00:07:45.903 Test: test_set_feature_ns_cmd ...passed 00:07:45.903 Test: test_get_feature_cmd ...passed 00:07:45.903 Test: test_get_feature_ns_cmd ...passed 00:07:45.903 Test: test_abort_cmd ...passed 00:07:45.903 Test: test_set_host_id_cmds ...[2024-07-13 11:19:20.473530] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:07:45.903 passed 00:07:45.903 Test: test_io_cmd_raw_no_payload_build ...passed 00:07:45.903 Test: test_io_raw_cmd ...passed 00:07:45.903 Test: test_io_raw_cmd_with_md ...passed 00:07:45.903 Test: test_namespace_attach ...passed 00:07:45.903 Test: test_namespace_detach ...passed 00:07:45.903 Test: test_namespace_create ...passed 00:07:45.903 Test: test_namespace_delete ...passed 00:07:45.903 Test: test_doorbell_buffer_config ...passed 00:07:45.903 Test: test_format_nvme ...passed 00:07:45.903 Test: test_fw_commit ...passed 00:07:45.903 Test: test_fw_image_download ...passed 00:07:45.903 Test: test_sanitize ...passed 00:07:45.903 Test: test_directive ...passed 00:07:45.903 Test: test_nvme_request_add_abort ...passed 00:07:45.903 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:07:45.903 Test: test_nvme_ctrlr_cmd_identify ...passed 00:07:45.903 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:07:45.903 00:07:45.903 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.903 suites 1 1 n/a 0 0 00:07:45.903 tests 24 24 24 0 0 00:07:45.903 asserts 198 198 198 0 n/a 00:07:45.903 00:07:45.903 Elapsed time = 0.001 seconds 00:07:45.903 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:07:45.903 00:07:45.903 00:07:45.903 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.903 http://cunit.sourceforge.net/ 00:07:45.903 00:07:45.903 00:07:45.903 Suite: nvme_ctrlr_cmd 00:07:45.903 Test: test_geometry_cmd ...passed 00:07:45.903 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:07:45.903 00:07:45.903 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.903 suites 1 1 n/a 0 0 00:07:45.903 tests 2 2 2 0 0 00:07:45.903 asserts 7 7 7 0 n/a 00:07:45.903 00:07:45.903 Elapsed time = 0.000 seconds 00:07:45.903 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:07:45.903 00:07:45.903 00:07:45.903 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.903 http://cunit.sourceforge.net/ 00:07:45.903 00:07:45.903 00:07:45.903 Suite: nvme 00:07:45.903 Test: test_nvme_ns_construct ...passed 00:07:45.903 Test: test_nvme_ns_uuid ...passed 00:07:45.903 Test: test_nvme_ns_csi ...passed 00:07:45.903 Test: test_nvme_ns_data ...passed 00:07:45.903 Test: test_nvme_ns_set_identify_data ...passed 00:07:45.903 Test: test_spdk_nvme_ns_get_values ...passed 00:07:45.903 Test: test_spdk_nvme_ns_is_active ...passed 00:07:45.903 Test: spdk_nvme_ns_supports ...passed 00:07:45.903 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:07:45.903 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:07:45.903 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:07:45.903 Test: 
test_nvme_ns_find_id_desc ...passed 00:07:45.903 00:07:45.903 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.903 suites 1 1 n/a 0 0 00:07:45.903 tests 12 12 12 0 0 00:07:45.903 asserts 95 95 95 0 n/a 00:07:45.903 00:07:45.904 Elapsed time = 0.001 seconds 00:07:45.904 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:07:45.904 00:07:45.904 00:07:45.904 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.904 http://cunit.sourceforge.net/ 00:07:45.904 00:07:45.904 00:07:45.904 Suite: nvme_ns_cmd 00:07:45.904 Test: split_test ...passed 00:07:45.904 Test: split_test2 ...passed 00:07:45.904 Test: split_test3 ...passed 00:07:45.904 Test: split_test4 ...passed 00:07:45.904 Test: test_nvme_ns_cmd_flush ...passed 00:07:45.904 Test: test_nvme_ns_cmd_dataset_management ...passed 00:07:45.904 Test: test_nvme_ns_cmd_copy ...passed 00:07:45.904 Test: test_io_flags ...[2024-07-13 11:19:20.568596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:07:45.904 passed 00:07:45.904 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:07:45.904 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:07:45.904 Test: test_nvme_ns_cmd_reservation_register ...passed 00:07:45.904 Test: test_nvme_ns_cmd_reservation_release ...passed 00:07:45.904 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:07:45.904 Test: test_nvme_ns_cmd_reservation_report ...passed 00:07:45.904 Test: test_cmd_child_request ...passed 00:07:45.904 Test: test_nvme_ns_cmd_readv ...passed 00:07:45.904 Test: test_nvme_ns_cmd_read_with_md ...passed 00:07:45.904 Test: test_nvme_ns_cmd_writev ...[2024-07-13 11:19:20.569787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:07:45.904 passed 00:07:45.904 Test: test_nvme_ns_cmd_write_with_md ...passed 00:07:45.904 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:07:45.904 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:07:45.904 Test: test_nvme_ns_cmd_comparev ...passed 00:07:45.904 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:07:45.904 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:07:45.904 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:07:45.904 Test: test_nvme_ns_cmd_setup_request ...passed 00:07:45.904 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:07:45.904 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-13 11:19:20.571626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:45.904 passed 00:07:45.904 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:07:45.904 Test: test_nvme_ns_cmd_verify ...[2024-07-13 11:19:20.571740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:07:45.904 passed 00:07:45.904 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:07:45.904 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:07:45.904 00:07:45.904 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.904 suites 1 1 n/a 0 0 00:07:45.904 tests 32 32 32 0 0 00:07:45.904 asserts 550 550 550 0 n/a 00:07:45.904 00:07:45.904 Elapsed time = 0.004 seconds 00:07:45.904 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:07:45.904 00:07:45.904 00:07:45.904 CUnit - A unit 
testing framework for C - Version 2.1-3 00:07:45.904 http://cunit.sourceforge.net/ 00:07:45.904 00:07:45.904 00:07:45.904 Suite: nvme_ns_cmd 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:07:45.904 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:07:45.904 00:07:45.904 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.904 suites 1 1 n/a 0 0 00:07:45.904 tests 12 12 12 0 0 00:07:45.904 asserts 123 123 123 0 n/a 00:07:45.904 00:07:45.904 Elapsed time = 0.001 seconds 00:07:45.904 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:07:45.904 00:07:45.904 00:07:45.904 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.904 http://cunit.sourceforge.net/ 00:07:45.904 00:07:45.904 00:07:45.904 Suite: nvme_qpair 00:07:45.904 Test: test3 ...passed 00:07:45.904 Test: test_ctrlr_failed ...passed 00:07:45.904 Test: struct_packing ...passed 00:07:45.904 Test: test_nvme_qpair_process_completions ...[2024-07-13 11:19:20.630070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:45.904 [2024-07-13 11:19:20.630406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:45.904 [2024-07-13 11:19:20.630488] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:07:45.904 [2024-07-13 11:19:20.630575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:07:45.904 passed 00:07:45.904 Test: test_nvme_completion_is_retry ...passed 00:07:45.904 Test: test_get_status_string ...passed 00:07:45.904 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:07:45.904 Test: test_nvme_qpair_submit_request ...passed 00:07:45.904 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:07:45.904 Test: test_nvme_qpair_manual_complete_request ...passed 00:07:45.904 Test: test_nvme_qpair_init_deinit ...[2024-07-13 11:19:20.631092] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:07:45.904 passed 00:07:45.904 Test: test_nvme_get_sgl_print_info ...passed 00:07:45.904 00:07:45.904 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.904 suites 1 1 n/a 0 0 00:07:45.904 tests 12 12 12 0 0 00:07:45.904 asserts 154 154 154 0 n/a 00:07:45.904 00:07:45.904 Elapsed time = 0.001 seconds 00:07:46.163 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@96 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:07:46.163 00:07:46.163 00:07:46.163 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.163 http://cunit.sourceforge.net/ 00:07:46.163 00:07:46.163 00:07:46.163 Suite: nvme_pcie 00:07:46.163 Test: test_prp_list_append ...[2024-07-13 11:19:20.664510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:46.163 [2024-07-13 11:19:20.664787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:07:46.163 [2024-07-13 11:19:20.664827] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:07:46.163 [2024-07-13 11:19:20.665068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:46.163 [2024-07-13 11:19:20.665167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:07:46.163 passed 00:07:46.163 Test: test_nvme_pcie_hotplug_monitor ...passed 00:07:46.163 Test: test_shadow_doorbell_update ...passed 00:07:46.163 Test: test_build_contig_hw_sgl_request ...passed 00:07:46.163 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:07:46.163 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:07:46.164 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:07:46.164 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:07:46.164 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:07:46.164 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:07:46.164 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-13 11:19:20.665367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:07:46.164 [2024-07-13 11:19:20.665453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
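The nvme_pcie PRP-list cases just above reject a virtual address that is not dword aligned (0x100001) and a PRP entry that is not page aligned (0x900800). A generic illustration of those two checks, written independently of the SPDK implementation and assuming a 4 KiB page size:

#include <stdbool.h>
#include <stdint.h>

#define SKETCH_PAGE_SIZE 4096u   /* assumed page size for this sketch */

/* Dword aligned: the two lowest address bits are clear. */
static bool is_dword_aligned(uint64_t vaddr)
{
	return (vaddr & 0x3u) == 0;
}

/* Page aligned: the low log2(page size) bits are clear. */
static bool is_page_aligned(uint64_t addr)
{
	return (addr & (SKETCH_PAGE_SIZE - 1u)) == 0;
}

/* 0x100001 & 0x3 is nonzero and 0x900800 & 0xFFF is 0x800, which matches
 * the two rejections logged above. */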
00:07:46.164 passed 00:07:46.164 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:07:46.164 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-13 11:19:20.665523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:07:46.164 [2024-07-13 11:19:20.665561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:07:46.164 passed 00:07:46.164 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:07:46.164 00:07:46.164 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.164 suites 1 1 n/a 0 0 00:07:46.164 tests 14 14 14 0 0 00:07:46.164 asserts 235 235 235 0 n/a 00:07:46.164 00:07:46.164 Elapsed time = 0.001 seconds[2024-07-13 11:19:20.665598] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:07:46.164 00:07:46.164 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:07:46.164 00:07:46.164 00:07:46.164 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.164 http://cunit.sourceforge.net/ 00:07:46.164 00:07:46.164 00:07:46.164 Suite: nvme_ns_cmd 00:07:46.164 Test: nvme_poll_group_create_test ...passed 00:07:46.164 Test: nvme_poll_group_add_remove_test ...passed 00:07:46.164 Test: nvme_poll_group_process_completions ...passed 00:07:46.164 Test: nvme_poll_group_destroy_test ...passed 00:07:46.164 Test: nvme_poll_group_get_free_stats ...passed 00:07:46.164 00:07:46.164 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.164 suites 1 1 n/a 0 0 00:07:46.164 tests 5 5 5 0 0 00:07:46.164 asserts 75 75 75 0 n/a 00:07:46.164 00:07:46.164 Elapsed time = 0.000 seconds 00:07:46.164 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:07:46.164 00:07:46.164 00:07:46.164 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.164 http://cunit.sourceforge.net/ 00:07:46.164 00:07:46.164 00:07:46.164 Suite: nvme_quirks 00:07:46.164 Test: test_nvme_quirks_striping ...passed 00:07:46.164 00:07:46.164 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.164 suites 1 1 n/a 0 0 00:07:46.164 tests 1 1 1 0 0 00:07:46.164 asserts 5 5 5 0 n/a 00:07:46.164 00:07:46.164 Elapsed time = 0.000 seconds 00:07:46.164 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:07:46.164 00:07:46.164 00:07:46.164 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.164 http://cunit.sourceforge.net/ 00:07:46.164 00:07:46.164 00:07:46.164 Suite: nvme_tcp 00:07:46.164 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:07:46.164 Test: test_nvme_tcp_build_iovs ...passed 00:07:46.164 Test: test_nvme_tcp_build_sgl_request ...[2024-07-13 11:19:20.762291] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffc6fb1f2f0, and the iovcnt=16, remaining_size=28672 00:07:46.164 passed 00:07:46.164 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:07:46.164 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:07:46.164 Test: test_nvme_tcp_req_complete_safe ...passed 00:07:46.164 Test: test_nvme_tcp_req_get ...passed 00:07:46.164 Test: test_nvme_tcp_req_init ...passed 00:07:46.164 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:07:46.164 
Test: test_nvme_tcp_qpair_write_pdu ...passed 00:07:46.164 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:07:46.164 Test: test_nvme_tcp_alloc_reqs ...[2024-07-13 11:19:20.762942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb21030 is same with the state(6) to be set 00:07:46.164 passed 00:07:46.164 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:07:46.164 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-13 11:19:20.763301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb201e0 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.763362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffc6fb20d70 00:07:46.164 [2024-07-13 11:19:20.763407] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:07:46.164 [2024-07-13 11:19:20.763479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb206a0 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.763534] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:07:46.164 [2024-07-13 11:19:20.763613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb206a0 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.763647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:07:46.164 [2024-07-13 11:19:20.763670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb206a0 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.763702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb206a0 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.763733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb206a0 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.763779] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb206a0 is same with the state(5) to be set 00:07:46.164 passed 00:07:46.164 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-13 11:19:20.763814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb206a0 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.763854] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb206a0 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.764025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:07:46.164 [2024-07-13 11:19:20.764067] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:46.164 passed 00:07:46.164 Test: test_nvme_tcp_qpair_icreq_send 
...[2024-07-13 11:19:20.764286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:07:46.164 passed 00:07:46.164 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:07:46.164 Test: test_nvme_tcp_icresp_handle ...[2024-07-13 11:19:20.764407] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffc6fb208b0): PDU Sequence Error 00:07:46.164 [2024-07-13 11:19:20.764481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:07:46.164 [2024-07-13 11:19:20.764514] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:07:46.164 [2024-07-13 11:19:20.764547] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb201f0 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.764578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:07:46.164 [2024-07-13 11:19:20.764610] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb201f0 is same with the state(5) to be set 00:07:46.164 passed 00:07:46.164 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:07:46.164 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-13 11:19:20.764656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb201f0 is same with the state(0) to be set 00:07:46.164 [2024-07-13 11:19:20.764715] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffc6fb20d70): PDU Sequence Error 00:07:46.164 [2024-07-13 11:19:20.764801] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffc6fb1f4b0 00:07:46.164 passed 00:07:46.164 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:07:46.164 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-13 11:19:20.764985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffc6fb1eb30, errno=0, rc=0 00:07:46.164 [2024-07-13 11:19:20.765048] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb1eb30 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.765114] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc6fb1eb30 is same with the state(5) to be set 00:07:46.164 [2024-07-13 11:19:20.765167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffc6fb1eb30 (0): Success 00:07:46.164 [2024-07-13 11:19:20.765211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffc6fb1eb30 (0): Success 00:07:46.164 passed 00:07:46.164 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-13 11:19:20.879168] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:07:46.164 [2024-07-13 11:19:20.879263] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:46.164 passed 00:07:46.164 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:07:46.164 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:07:46.164 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-13 11:19:20.879550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:46.164 [2024-07-13 11:19:20.879584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:46.164 [2024-07-13 11:19:20.879808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:07:46.165 [2024-07-13 11:19:20.879846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:46.165 [2024-07-13 11:19:20.879941] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:07:46.165 passed 00:07:46.165 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-13 11:19:20.880004] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:46.165 [2024-07-13 11:19:20.880117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:07:46.165 [2024-07-13 11:19:20.880182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:07:46.165 [2024-07-13 11:19:20.880323] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x614000000c40, and the iovcnt=1, remaining_size=1024 00:07:46.165 [2024-07-13 11:19:20.880370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:07:46.165 passed 00:07:46.165 00:07:46.165 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.165 suites 1 1 n/a 0 0 00:07:46.165 tests 27 27 27 0 0 00:07:46.165 asserts 624 624 624 0 n/a 00:07:46.165 00:07:46.165 Elapsed time = 0.118 seconds 00:07:46.423 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:07:46.423 00:07:46.423 00:07:46.423 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.423 http://cunit.sourceforge.net/ 00:07:46.423 00:07:46.423 00:07:46.423 Suite: nvme_transport 00:07:46.423 Test: test_nvme_get_transport ...passed 00:07:46.423 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:07:46.423 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:07:46.423 Test: test_nvme_transport_poll_group_add_remove ...passed 00:07:46.423 Test: test_ctrlr_get_memory_domains ...passed 00:07:46.423 00:07:46.423 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.423 suites 1 1 n/a 0 0 00:07:46.423 tests 5 5 5 0 0 00:07:46.423 asserts 28 28 28 0 n/a 00:07:46.423 00:07:46.423 Elapsed time = 0.000 seconds 00:07:46.423 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:07:46.423 00:07:46.423 
00:07:46.423 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.423 http://cunit.sourceforge.net/ 00:07:46.423 00:07:46.423 00:07:46.423 Suite: nvme_io_msg 00:07:46.423 Test: test_nvme_io_msg_send ...passed 00:07:46.423 Test: test_nvme_io_msg_process ...passed 00:07:46.423 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:07:46.423 00:07:46.423 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.423 suites 1 1 n/a 0 0 00:07:46.423 tests 3 3 3 0 0 00:07:46.423 asserts 56 56 56 0 n/a 00:07:46.423 00:07:46.423 Elapsed time = 0.000 seconds 00:07:46.423 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:07:46.423 00:07:46.423 00:07:46.423 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.423 http://cunit.sourceforge.net/ 00:07:46.423 00:07:46.423 00:07:46.423 Suite: nvme_pcie_common 00:07:46.423 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-13 11:19:20.979058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:07:46.423 passed 00:07:46.423 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:07:46.423 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:07:46.423 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-13 11:19:20.979818] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:07:46.423 [2024-07-13 11:19:20.979939] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:07:46.423 passed 00:07:46.423 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-13 11:19:20.979982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:07:46.423 passed 00:07:46.423 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-13 11:19:20.980362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:46.423 [2024-07-13 11:19:20.980402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:46.423 passed 00:07:46.423 00:07:46.423 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.423 suites 1 1 n/a 0 0 00:07:46.423 tests 6 6 6 0 0 00:07:46.423 asserts 148 148 148 0 n/a 00:07:46.423 00:07:46.423 Elapsed time = 0.001 seconds 00:07:46.423 11:19:20 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:07:46.423 00:07:46.423 00:07:46.423 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.423 http://cunit.sourceforge.net/ 00:07:46.423 00:07:46.423 00:07:46.423 Suite: nvme_fabric 00:07:46.423 Test: test_nvme_fabric_prop_set_cmd ...passed 00:07:46.423 Test: test_nvme_fabric_prop_get_cmd ...passed 00:07:46.423 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:07:46.423 Test: test_nvme_fabric_discover_probe ...passed 00:07:46.423 Test: test_nvme_fabric_qpair_connect ...[2024-07-13 11:19:21.014860] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:07:46.423 passed 
00:07:46.423 00:07:46.423 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.423 suites 1 1 n/a 0 0 00:07:46.423 tests 5 5 5 0 0 00:07:46.423 asserts 60 60 60 0 n/a 00:07:46.423 00:07:46.423 Elapsed time = 0.001 seconds 00:07:46.423 11:19:21 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:07:46.423 00:07:46.423 00:07:46.424 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.424 http://cunit.sourceforge.net/ 00:07:46.424 00:07:46.424 00:07:46.424 Suite: nvme_opal 00:07:46.424 Test: test_opal_nvme_security_recv_send_done ...passed 00:07:46.424 Test: test_opal_add_short_atom_header ...[2024-07-13 11:19:21.047781] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:07:46.424 passed 00:07:46.424 00:07:46.424 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.424 suites 1 1 n/a 0 0 00:07:46.424 tests 2 2 2 0 0 00:07:46.424 asserts 22 22 22 0 n/a 00:07:46.424 00:07:46.424 Elapsed time = 0.000 seconds 00:07:46.424 00:07:46.424 real 0m1.254s 00:07:46.424 user 0m0.671s 00:07:46.424 sys 0m0.434s 00:07:46.424 11:19:21 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.424 ************************************ 00:07:46.424 END TEST unittest_nvme 00:07:46.424 ************************************ 00:07:46.424 11:19:21 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.424 11:19:21 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:46.424 11:19:21 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:46.424 11:19:21 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.424 11:19:21 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.424 11:19:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:46.424 ************************************ 00:07:46.424 START TEST unittest_log 00:07:46.424 ************************************ 00:07:46.424 11:19:21 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:07:46.424 00:07:46.424 00:07:46.424 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.424 http://cunit.sourceforge.net/ 00:07:46.424 00:07:46.424 00:07:46.424 Suite: log 00:07:46.424 Test: log_test ...[2024-07-13 11:19:21.129562] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:07:46.424 [2024-07-13 11:19:21.129945] log_ut.c: 57:log_test: *DEBUG*: log test 00:07:46.424 log dump test: 00:07:46.424 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:07:46.424 spdk dump test: 00:07:46.424 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:07:46.424 spdk dump test: 00:07:46.424 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:07:46.424 00000010 65 20 63 68 61 72 73 e chars 00:07:46.424 passed 00:07:47.799 Test: deprecation ...passed 00:07:47.799 00:07:47.799 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.799 suites 1 1 n/a 0 0 00:07:47.799 tests 2 2 2 0 0 00:07:47.799 asserts 73 73 73 0 n/a 00:07:47.799 00:07:47.799 Elapsed time = 0.001 seconds 00:07:47.799 00:07:47.799 real 0m1.036s 00:07:47.799 user 0m0.026s 00:07:47.799 sys 0m0.009s 00:07:47.799 11:19:22 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.799 ************************************ 00:07:47.799 11:19:22 
unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:07:47.799 END TEST unittest_log 00:07:47.799 ************************************ 00:07:47.799 11:19:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:47.799 11:19:22 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:47.799 11:19:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.799 11:19:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.799 11:19:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:47.799 ************************************ 00:07:47.799 START TEST unittest_lvol 00:07:47.799 ************************************ 00:07:47.799 11:19:22 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:07:47.799 00:07:47.799 00:07:47.799 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.799 http://cunit.sourceforge.net/ 00:07:47.799 00:07:47.799 00:07:47.799 Suite: lvol 00:07:47.799 Test: lvs_init_unload_success ...[2024-07-13 11:19:22.227542] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:07:47.799 passed 00:07:47.799 Test: lvs_init_destroy_success ...[2024-07-13 11:19:22.228541] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:07:47.799 passed 00:07:47.799 Test: lvs_init_opts_success ...passed 00:07:47.799 Test: lvs_unload_lvs_is_null_fail ...[2024-07-13 11:19:22.229225] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:07:47.799 passed 00:07:47.799 Test: lvs_names ...[2024-07-13 11:19:22.229643] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:07:47.799 [2024-07-13 11:19:22.229797] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:07:47.799 [2024-07-13 11:19:22.230112] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:07:47.799 passed 00:07:47.799 Test: lvol_create_destroy_success ...passed 00:07:47.799 Test: lvol_create_fail ...[2024-07-13 11:19:22.231384] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:07:47.799 [2024-07-13 11:19:22.231687] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:07:47.799 passed 00:07:47.799 Test: lvol_destroy_fail ...[2024-07-13 11:19:22.232373] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:07:47.799 passed 00:07:47.799 Test: lvol_close ...[2024-07-13 11:19:22.232964] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:07:47.799 [2024-07-13 11:19:22.233143] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:07:47.799 passed 00:07:47.799 Test: lvol_resize ...passed 00:07:47.800 Test: lvol_set_read_only ...passed 00:07:47.800 Test: test_lvs_load ...[2024-07-13 11:19:22.234704] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:07:47.800 [2024-07-13 11:19:22.234923] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:07:47.800 passed 00:07:47.800 Test: lvols_load ...[2024-07-13 11:19:22.235494] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:47.800 [2024-07-13 11:19:22.235759] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:07:47.800 passed 00:07:47.800 Test: lvol_open ...passed 00:07:47.800 Test: lvol_snapshot ...passed 00:07:47.800 Test: lvol_snapshot_fail ...[2024-07-13 11:19:22.237244] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:07:47.800 passed 00:07:47.800 Test: lvol_clone ...passed 00:07:47.800 Test: lvol_clone_fail ...[2024-07-13 11:19:22.238378] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:07:47.800 passed 00:07:47.800 Test: lvol_iter_clones ...passed 00:07:47.800 Test: lvol_refcnt ...[2024-07-13 11:19:22.239590] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 4ec902a2-6121-4332-980e-9785239aefd1 because it is still open 00:07:47.800 passed 00:07:47.800 Test: lvol_names ...[2024-07-13 11:19:22.240111] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:07:47.800 [2024-07-13 11:19:22.240334] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:47.800 [2024-07-13 11:19:22.240715] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:07:47.800 passed 00:07:47.800 Test: lvol_create_thin_provisioned ...passed 00:07:47.800 Test: lvol_rename ...[2024-07-13 11:19:22.241737] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:47.800 [2024-07-13 11:19:22.241963] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:07:47.800 passed 00:07:47.800 Test: lvs_rename ...[2024-07-13 11:19:22.242547] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:07:47.800 passed 00:07:47.800 Test: lvol_inflate ...[2024-07-13 11:19:22.243150] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:47.800 passed 00:07:47.800 Test: lvol_decouple_parent ...[2024-07-13 11:19:22.243726] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:07:47.800 passed 00:07:47.800 Test: lvol_get_xattr ...passed 00:07:47.800 Test: lvol_esnap_reload ...passed 00:07:47.800 Test: lvol_esnap_create_bad_args ...[2024-07-13 11:19:22.244943] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:07:47.800 [2024-07-13 11:19:22.245092] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:07:47.800 [2024-07-13 11:19:22.245230] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:07:47.800 [2024-07-13 11:19:22.245471] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:07:47.800 [2024-07-13 11:19:22.245758] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:07:47.800 passed 00:07:47.800 Test: lvol_esnap_create_delete ...passed 00:07:47.800 Test: lvol_esnap_load_esnaps ...[2024-07-13 11:19:22.246611] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:07:47.800 passed 00:07:47.800 Test: lvol_esnap_missing ...[2024-07-13 11:19:22.247144] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:47.800 [2024-07-13 11:19:22.247309] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:07:47.800 passed 00:07:47.800 Test: lvol_esnap_hotplug ... 
00:07:47.800 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:07:47.800 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:07:47.800 [2024-07-13 11:19:22.248765] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol eae169cf-38d3-4ed6-ba3e-7019cdc80cf8: failed to create esnap bs_dev: error -12 00:07:47.800 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:07:47.800 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:07:47.800 [2024-07-13 11:19:22.249341] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 4970d08b-b71e-44ac-b0d4-669f4ec0423a: failed to create esnap bs_dev: error -12 00:07:47.800 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:07:47.800 [2024-07-13 11:19:22.249702] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol cb97a294-82cd-4b2d-8c18-281e79b1ec82: failed to create esnap bs_dev: error -12 00:07:47.800 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:07:47.800 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:07:47.800 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:07:47.800 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:07:47.800 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:07:47.800 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:07:47.800 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:07:47.800 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:07:47.800 passed 00:07:47.800 Test: lvol_get_by ...passed 00:07:47.800 Test: lvol_shallow_copy ...[2024-07-13 11:19:22.252532] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:07:47.800 [2024-07-13 11:19:22.252704] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol ea166707-81f6-463b-92c1-da5246d9359a shallow copy, ext_dev must not be NULL 00:07:47.800 passed 00:07:47.800 Test: lvol_set_parent ...[2024-07-13 11:19:22.253248] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:07:47.800 [2024-07-13 11:19:22.253429] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:07:47.800 passed 00:07:47.800 Test: lvol_set_external_parent ...[2024-07-13 11:19:22.254018] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:07:47.800 [2024-07-13 11:19:22.254164] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:07:47.800 [2024-07-13 11:19:22.254343] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:07:47.800 passed 00:07:47.800 00:07:47.800 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.800 suites 1 1 n/a 0 0 00:07:47.800 tests 37 37 37 0 0 00:07:47.800 asserts 1505 1505 1505 0 n/a 00:07:47.800 00:07:47.800 Elapsed time = 0.017 seconds 00:07:47.800 00:07:47.800 real 0m0.068s 00:07:47.800 user 0m0.026s 00:07:47.800 sys 0m0.031s 
00:07:47.800 11:19:22 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.800 ************************************ 00:07:47.800 END TEST unittest_lvol 00:07:47.800 ************************************ 00:07:47.800 11:19:22 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.800 11:19:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:47.800 11:19:22 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:47.800 11:19:22 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:47.800 11:19:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.800 11:19:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.800 11:19:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:47.800 ************************************ 00:07:47.800 START TEST unittest_nvme_rdma 00:07:47.800 ************************************ 00:07:47.801 11:19:22 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:07:47.801 00:07:47.801 00:07:47.801 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.801 http://cunit.sourceforge.net/ 00:07:47.801 00:07:47.801 00:07:47.801 Suite: nvme_rdma 00:07:47.801 Test: test_nvme_rdma_build_sgl_request ...[2024-07-13 11:19:22.343255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:07:47.801 [2024-07-13 11:19:22.343639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:47.801 passed 00:07:47.801 Test: test_nvme_rdma_build_sgl_inline_request ...[2024-07-13 11:19:22.343739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:07:47.801 passed 00:07:47.801 Test: test_nvme_rdma_build_contig_request ...passed 00:07:47.801 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:07:47.801 Test: test_nvme_rdma_create_reqs ...[2024-07-13 11:19:22.343810] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:07:47.801 [2024-07-13 11:19:22.343927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:07:47.801 passed 00:07:47.801 Test: test_nvme_rdma_create_rsps ...[2024-07-13 11:19:22.344265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:07:47.801 passed 00:07:47.801 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-13 11:19:22.344417] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:07:47.801 passed 00:07:47.801 Test: test_nvme_rdma_poller_create ...[2024-07-13 11:19:22.344461] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:07:47.801 passed 00:07:47.801 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:07:47.801 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-13 11:19:22.344594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:07:47.801 passed 00:07:47.801 Test: test_nvme_rdma_req_put_and_get ...passed 00:07:47.801 Test: test_nvme_rdma_req_init ...passed 00:07:47.801 Test: test_nvme_rdma_validate_cm_event ...passed 00:07:47.801 Test: test_nvme_rdma_qpair_init ...passed 00:07:47.801 Test: test_nvme_rdma_qpair_submit_request ...[2024-07-13 11:19:22.344868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:07:47.801 [2024-07-13 11:19:22.344915] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:07:47.801 passed 00:07:47.801 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:07:47.801 Test: test_rdma_get_memory_translation ...passed 00:07:47.801 Test: test_get_rdma_qpair_from_wc ...passed 00:07:47.801 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:07:47.801 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-13 11:19:22.345038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:07:47.801 [2024-07-13 11:19:22.345079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:07:47.801 [2024-07-13 11:19:22.345164] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:47.801 [2024-07-13 11:19:22.345186] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:07:47.801 passed 00:07:47.801 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-13 11:19:22.345327] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:07:47.801 [2024-07-13 11:19:22.345357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:07:47.801 [2024-07-13 11:19:22.345382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fffa722ea20 on poll group 0x60c000000040 00:07:47.801 [2024-07-13 11:19:22.345405] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:07:47.801 [2024-07-13 11:19:22.345445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:07:47.801 [2024-07-13 11:19:22.345468] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fffa722ea20 on poll group 0x60c000000040 00:07:47.801 passed 00:07:47.801 00:07:47.801 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.801 suites 1 1 n/a 0 0 00:07:47.801 tests 21 21 21 0 0 00:07:47.801 asserts 397 397 397 0 n/a 00:07:47.801 00:07:47.801 Elapsed time = 0.002 seconds 00:07:47.801 [2024-07-13 11:19:22.345521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:47.801 00:07:47.801 real 0m0.034s 00:07:47.801 user 0m0.015s 00:07:47.801 sys 0m0.019s 00:07:47.801 11:19:22 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.801 ************************************ 00:07:47.801 END TEST unittest_nvme_rdma 00:07:47.801 ************************************ 00:07:47.801 11:19:22 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:47.801 11:19:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:47.801 11:19:22 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:47.801 11:19:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.801 11:19:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.801 11:19:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:47.801 ************************************ 00:07:47.801 START TEST unittest_nvmf_transport 00:07:47.801 ************************************ 00:07:47.801 11:19:22 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:07:47.801 00:07:47.801 00:07:47.801 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.801 http://cunit.sourceforge.net/ 00:07:47.801 00:07:47.801 00:07:47.801 Suite: nvmf 00:07:47.801 Test: test_spdk_nvmf_transport_create ...[2024-07-13 11:19:22.436366] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:07:47.801 [2024-07-13 11:19:22.436886] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:07:47.801 [2024-07-13 11:19:22.437095] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:07:47.801 [2024-07-13 11:19:22.437365] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:07:47.801 passed 00:07:47.801 Test: test_nvmf_transport_poll_group_create ...passed 00:07:47.801 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-13 11:19:22.438116] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:07:47.801 [2024-07-13 11:19:22.438328] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:07:47.801 [2024-07-13 11:19:22.438456] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:07:47.801 passed 00:07:47.801 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:07:47.801 00:07:47.801 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.801 suites 1 1 n/a 0 0 00:07:47.801 tests 4 4 4 0 0 00:07:47.801 asserts 49 49 49 0 n/a 00:07:47.801 00:07:47.801 Elapsed time = 0.002 seconds 00:07:47.801 00:07:47.802 real 0m0.039s 00:07:47.802 user 0m0.015s 00:07:47.802 sys 0m0.023s 00:07:47.802 11:19:22 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.802 11:19:22 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:07:47.802 ************************************ 00:07:47.802 END TEST unittest_nvmf_transport 00:07:47.802 ************************************ 00:07:47.802 11:19:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:47.802 11:19:22 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:47.802 11:19:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.802 11:19:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.802 11:19:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:47.802 ************************************ 00:07:47.802 START TEST unittest_rdma 00:07:47.802 ************************************ 00:07:47.802 11:19:22 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:07:47.802 00:07:47.802 00:07:47.802 CUnit - A unit testing framework for C - Version 2.1-3 00:07:47.802 http://cunit.sourceforge.net/ 00:07:47.802 00:07:47.802 00:07:47.802 Suite: rdma_common 00:07:47.802 Test: test_spdk_rdma_pd ...[2024-07-13 11:19:22.517373] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:07:47.802 [2024-07-13 11:19:22.517869] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:07:47.802 passed 00:07:47.802 00:07:47.802 Run Summary: Type Total Ran Passed Failed Inactive 00:07:47.802 suites 1 1 n/a 0 0 00:07:47.802 tests 1 1 1 0 0 00:07:47.802 asserts 31 31 31 0 n/a 00:07:47.802 00:07:47.802 Elapsed time = 0.001 seconds 00:07:47.802 00:07:47.802 real 0m0.028s 00:07:47.802 user 0m0.018s 00:07:47.802 sys 0m0.009s 00:07:47.802 11:19:22 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.802 11:19:22 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:47.802 ************************************ 00:07:47.802 END TEST unittest_rdma 00:07:47.802 ************************************ 00:07:48.061 11:19:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:48.061 11:19:22 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:48.061 11:19:22 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:48.061 11:19:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:07:48.061 11:19:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.061 11:19:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:48.061 ************************************ 00:07:48.061 START TEST unittest_nvme_cuse 00:07:48.061 ************************************ 00:07:48.061 11:19:22 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:07:48.061 00:07:48.061 00:07:48.061 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.061 http://cunit.sourceforge.net/ 00:07:48.061 00:07:48.061 00:07:48.061 Suite: nvme_cuse 00:07:48.061 Test: test_cuse_nvme_submit_io_read_write ...passed 00:07:48.061 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:07:48.061 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:07:48.061 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:07:48.061 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:07:48.061 Test: test_cuse_nvme_submit_io ...[2024-07-13 11:19:22.608480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:07:48.061 passed 00:07:48.061 Test: test_cuse_nvme_reset ...[2024-07-13 11:19:22.609043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:07:48.061 passed 00:07:48.629 Test: test_nvme_cuse_stop ...passed 00:07:48.629 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:07:48.629 00:07:48.629 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.629 suites 1 1 n/a 0 0 00:07:48.629 tests 9 9 9 0 0 00:07:48.629 asserts 118 118 118 0 n/a 00:07:48.629 00:07:48.629 Elapsed time = 0.499 seconds 00:07:48.629 ************************************ 00:07:48.629 END TEST unittest_nvme_cuse 00:07:48.629 ************************************ 00:07:48.629 00:07:48.629 real 0m0.540s 00:07:48.629 user 0m0.272s 00:07:48.629 sys 0m0.261s 00:07:48.629 11:19:23 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.629 11:19:23 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:07:48.629 11:19:23 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:48.629 11:19:23 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:07:48.629 11:19:23 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.629 11:19:23 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.629 11:19:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:48.629 ************************************ 00:07:48.629 START TEST unittest_nvmf 00:07:48.629 ************************************ 00:07:48.629 11:19:23 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:07:48.629 11:19:23 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:07:48.629 00:07:48.629 00:07:48.629 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.629 http://cunit.sourceforge.net/ 00:07:48.629 00:07:48.629 00:07:48.629 Suite: nvmf 00:07:48.629 Test: test_get_log_page ...[2024-07-13 11:19:23.196437] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:07:48.629 passed 00:07:48.629 Test: test_process_fabrics_cmd ...[2024-07-13 11:19:23.197273] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on 
qid 0 before CONNECT 00:07:48.629 passed 00:07:48.629 Test: test_connect ...[2024-07-13 11:19:23.198446] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:07:48.629 [2024-07-13 11:19:23.198733] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:07:48.629 [2024-07-13 11:19:23.198967] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:07:48.629 [2024-07-13 11:19:23.199207] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:07:48.629 [2024-07-13 11:19:23.199471] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:07:48.629 [2024-07-13 11:19:23.199710] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:07:48.630 [2024-07-13 11:19:23.199906] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:07:48.630 [2024-07-13 11:19:23.200101] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:07:48.630 [2024-07-13 11:19:23.200421] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:07:48.630 [2024-07-13 11:19:23.200716] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:07:48.630 [2024-07-13 11:19:23.201208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:07:48.630 [2024-07-13 11:19:23.201485] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:07:48.630 [2024-07-13 11:19:23.201741] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:07:48.630 [2024-07-13 11:19:23.202001] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:07:48.630 [2024-07-13 11:19:23.202286] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:07:48.630 [2024-07-13 11:19:23.202617] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:07:48.630 [2024-07-13 11:19:23.202913] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:07:48.630 passed 00:07:48.630 Test: test_get_ns_id_desc_list ...passed 00:07:48.630 Test: test_identify_ns ...[2024-07-13 11:19:23.203921] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:48.630 [2024-07-13 11:19:23.204417] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:07:48.630 [2024-07-13 11:19:23.204698] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 
00:07:48.630 passed 00:07:48.630 Test: test_identify_ns_iocs_specific ...[2024-07-13 11:19:23.205266] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:48.630 [2024-07-13 11:19:23.205716] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:07:48.630 passed 00:07:48.630 Test: test_reservation_write_exclusive ...passed 00:07:48.630 Test: test_reservation_exclusive_access ...passed 00:07:48.630 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:07:48.630 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:07:48.630 Test: test_reservation_notification_log_page ...passed 00:07:48.630 Test: test_get_dif_ctx ...passed 00:07:48.630 Test: test_set_get_features ...[2024-07-13 11:19:23.208303] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:48.630 [2024-07-13 11:19:23.208548] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:07:48.630 [2024-07-13 11:19:23.208747] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:07:48.630 [2024-07-13 11:19:23.208940] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:07:48.630 passed 00:07:48.630 Test: test_identify_ctrlr ...passed 00:07:48.630 Test: test_identify_ctrlr_iocs_specific ...passed 00:07:48.630 Test: test_custom_admin_cmd ...passed 00:07:48.630 Test: test_fused_compare_and_write ...[2024-07-13 11:19:23.210486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:07:48.630 [2024-07-13 11:19:23.210708] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:48.630 [2024-07-13 11:19:23.210912] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:07:48.630 passed 00:07:48.630 Test: test_multi_async_event_reqs ...passed 00:07:48.630 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:07:48.630 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:07:48.630 Test: test_multi_async_events ...passed 00:07:48.630 Test: test_rae ...passed 00:07:48.630 Test: test_nvmf_ctrlr_create_destruct ...passed 00:07:48.630 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:07:48.630 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-13 11:19:23.213777] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:07:48.630 [2024-07-13 11:19:23.213992] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:07:48.630 passed 00:07:48.630 Test: test_zcopy_read ...passed 00:07:48.630 Test: test_zcopy_write ...passed 00:07:48.630 Test: test_nvmf_property_set ...passed 00:07:48.630 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-13 11:19:23.215414] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:48.630 [2024-07-13 11:19:23.215623] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:07:48.630 passed 00:07:48.630 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-13 11:19:23.216045] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:07:48.630 [2024-07-13 11:19:23.216230] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:07:48.630 [2024-07-13 11:19:23.216450] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:07:48.630 passed 00:07:48.630 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:07:48.630 Test: test_nvmf_check_qpair_active ...[2024-07-13 11:19:23.217197] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:07:48.630 [2024-07-13 11:19:23.217427] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4744:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:07:48.630 [2024-07-13 11:19:23.217637] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:07:48.630 [2024-07-13 11:19:23.217839] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:07:48.630 [2024-07-13 11:19:23.218024] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:07:48.630 passed 00:07:48.630 00:07:48.630 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.630 suites 1 1 n/a 0 0 00:07:48.630 tests 32 32 32 0 0 00:07:48.630 asserts 977 977 977 0 n/a 00:07:48.630 00:07:48.630 Elapsed time = 0.011 seconds 00:07:48.630 11:19:23 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:07:48.630 00:07:48.630 00:07:48.630 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.630 http://cunit.sourceforge.net/ 00:07:48.630 00:07:48.630 00:07:48.630 Suite: nvmf 00:07:48.630 Test: test_get_rw_params ...passed 00:07:48.630 Test: test_get_rw_ext_params ...passed 00:07:48.630 Test: test_lba_in_range ...passed 00:07:48.630 Test: test_get_dif_ctx ...passed 00:07:48.630 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:07:48.630 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-13 11:19:23.256489] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:07:48.630 [2024-07-13 11:19:23.256888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:07:48.630 [2024-07-13 11:19:23.257063] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:07:48.630 passed 00:07:48.630 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-13 11:19:23.257369] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:07:48.630 [2024-07-13 11:19:23.257479] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 
972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:07:48.630 passed 00:07:48.630 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-13 11:19:23.257880] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:07:48.630 [2024-07-13 11:19:23.258024] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:07:48.630 [2024-07-13 11:19:23.258219] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:07:48.630 [2024-07-13 11:19:23.258355] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:07:48.630 passed 00:07:48.630 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:07:48.630 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:07:48.630 00:07:48.630 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.630 suites 1 1 n/a 0 0 00:07:48.630 tests 10 10 10 0 0 00:07:48.631 asserts 159 159 159 0 n/a 00:07:48.631 00:07:48.631 Elapsed time = 0.001 seconds 00:07:48.631 11:19:23 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:07:48.631 00:07:48.631 00:07:48.631 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.631 http://cunit.sourceforge.net/ 00:07:48.631 00:07:48.631 00:07:48.631 Suite: nvmf 00:07:48.631 Test: test_discovery_log ...passed 00:07:48.631 Test: test_discovery_log_with_filters ...passed 00:07:48.631 00:07:48.631 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.631 suites 1 1 n/a 0 0 00:07:48.631 tests 2 2 2 0 0 00:07:48.631 asserts 238 238 238 0 n/a 00:07:48.631 00:07:48.631 Elapsed time = 0.002 seconds 00:07:48.631 11:19:23 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:07:48.631 00:07:48.631 00:07:48.631 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.631 http://cunit.sourceforge.net/ 00:07:48.631 00:07:48.631 00:07:48.631 Suite: nvmf 00:07:48.631 Test: nvmf_test_create_subsystem ...[2024-07-13 11:19:23.332481] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:07:48.631 [2024-07-13 11:19:23.332828] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:07:48.631 [2024-07-13 11:19:23.333019] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:07:48.631 [2024-07-13 11:19:23.333203] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:07:48.631 [2024-07-13 11:19:23.333344] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 
00:07:48.631 [2024-07-13 11:19:23.333511] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:07:48.631 [2024-07-13 11:19:23.333711] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:07:48.631 [2024-07-13 11:19:23.333878] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:07:48.631 [2024-07-13 11:19:23.334019] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:07:48.631 [2024-07-13 11:19:23.334154] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:07:48.631 [2024-07-13 11:19:23.334215] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:07:48.631 [2024-07-13 11:19:23.334415] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:07:48.631 [2024-07-13 11:19:23.334642] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:07:48.631 [2024-07-13 11:19:23.334829] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:07:48.631 [2024-07-13 11:19:23.335098] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:07:48.631 [2024-07-13 11:19:23.335255] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:07:48.631 [2024-07-13 11:19:23.335444] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:07:48.631 [2024-07-13 11:19:23.335587] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:07:48.631 [2024-07-13 11:19:23.335718] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:48.631 [2024-07-13 11:19:23.335876] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:07:48.631 [2024-07-13 11:19:23.336040] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:07:48.631 [2024-07-13 11:19:23.336177] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:07:48.631 passed 00:07:48.631 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-13 11:19:23.336666] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:07:48.631 [2024-07-13 11:19:23.336814] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2027:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:07:48.631 passed 00:07:48.631 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-13 11:19:23.337148] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:07:48.631 passed 00:07:48.631 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:07:48.631 Test: test_spdk_nvmf_ns_visible ...[2024-07-13 11:19:23.337731] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:07:48.631 passed 00:07:48.631 Test: test_reservation_register ...[2024-07-13 11:19:23.338483] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:48.631 [2024-07-13 11:19:23.338718] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3160:nvmf_ns_reservation_register: *ERROR*: No registrant 00:07:48.631 passed 00:07:48.631 Test: test_reservation_register_with_ptpl ...passed 00:07:48.631 Test: test_reservation_acquire_preempt_1 ...[2024-07-13 11:19:23.340221] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:48.631 passed 00:07:48.631 Test: test_reservation_acquire_release_with_ptpl ...passed 00:07:48.631 Test: test_reservation_release ...[2024-07-13 11:19:23.342372] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:48.631 passed 00:07:48.631 Test: test_reservation_unregister_notification ...[2024-07-13 11:19:23.342936] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:48.631 passed 00:07:48.631 Test: test_reservation_release_notification ...[2024-07-13 11:19:23.343478] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:48.631 passed 00:07:48.631 Test: test_reservation_release_notification_write_exclusive ...[2024-07-13 11:19:23.344003] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:48.631 passed 00:07:48.631 Test: test_reservation_clear_notification ...[2024-07-13 11:19:23.344511] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:48.631 passed 00:07:48.631 Test: test_reservation_preempt_notification ...[2024-07-13 11:19:23.345030] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:07:48.631 passed 00:07:48.631 Test: test_spdk_nvmf_ns_event ...passed 00:07:48.631 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:07:48.631 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:07:48.631 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-13 11:19:23.346646] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:07:48.631 [2024-07-13 11:19:23.346844] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:07:48.631 passed 00:07:48.632 Test: test_nvmf_ns_reservation_report ...[2024-07-13 11:19:23.347305] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3465:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:07:48.632 passed 00:07:48.632 Test: test_nvmf_nqn_is_valid ...[2024-07-13 
11:19:23.347666] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:07:48.632 [2024-07-13 11:19:23.347825] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:67851b8d-97d8-42d1-a5d9-f8a26ecc850": uuid is not the correct length 00:07:48.632 [2024-07-13 11:19:23.347975] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:07:48.632 passed 00:07:48.632 Test: test_nvmf_ns_reservation_restore ...[2024-07-13 11:19:23.348328] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2659:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:07:48.632 passed 00:07:48.632 Test: test_nvmf_subsystem_state_change ...passed 00:07:48.632 Test: test_nvmf_reservation_custom_ops ...passed 00:07:48.632 00:07:48.632 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.632 suites 1 1 n/a 0 0 00:07:48.632 tests 24 24 24 0 0 00:07:48.632 asserts 499 499 499 0 n/a 00:07:48.632 00:07:48.632 Elapsed time = 0.010 seconds 00:07:48.632 11:19:23 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:07:48.891 00:07:48.891 00:07:48.891 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.891 http://cunit.sourceforge.net/ 00:07:48.891 00:07:48.891 00:07:48.891 Suite: nvmf 00:07:48.891 Test: test_nvmf_tcp_create ...[2024-07-13 11:19:23.393856] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:07:48.891 passed 00:07:48.891 Test: test_nvmf_tcp_destroy ...passed 00:07:48.891 Test: test_nvmf_tcp_poll_group_create ...passed 00:07:48.891 Test: test_nvmf_tcp_send_c2h_data ...passed 00:07:48.891 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:07:48.891 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:07:48.891 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:07:48.891 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-13 11:19:23.469294] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.469422] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea45e50 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.469661] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea45e50 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.469791] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.469839] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea45e50 is same with the state(5) to be set 00:07:48.891 passed 00:07:48.891 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:07:48.891 Test: test_nvmf_tcp_icreq_handle ...[2024-07-13 11:19:23.470100] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:48.891 [2024-07-13 11:19:23.470293] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:07:48.891 [2024-07-13 11:19:23.470423] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea45e50 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.470528] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:07:48.891 [2024-07-13 11:19:23.470580] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea45e50 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.470647] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.470729] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea45e50 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.470828] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.470921] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea45e50 is same with the state(5) to be set 00:07:48.891 passed 00:07:48.891 Test: test_nvmf_tcp_check_xfer_type ...passed 00:07:48.891 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-13 11:19:23.471418] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2517:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:07:48.891 [2024-07-13 11:19:23.471488] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.471608] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea45e50 is same with the state(5) to be set 00:07:48.891 passed 00:07:48.891 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-13 11:19:23.471784] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2249:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffeeea46bb0 00:07:48.891 [2024-07-13 11:19:23.471950] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.472081] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.472206] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2306:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffeeea46310 00:07:48.891 [2024-07-13 11:19:23.472347] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.472484] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.472585] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2259:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:07:48.891 [2024-07-13 11:19:23.472640] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.472759] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.472820] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2298:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:07:48.891 [2024-07-13 11:19:23.472894] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.472994] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.473046] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.473238] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.473390] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.473511] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.473630] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.473679] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.473790] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.473835] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.474061] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.474195] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 [2024-07-13 11:19:23.474328] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:07:48.891 [2024-07-13 11:19:23.474385] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffeeea46310 is same with the state(5) to be set 00:07:48.891 passed 00:07:48.891 Test: test_nvmf_tcp_tls_add_remove_credentials ...passed 00:07:48.891 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-13 11:19:23.489507] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:07:48.891 [2024-07-13 11:19:23.489584] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:07:48.891 passed 00:07:48.891 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-13 11:19:23.490125] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:07:48.891 [2024-07-13 11:19:23.490256] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:07:48.891 passed 00:07:48.891 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-13 11:19:23.490642] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:07:48.891 [2024-07-13 11:19:23.490776] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:07:48.891 passed 00:07:48.891 00:07:48.891 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.891 suites 1 1 n/a 0 0 00:07:48.891 tests 17 17 17 0 0 00:07:48.891 asserts 222 222 222 0 n/a 00:07:48.891 00:07:48.891 Elapsed time = 0.107 seconds 00:07:48.891 11:19:23 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:07:48.891 00:07:48.891 00:07:48.891 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.891 http://cunit.sourceforge.net/ 00:07:48.891 00:07:48.891 00:07:48.891 Suite: nvmf 00:07:48.891 Test: test_nvmf_tgt_create_poll_group ...passed 00:07:48.891 00:07:48.891 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.891 suites 1 1 n/a 0 0 00:07:48.891 tests 1 1 1 0 0 00:07:48.891 asserts 17 17 17 0 n/a 00:07:48.891 00:07:48.891 Elapsed time = 0.023 seconds 00:07:49.151 00:07:49.151 real 0m0.468s 00:07:49.151 user 0m0.221s 00:07:49.151 sys 0m0.220s 00:07:49.151 11:19:23 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.151 11:19:23 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 ************************************ 00:07:49.151 END TEST unittest_nvmf 00:07:49.151 ************************************ 00:07:49.151 11:19:23 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:49.151 11:19:23 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:49.151 11:19:23 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:49.151 11:19:23 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:49.151 11:19:23 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.151 11:19:23 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.151 11:19:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 ************************************ 00:07:49.151 START TEST unittest_nvmf_rdma 00:07:49.151 ************************************ 00:07:49.151 11:19:23 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:07:49.151 00:07:49.151 00:07:49.151 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.151 http://cunit.sourceforge.net/ 00:07:49.151 00:07:49.151 00:07:49.151 Suite: nvmf 00:07:49.151 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-13 11:19:23.729413] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:07:49.151 [2024-07-13 11:19:23.729756] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:07:49.151 passed 00:07:49.151 Test: test_spdk_nvmf_rdma_request_process ...[2024-07-13 11:19:23.729800] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:07:49.151 passed 00:07:49.151 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:07:49.151 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:07:49.151 Test: test_nvmf_rdma_opts_init ...passed 00:07:49.151 Test: test_nvmf_rdma_request_free_data ...passed 00:07:49.151 Test: test_nvmf_rdma_resources_create ...passed 00:07:49.151 Test: test_nvmf_rdma_qpair_compare ...passed 00:07:49.151 Test: test_nvmf_rdma_resize_cq ...[2024-07-13 11:19:23.732317] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:07:49.151 Using CQ of insufficient size may lead to CQ overrun 00:07:49.151 passed 00:07:49.151 00:07:49.151 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.151 suites 1 1 n/a 0 0 00:07:49.151 tests 9 9 9 0 0 00:07:49.151 asserts 579 579 579 0 n/a 00:07:49.151 00:07:49.151 Elapsed time = 0.003 seconds 00:07:49.151 [2024-07-13 11:19:23.732422] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:07:49.151 [2024-07-13 11:19:23.732487] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:07:49.151 00:07:49.151 real 0m0.044s 00:07:49.151 user 0m0.023s 00:07:49.151 sys 0m0.021s 00:07:49.151 ************************************ 00:07:49.151 END TEST unittest_nvmf_rdma 00:07:49.151 ************************************ 00:07:49.151 11:19:23 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.151 11:19:23 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 11:19:23 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:49.151 11:19:23 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:49.151 11:19:23 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:07:49.151 11:19:23 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.151 11:19:23 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.151 11:19:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:49.151 ************************************ 00:07:49.151 START TEST unittest_scsi 00:07:49.151 ************************************ 00:07:49.151 11:19:23 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:07:49.151 11:19:23 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:07:49.151 00:07:49.151 00:07:49.151 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.151 http://cunit.sourceforge.net/ 00:07:49.151 00:07:49.151 00:07:49.151 Suite: dev_suite 00:07:49.151 Test: dev_destruct_null_dev ...passed 00:07:49.151 Test: dev_destruct_zero_luns ...passed 
00:07:49.151 Test: dev_destruct_null_lun ...passed 00:07:49.151 Test: dev_destruct_success ...passed 00:07:49.151 Test: dev_construct_num_luns_zero ...[2024-07-13 11:19:23.821144] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:07:49.151 passed 00:07:49.151 Test: dev_construct_no_lun_zero ...[2024-07-13 11:19:23.821472] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:07:49.151 passed 00:07:49.151 Test: dev_construct_null_lun ...passed 00:07:49.151 Test: dev_construct_name_too_long ...passed 00:07:49.151 Test: dev_construct_success ...passed 00:07:49.151 Test: dev_construct_success_lun_zero_not_first ...[2024-07-13 11:19:23.821515] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:07:49.152 [2024-07-13 11:19:23.821551] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:07:49.152 passed 00:07:49.152 Test: dev_queue_mgmt_task_success ...passed 00:07:49.152 Test: dev_queue_task_success ...passed 00:07:49.152 Test: dev_stop_success ...passed 00:07:49.152 Test: dev_add_port_max_ports ...[2024-07-13 11:19:23.821806] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:07:49.152 passed 00:07:49.152 Test: dev_add_port_construct_failure1 ...passed 00:07:49.152 Test: dev_add_port_construct_failure2 ...passed 00:07:49.152 Test: dev_add_port_success1 ...passed 00:07:49.152 Test: dev_add_port_success2 ...passed 00:07:49.152 Test: dev_add_port_success3 ...passed 00:07:49.152 Test: dev_find_port_by_id_num_ports_zero ...passed 00:07:49.152 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:07:49.152 Test: dev_find_port_by_id_success ...passed 00:07:49.152 Test: dev_add_lun_bdev_not_found ...passed 00:07:49.152 Test: dev_add_lun_no_free_lun_id ...[2024-07-13 11:19:23.821890] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:07:49.152 [2024-07-13 11:19:23.821977] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:07:49.152 passed 00:07:49.152 Test: dev_add_lun_success1 ...[2024-07-13 11:19:23.822296] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:07:49.152 passed 00:07:49.152 Test: dev_add_lun_success2 ...passed 00:07:49.152 Test: dev_check_pending_tasks ...passed 00:07:49.152 Test: dev_iterate_luns ...passed 00:07:49.152 Test: dev_find_free_lun ...passed 00:07:49.152 00:07:49.152 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.152 suites 1 1 n/a 0 0 00:07:49.152 tests 29 29 29 0 0 00:07:49.152 asserts 97 97 97 0 n/a 00:07:49.152 00:07:49.152 Elapsed time = 0.002 seconds 00:07:49.152 11:19:23 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:07:49.152 00:07:49.152 00:07:49.152 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.152 http://cunit.sourceforge.net/ 00:07:49.152 00:07:49.152 00:07:49.152 Suite: lun_suite 
00:07:49.152 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:07:49.152 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:07:49.152 Test: lun_task_mgmt_execute_lun_reset ...passed 00:07:49.152 Test: lun_task_mgmt_execute_target_reset ...passed 00:07:49.152 Test: lun_task_mgmt_execute_invalid_case ...passed 00:07:49.152 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-07-13 11:19:23.858885] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:07:49.152 [2024-07-13 11:19:23.859198] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:07:49.152 [2024-07-13 11:19:23.859318] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:07:49.152 passed 00:07:49.152 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:07:49.152 Test: lun_append_task_null_lun_not_supported ...passed 00:07:49.152 Test: lun_execute_scsi_task_pending ...passed 00:07:49.152 Test: lun_execute_scsi_task_complete ...passed 00:07:49.152 Test: lun_execute_scsi_task_resize ...passed 00:07:49.152 Test: lun_destruct_success ...passed 00:07:49.152 Test: lun_construct_null_ctx ...passed 00:07:49.152 Test: lun_construct_success ...passed 00:07:49.152 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:07:49.152 Test: lun_reset_task_suspend_scsi_task ...passed 00:07:49.152 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:07:49.152 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:07:49.152 00:07:49.152 [2024-07-13 11:19:23.859503] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:07:49.152 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.152 suites 1 1 n/a 0 0 00:07:49.152 tests 18 18 18 0 0 00:07:49.152 asserts 153 153 153 0 n/a 00:07:49.152 00:07:49.152 Elapsed time = 0.001 seconds 00:07:49.152 11:19:23 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:07:49.411 00:07:49.411 00:07:49.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.411 http://cunit.sourceforge.net/ 00:07:49.411 00:07:49.411 00:07:49.411 Suite: scsi_suite 00:07:49.411 Test: scsi_init ...passed 00:07:49.411 00:07:49.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.411 suites 1 1 n/a 0 0 00:07:49.411 tests 1 1 1 0 0 00:07:49.411 asserts 1 1 1 0 n/a 00:07:49.411 00:07:49.411 Elapsed time = 0.000 seconds 00:07:49.411 11:19:23 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:07:49.411 00:07:49.411 00:07:49.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.411 http://cunit.sourceforge.net/ 00:07:49.411 00:07:49.411 00:07:49.411 Suite: translation_suite 00:07:49.411 Test: mode_select_6_test ...passed 00:07:49.411 Test: mode_select_6_test2 ...passed 00:07:49.411 Test: mode_sense_6_test ...passed 00:07:49.411 Test: mode_sense_10_test ...passed 00:07:49.411 Test: inquiry_evpd_test ...passed 00:07:49.411 Test: inquiry_standard_test ...passed 00:07:49.411 Test: inquiry_overflow_test ...passed 00:07:49.411 Test: task_complete_test ...passed 00:07:49.411 Test: lba_range_test ...passed 00:07:49.411 Test: xfer_len_test ...passed 00:07:49.411 Test: xfer_test ...[2024-07-13 11:19:23.926532] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:07:49.411 passed 00:07:49.411 Test: scsi_name_padding_test ...passed 00:07:49.411 Test: get_dif_ctx_test ...passed 00:07:49.411 Test: unmap_split_test ...passed 00:07:49.411 00:07:49.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.411 suites 1 1 n/a 0 0 00:07:49.411 tests 14 14 14 0 0 00:07:49.411 asserts 1205 1205 1205 0 n/a 00:07:49.411 00:07:49.411 Elapsed time = 0.004 seconds 00:07:49.411 11:19:23 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:07:49.411 00:07:49.411 00:07:49.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.411 http://cunit.sourceforge.net/ 00:07:49.411 00:07:49.411 00:07:49.411 Suite: reservation_suite 00:07:49.411 Test: test_reservation_register ...[2024-07-13 11:19:23.957404] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:49.411 passed 00:07:49.411 Test: test_reservation_reserve ...[2024-07-13 11:19:23.957872] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:49.411 [2024-07-13 11:19:23.957970] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:07:49.411 passed 00:07:49.411 Test: test_all_registrant_reservation_reserve ...[2024-07-13 11:19:23.958086] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:07:49.411 [2024-07-13 11:19:23.958176] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:49.411 passed 00:07:49.411 Test: test_all_registrant_reservation_access ...passed 00:07:49.411 Test: test_reservation_preempt_non_all_regs ...[2024-07-13 11:19:23.958339] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:49.411 [2024-07-13 11:19:23.958419] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:07:49.411 [2024-07-13 11:19:23.958488] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:07:49.411 passed 00:07:49.411 Test: test_reservation_preempt_all_regs ...[2024-07-13 11:19:23.958569] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:49.411 [2024-07-13 11:19:23.958649] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:07:49.411 [2024-07-13 11:19:23.958812] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:49.411 passed 00:07:49.411 Test: test_reservation_cmds_conflict ...passed 00:07:49.411 Test: test_scsi2_reserve_release ...passed[2024-07-13 11:19:23.958994] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:49.411 [2024-07-13 11:19:23.959081] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: 
Registrants only reservation type reject command 0x2a 00:07:49.411 [2024-07-13 11:19:23.959177] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:49.411 [2024-07-13 11:19:23.959212] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:49.411 [2024-07-13 11:19:23.959253] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:07:49.411 [2024-07-13 11:19:23.959286] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:07:49.411 00:07:49.411 Test: test_pr_with_scsi2_reserve_release ...passed 00:07:49.411 00:07:49.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.411 suites 1 1 n/a 0 0 00:07:49.411 tests 9 9 9 0 0 00:07:49.411 asserts 344 344 344 0 n/a 00:07:49.411 00:07:49.411 Elapsed time = 0.002 seconds 00:07:49.411 [2024-07-13 11:19:23.959410] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:07:49.411 00:07:49.411 real 0m0.172s 00:07:49.411 user 0m0.107s 00:07:49.411 sys 0m0.066s 00:07:49.411 ************************************ 00:07:49.411 END TEST unittest_scsi 00:07:49.411 ************************************ 00:07:49.411 11:19:23 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.411 11:19:23 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:07:49.411 11:19:24 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:49.411 11:19:24 unittest -- unit/unittest.sh@278 -- # uname -s 00:07:49.411 11:19:24 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:07:49.411 11:19:24 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:07:49.411 11:19:24 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.411 11:19:24 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.411 11:19:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:49.411 ************************************ 00:07:49.411 START TEST unittest_sock 00:07:49.411 ************************************ 00:07:49.411 11:19:24 unittest.unittest_sock -- common/autotest_common.sh@1123 -- # unittest_sock 00:07:49.411 11:19:24 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:07:49.411 00:07:49.411 00:07:49.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.411 http://cunit.sourceforge.net/ 00:07:49.411 00:07:49.411 00:07:49.411 Suite: sock 00:07:49.411 Test: posix_sock ...passed 00:07:49.411 Test: ut_sock ...passed 00:07:49.411 Test: posix_sock_group ...passed 00:07:49.411 Test: ut_sock_group ...passed 00:07:49.411 Test: posix_sock_group_fairness ...passed 00:07:49.411 Test: _posix_sock_close ...passed 00:07:49.411 Test: sock_get_default_opts ...passed 00:07:49.411 Test: ut_sock_impl_get_set_opts ...passed 00:07:49.411 Test: posix_sock_impl_get_set_opts ...passed 00:07:49.411 Test: ut_sock_map ...passed 00:07:49.411 Test: override_impl_opts ...passed 00:07:49.411 Test: ut_sock_group_get_ctx ...passed 00:07:49.411 00:07:49.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.411 suites 1 1 n/a 0 0 00:07:49.411 tests 12 12 12 0 0 00:07:49.411 asserts 
349 349 349 0 n/a 00:07:49.411 00:07:49.411 Elapsed time = 0.008 seconds 00:07:49.411 11:19:24 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:07:49.411 00:07:49.411 00:07:49.411 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.411 http://cunit.sourceforge.net/ 00:07:49.411 00:07:49.411 00:07:49.411 Suite: posix 00:07:49.411 Test: flush ...passed 00:07:49.411 00:07:49.411 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.411 suites 1 1 n/a 0 0 00:07:49.411 tests 1 1 1 0 0 00:07:49.411 asserts 28 28 28 0 n/a 00:07:49.411 00:07:49.411 Elapsed time = 0.000 seconds 00:07:49.411 11:19:24 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:49.411 00:07:49.411 real 0m0.106s 00:07:49.411 user 0m0.033s 00:07:49.411 sys 0m0.048s 00:07:49.411 ************************************ 00:07:49.411 11:19:24 unittest.unittest_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.411 11:19:24 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:07:49.411 END TEST unittest_sock 00:07:49.411 ************************************ 00:07:49.669 11:19:24 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:49.669 11:19:24 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:49.669 11:19:24 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.669 11:19:24 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.669 11:19:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:49.669 ************************************ 00:07:49.669 START TEST unittest_thread 00:07:49.669 ************************************ 00:07:49.669 11:19:24 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:07:49.669 00:07:49.669 00:07:49.669 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.669 http://cunit.sourceforge.net/ 00:07:49.669 00:07:49.669 00:07:49.669 Suite: io_channel 00:07:49.669 Test: thread_alloc ...passed 00:07:49.669 Test: thread_send_msg ...passed 00:07:49.669 Test: thread_poller ...passed 00:07:49.669 Test: poller_pause ...passed 00:07:49.669 Test: thread_for_each ...passed 00:07:49.669 Test: for_each_channel_remove ...passed 00:07:49.669 Test: for_each_channel_unreg ...[2024-07-13 11:19:24.213794] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x7ffc1d89ee90 already registered (old:0x613000000200 new:0x6130000003c0) 00:07:49.669 passed 00:07:49.669 Test: thread_name ...passed 00:07:49.669 Test: channel ...[2024-07-13 11:19:24.216767] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x562acfa00180 00:07:49.669 passed 00:07:49.669 Test: channel_destroy_races ...passed 00:07:49.669 Test: thread_exit_test ...[2024-07-13 11:19:24.220331] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x619000007380 got timeout, and move it to the exited state forcefully 00:07:49.669 passed 00:07:49.669 Test: thread_update_stats_test ...passed 00:07:49.669 Test: nested_channel ...passed 00:07:49.669 Test: device_unregister_and_thread_exit_race ...passed 00:07:49.669 Test: cache_closest_timed_poller ...passed 00:07:49.669 Test: 
multi_timed_pollers_have_same_expiration ...passed 00:07:49.669 Test: io_device_lookup ...passed 00:07:49.670 Test: spdk_spin ...[2024-07-13 11:19:24.227958] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:49.670 [2024-07-13 11:19:24.227995] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1d89ee80 00:07:49.670 [2024-07-13 11:19:24.228072] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:07:49.670 [2024-07-13 11:19:24.229192] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:07:49.670 [2024-07-13 11:19:24.229246] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1d89ee80 00:07:49.670 [2024-07-13 11:19:24.229265] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:49.670 [2024-07-13 11:19:24.229285] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1d89ee80 00:07:49.670 [2024-07-13 11:19:24.229323] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:07:49.670 [2024-07-13 11:19:24.229349] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1d89ee80 00:07:49.670 [2024-07-13 11:19:24.229367] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:07:49.670 [2024-07-13 11:19:24.229398] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7ffc1d89ee80 00:07:49.670 passed 00:07:49.670 Test: for_each_channel_and_thread_exit_race ...passed 00:07:49.670 Test: for_each_thread_and_thread_exit_race ...passed 00:07:49.670 00:07:49.670 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.670 suites 1 1 n/a 0 0 00:07:49.670 tests 20 20 20 0 0 00:07:49.670 asserts 409 409 409 0 n/a 00:07:49.670 00:07:49.670 Elapsed time = 0.034 seconds 00:07:49.670 00:07:49.670 real 0m0.075s 00:07:49.670 user 0m0.048s 00:07:49.670 sys 0m0.027s 00:07:49.670 11:19:24 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.670 11:19:24 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:07:49.670 ************************************ 00:07:49.670 END TEST unittest_thread 00:07:49.670 ************************************ 00:07:49.670 11:19:24 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:49.670 11:19:24 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:49.670 11:19:24 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.670 11:19:24 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.670 11:19:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:49.670 ************************************ 00:07:49.670 START TEST unittest_iobuf 00:07:49.670 ************************************ 00:07:49.670 11:19:24 unittest.unittest_iobuf 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:07:49.670 00:07:49.670 00:07:49.670 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.670 http://cunit.sourceforge.net/ 00:07:49.670 00:07:49.670 00:07:49.670 Suite: io_channel 00:07:49.670 Test: iobuf ...passed 00:07:49.670 Test: iobuf_cache ...[2024-07-13 11:19:24.331580] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:49.670 [2024-07-13 11:19:24.331977] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:49.670 [2024-07-13 11:19:24.332112] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:07:49.670 [2024-07-13 11:19:24.332158] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:49.670 [2024-07-13 11:19:24.332232] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:07:49.670 [2024-07-13 11:19:24.332268] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:07:49.670 passed 00:07:49.670 00:07:49.670 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.670 suites 1 1 n/a 0 0 00:07:49.670 tests 2 2 2 0 0 00:07:49.670 asserts 107 107 107 0 n/a 00:07:49.670 00:07:49.670 Elapsed time = 0.006 seconds 00:07:49.670 00:07:49.670 real 0m0.042s 00:07:49.670 user 0m0.021s 00:07:49.670 sys 0m0.021s 00:07:49.670 ************************************ 00:07:49.670 END TEST unittest_iobuf 00:07:49.670 ************************************ 00:07:49.670 11:19:24 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.670 11:19:24 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:07:49.670 11:19:24 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:49.670 11:19:24 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:07:49.670 11:19:24 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.670 11:19:24 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.670 11:19:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:49.670 ************************************ 00:07:49.670 START TEST unittest_util 00:07:49.670 ************************************ 00:07:49.670 11:19:24 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:07:49.670 11:19:24 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:07:49.670 00:07:49.670 00:07:49.670 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.670 http://cunit.sourceforge.net/ 00:07:49.670 00:07:49.670 00:07:49.670 Suite: base64 00:07:49.670 Test: test_base64_get_encoded_strlen ...passed 00:07:49.670 Test: test_base64_get_decoded_len ...passed 00:07:49.670 Test: 
test_base64_encode ...passed 00:07:49.670 Test: test_base64_decode ...passed 00:07:49.670 Test: test_base64_urlsafe_encode ...passed 00:07:49.670 Test: test_base64_urlsafe_decode ...passed 00:07:49.670 00:07:49.670 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.670 suites 1 1 n/a 0 0 00:07:49.670 tests 6 6 6 0 0 00:07:49.670 asserts 112 112 112 0 n/a 00:07:49.670 00:07:49.670 Elapsed time = 0.000 seconds 00:07:49.928 11:19:24 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:07:49.928 00:07:49.928 00:07:49.928 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.928 http://cunit.sourceforge.net/ 00:07:49.928 00:07:49.928 00:07:49.928 Suite: bit_array 00:07:49.928 Test: test_1bit ...passed 00:07:49.928 Test: test_64bit ...passed 00:07:49.928 Test: test_find ...passed 00:07:49.928 Test: test_resize ...passed 00:07:49.928 Test: test_errors ...passed 00:07:49.928 Test: test_count ...passed 00:07:49.928 Test: test_mask_store_load ...passed 00:07:49.928 Test: test_mask_clear ...passed 00:07:49.928 00:07:49.928 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.928 suites 1 1 n/a 0 0 00:07:49.928 tests 8 8 8 0 0 00:07:49.928 asserts 5075 5075 5075 0 n/a 00:07:49.928 00:07:49.928 Elapsed time = 0.001 seconds 00:07:49.928 11:19:24 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:07:49.928 00:07:49.928 00:07:49.928 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.928 http://cunit.sourceforge.net/ 00:07:49.928 00:07:49.928 00:07:49.928 Suite: cpuset 00:07:49.928 Test: test_cpuset ...passed 00:07:49.928 Test: test_cpuset_parse ...[2024-07-13 11:19:24.467705] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:07:49.928 [2024-07-13 11:19:24.467981] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:07:49.928 [2024-07-13 11:19:24.468053] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:07:49.928 [2024-07-13 11:19:24.468117] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:07:49.928 [2024-07-13 11:19:24.468140] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:07:49.928 [2024-07-13 11:19:24.468168] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:07:49.928 [2024-07-13 11:19:24.468189] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:07:49.928 passed 00:07:49.928 Test: test_cpuset_fmt ...[2024-07-13 11:19:24.468228] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:07:49.928 passed 00:07:49.928 Test: test_cpuset_foreach ...passed 00:07:49.928 00:07:49.928 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.928 suites 1 1 n/a 0 0 00:07:49.928 tests 4 4 4 0 0 00:07:49.928 asserts 90 90 90 0 n/a 00:07:49.928 00:07:49.928 Elapsed time = 0.002 seconds 00:07:49.928 11:19:24 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:07:49.928 00:07:49.928 
00:07:49.928 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.928 http://cunit.sourceforge.net/ 00:07:49.928 00:07:49.928 00:07:49.928 Suite: crc16 00:07:49.928 Test: test_crc16_t10dif ...passed 00:07:49.928 Test: test_crc16_t10dif_seed ...passed 00:07:49.928 Test: test_crc16_t10dif_copy ...passed 00:07:49.928 00:07:49.928 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.928 suites 1 1 n/a 0 0 00:07:49.928 tests 3 3 3 0 0 00:07:49.928 asserts 5 5 5 0 n/a 00:07:49.928 00:07:49.928 Elapsed time = 0.000 seconds 00:07:49.928 11:19:24 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:07:49.928 00:07:49.928 00:07:49.928 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.928 http://cunit.sourceforge.net/ 00:07:49.928 00:07:49.928 00:07:49.928 Suite: crc32_ieee 00:07:49.928 Test: test_crc32_ieee ...passed 00:07:49.928 00:07:49.928 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.928 suites 1 1 n/a 0 0 00:07:49.928 tests 1 1 1 0 0 00:07:49.928 asserts 1 1 1 0 n/a 00:07:49.928 00:07:49.928 Elapsed time = 0.000 seconds 00:07:49.928 11:19:24 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:07:49.928 00:07:49.928 00:07:49.928 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.928 http://cunit.sourceforge.net/ 00:07:49.928 00:07:49.928 00:07:49.928 Suite: crc32c 00:07:49.928 Test: test_crc32c ...passed 00:07:49.928 Test: test_crc32c_nvme ...passed 00:07:49.928 00:07:49.928 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.928 suites 1 1 n/a 0 0 00:07:49.929 tests 2 2 2 0 0 00:07:49.929 asserts 16 16 16 0 n/a 00:07:49.929 00:07:49.929 Elapsed time = 0.001 seconds 00:07:49.929 11:19:24 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:07:49.929 00:07:49.929 00:07:49.929 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.929 http://cunit.sourceforge.net/ 00:07:49.929 00:07:49.929 00:07:49.929 Suite: crc64 00:07:49.929 Test: test_crc64_nvme ...passed 00:07:49.929 00:07:49.929 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.929 suites 1 1 n/a 0 0 00:07:49.929 tests 1 1 1 0 0 00:07:49.929 asserts 4 4 4 0 n/a 00:07:49.929 00:07:49.929 Elapsed time = 0.000 seconds 00:07:49.929 11:19:24 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:07:49.929 00:07:49.929 00:07:49.929 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.929 http://cunit.sourceforge.net/ 00:07:49.929 00:07:49.929 00:07:49.929 Suite: string 00:07:49.929 Test: test_parse_ip_addr ...passed 00:07:49.929 Test: test_str_chomp ...passed 00:07:49.929 Test: test_parse_capacity ...passed 00:07:49.929 Test: test_sprintf_append_realloc ...passed 00:07:49.929 Test: test_strtol ...passed 00:07:49.929 Test: test_strtoll ...passed 00:07:49.929 Test: test_strarray ...passed 00:07:49.929 Test: test_strcpy_replace ...passed 00:07:49.929 00:07:49.929 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.929 suites 1 1 n/a 0 0 00:07:49.929 tests 8 8 8 0 0 00:07:49.929 asserts 161 161 161 0 n/a 00:07:49.929 00:07:49.929 Elapsed time = 0.001 seconds 00:07:49.929 11:19:24 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:07:49.929 00:07:49.929 00:07:49.929 CUnit - A unit testing framework for C - 
Version 2.1-3 00:07:49.929 http://cunit.sourceforge.net/ 00:07:49.929 00:07:49.929 00:07:49.929 Suite: dif 00:07:49.929 Test: dif_generate_and_verify_test ...[2024-07-13 11:19:24.646870] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:49.929 [2024-07-13 11:19:24.647365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:49.929 [2024-07-13 11:19:24.647648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:07:49.929 [2024-07-13 11:19:24.647914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:49.929 [2024-07-13 11:19:24.648215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:49.929 [2024-07-13 11:19:24.648494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:07:49.929 passed 00:07:49.929 Test: dif_disable_check_test ...[2024-07-13 11:19:24.649489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:49.929 [2024-07-13 11:19:24.649793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:49.929 [2024-07-13 11:19:24.650063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:07:49.929 passed 00:07:49.929 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-13 11:19:24.651106] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:07:49.929 [2024-07-13 11:19:24.651422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:07:49.929 [2024-07-13 11:19:24.651722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:07:49.929 [2024-07-13 11:19:24.652058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:07:49.929 [2024-07-13 11:19:24.652366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:49.929 [2024-07-13 11:19:24.652660] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:49.929 [2024-07-13 11:19:24.652953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:49.929 [2024-07-13 11:19:24.653247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:07:49.929 [2024-07-13 11:19:24.653550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:49.929 [2024-07-13 11:19:24.653865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: 
*ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:49.929 [2024-07-13 11:19:24.654182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:07:49.929 passed 00:07:49.929 Test: dif_apptag_mask_test ...[2024-07-13 11:19:24.654483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:49.929 [2024-07-13 11:19:24.654773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:07:49.929 passed 00:07:49.929 Test: dif_sec_512_md_0_error_test ...passed 00:07:49.929 Test: dif_sec_4096_md_0_error_test ...passed 00:07:49.929 Test: dif_sec_4100_md_128_error_test ...passed[2024-07-13 11:19:24.655009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:49.929 [2024-07-13 11:19:24.655048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:49.929 [2024-07-13 11:19:24.655081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:49.929 [2024-07-13 11:19:24.655141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:49.929 [2024-07-13 11:19:24.655172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:07:49.929 00:07:49.929 Test: dif_guard_seed_test ...passed 00:07:49.929 Test: dif_guard_value_test ...passed 00:07:49.929 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:07:49.929 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:07:49.929 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:49.929 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:50.189 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:50.189 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:07:50.189 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:50.189 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:07:50.189 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:07:50.189 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:50.189 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:07:50.189 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:07:50.189 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:50.189 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:07:50.189 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:50.189 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:07:50.189 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:50.189 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:50.189 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 11:19:24.698871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.189 [2024-07-13 11:19:24.701289] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:50.189 [2024-07-13 11:19:24.703721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.189 [2024-07-13 11:19:24.706130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.189 [2024-07-13 11:19:24.708578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.189 [2024-07-13 11:19:24.710989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.189 [2024-07-13 11:19:24.713400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.189 [2024-07-13 11:19:24.714518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=396e 00:07:50.189 [2024-07-13 11:19:24.715661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.189 [2024-07-13 11:19:24.718062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:07:50.189 [2024-07-13 11:19:24.720502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.189 [2024-07-13 11:19:24.722919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.189 [2024-07-13 11:19:24.725333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.189 [2024-07-13 11:19:24.727752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.189 [2024-07-13 11:19:24.730162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.190 [2024-07-13 11:19:24.731295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a2a5258e 00:07:50.190 [2024-07-13 11:19:24.732432] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.190 [2024-07-13 11:19:24.734835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88014a2d4837a266, Actual=88010a2d4837a266 00:07:50.190 [2024-07-13 11:19:24.737253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.739684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.742088] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 
00:07:50.190 [2024-07-13 11:19:24.744515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.746942] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.190 passed 00:07:50.190 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-13 11:19:24.748072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=78986d3fdad9e508 00:07:50.190 [2024-07-13 11:19:24.748307] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.190 [2024-07-13 11:19:24.748598] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:50.190 [2024-07-13 11:19:24.748876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.749164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.749469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.749748] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.750039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.190 [2024-07-13 11:19:24.750226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=396e 00:07:50.190 [2024-07-13 11:19:24.750423] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.190 [2024-07-13 11:19:24.750698] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:07:50.190 [2024-07-13 11:19:24.751013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.751322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.751618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.190 [2024-07-13 11:19:24.751905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.190 [2024-07-13 11:19:24.752195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.190 [2024-07-13 11:19:24.752384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a2a5258e 00:07:50.190 [2024-07-13 11:19:24.752598] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.190 [2024-07-13 11:19:24.752878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88014a2d4837a266, Actual=88010a2d4837a266 00:07:50.190 [2024-07-13 11:19:24.753162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.753442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.753733] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.754014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.754312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.190 [2024-07-13 11:19:24.754512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=78986d3fdad9e508 00:07:50.190 passed 00:07:50.190 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-13 11:19:24.754746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.190 [2024-07-13 11:19:24.755050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:50.190 [2024-07-13 11:19:24.755343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.755633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.755929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.756214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.756495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.190 [2024-07-13 11:19:24.756691] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=396e 00:07:50.190 [2024-07-13 11:19:24.756882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.190 [2024-07-13 11:19:24.757164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:07:50.190 [2024-07-13 11:19:24.757444] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.757724] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.758008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.190 [2024-07-13 11:19:24.758289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.190 [2024-07-13 11:19:24.758566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.190 [2024-07-13 11:19:24.758766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a2a5258e 00:07:50.190 [2024-07-13 11:19:24.758989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.190 [2024-07-13 11:19:24.759282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88014a2d4837a266, Actual=88010a2d4837a266 00:07:50.190 [2024-07-13 11:19:24.759576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.759864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.760157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.760438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.760739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.190 [2024-07-13 11:19:24.760939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=78986d3fdad9e508 00:07:50.190 passed 00:07:50.190 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-13 11:19:24.761178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.190 [2024-07-13 11:19:24.761476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:50.190 [2024-07-13 11:19:24.761765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.762045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.762358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.762639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.762938] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.190 [2024-07-13 11:19:24.763154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=396e 00:07:50.190 [2024-07-13 11:19:24.763361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.190 [2024-07-13 11:19:24.763639] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:07:50.190 [2024-07-13 11:19:24.763942] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.764232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.764518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.190 [2024-07-13 11:19:24.764807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.190 [2024-07-13 11:19:24.765094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.190 [2024-07-13 11:19:24.765293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a2a5258e 00:07:50.190 [2024-07-13 11:19:24.765500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.190 [2024-07-13 11:19:24.765796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88014a2d4837a266, Actual=88010a2d4837a266 00:07:50.190 [2024-07-13 11:19:24.766077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.766365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.190 [2024-07-13 11:19:24.766652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.766952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.190 [2024-07-13 11:19:24.767269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.191 [2024-07-13 11:19:24.767474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=78986d3fdad9e508 00:07:50.191 passed 00:07:50.191 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-13 11:19:24.767723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.191 [2024-07-13 11:19:24.768003] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:50.191 [2024-07-13 11:19:24.768291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.768580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.768885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.769166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.769453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.191 [2024-07-13 11:19:24.769644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=396e 00:07:50.191 passed 00:07:50.191 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-13 11:19:24.769894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.191 [2024-07-13 11:19:24.770184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:07:50.191 [2024-07-13 11:19:24.770495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.770777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.771077] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.191 [2024-07-13 11:19:24.771373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.191 [2024-07-13 11:19:24.771667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.191 [2024-07-13 11:19:24.771863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a2a5258e 00:07:50.191 [2024-07-13 11:19:24.772100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.191 [2024-07-13 11:19:24.772390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88014a2d4837a266, Actual=88010a2d4837a266 00:07:50.191 [2024-07-13 11:19:24.772670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.772961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.773243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.773531] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.773828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.191 [2024-07-13 11:19:24.774028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=78986d3fdad9e508 00:07:50.191 passed 00:07:50.191 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-13 11:19:24.774246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.191 [2024-07-13 11:19:24.774551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:07:50.191 [2024-07-13 11:19:24.774830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.775149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.775469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.775751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.776041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.191 [2024-07-13 11:19:24.776232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=396e 00:07:50.191 passed 00:07:50.191 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-13 11:19:24.776463] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.191 [2024-07-13 11:19:24.776743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38570660, Actual=38574660 00:07:50.191 [2024-07-13 11:19:24.777046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.777337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.777644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.191 [2024-07-13 11:19:24.777924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.191 [2024-07-13 11:19:24.778215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.191 [2024-07-13 
11:19:24.778407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=a2a5258e 00:07:50.191 [2024-07-13 11:19:24.778650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.191 [2024-07-13 11:19:24.778949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88014a2d4837a266, Actual=88010a2d4837a266 00:07:50.191 [2024-07-13 11:19:24.779260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.779544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.779834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.780115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.780425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.191 passed 00:07:50.191 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...[2024-07-13 11:19:24.780631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=78986d3fdad9e508 00:07:50.191 passed 00:07:50.191 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:50.191 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:50.191 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:50.191 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:50.191 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:50.191 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:50.191 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:50.191 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:50.191 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 11:19:24.824352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.191 [2024-07-13 11:19:24.825462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:07:50.191 [2024-07-13 11:19:24.826554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.827650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.828732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.829805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.830892] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.191 [2024-07-13 11:19:24.831969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=ea59 00:07:50.191 [2024-07-13 11:19:24.833049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.191 [2024-07-13 11:19:24.834126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=775ceec9, Actual=775caec9 00:07:50.191 [2024-07-13 11:19:24.835240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.836345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.837418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.191 [2024-07-13 11:19:24.838516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.191 [2024-07-13 11:19:24.839630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.191 [2024-07-13 11:19:24.840719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=70542342 00:07:50.191 [2024-07-13 11:19:24.841800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.191 [2024-07-13 11:19:24.842924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=af83c7e14798e172, Actual=af8387e14798e172 00:07:50.191 [2024-07-13 11:19:24.844008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.845087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.191 [2024-07-13 11:19:24.846158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.847260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.191 [2024-07-13 11:19:24.848340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.191 [2024-07-13 11:19:24.849438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=c8ff4d09c43477cc 00:07:50.191 passed 00:07:50.191 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-13 11:19:24.849767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.192 [2024-07-13 11:19:24.850018] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:07:50.192 [2024-07-13 11:19:24.850272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.850519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.850790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.192 [2024-07-13 11:19:24.851085] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.192 [2024-07-13 11:19:24.851344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.192 [2024-07-13 11:19:24.851602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=ea59 00:07:50.192 [2024-07-13 11:19:24.851848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.192 [2024-07-13 11:19:24.852106] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=775ceec9, Actual=775caec9 00:07:50.192 [2024-07-13 11:19:24.852376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.852632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.852882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.192 [2024-07-13 11:19:24.853138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.192 [2024-07-13 11:19:24.853387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.192 [2024-07-13 11:19:24.853640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=70542342 00:07:50.192 [2024-07-13 11:19:24.853910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.192 [2024-07-13 11:19:24.854165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=af83c7e14798e172, Actual=af8387e14798e172 00:07:50.192 [2024-07-13 11:19:24.854425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.854684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.854955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 
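The Guard/App Tag/Ref Tag mismatches logged above are expected: the dif_* cases corrupt T10 protection information on purpose and check that verification rejects it. As a rough illustration only, here is a minimal standalone sketch of the three comparisons, assuming a simplified 8-byte PI layout per block and the CRC16-T10DIF polynomial 0x8BB7; it is not the actual code in lib/util/dif.c.

/* Conceptual sketch of the guard, app-tag and ref-tag checks exercised above.
 * NOT the SPDK implementation; layout and error text mirror the log only for
 * readability. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct pi {                 /* simplified T10 protection information, one per block */
    uint16_t guard;         /* CRC16 of the data block */
    uint16_t app_tag;       /* application tag */
    uint32_t ref_tag;       /* reference tag, low 32 bits of the LBA */
};

/* Bitwise CRC16 with the T10-DIF polynomial 0x8BB7, initial value 0. */
static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7) : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Returns 0 when all three tags match, -1 (with a log line shaped like the ones
 * above) on the first mismatch. */
static int dif_verify_block(const uint8_t *data, size_t len, const struct pi *pi,
                            uint64_t lba, uint16_t apptag_mask, uint16_t expected_app)
{
    uint16_t guard = crc16_t10dif(data, len);
    if (guard != pi->guard) {
        fprintf(stderr, "Failed to compare Guard: LBA=%llu, Expected=%x, Actual=%x\n",
                (unsigned long long)lba, (unsigned)guard, (unsigned)pi->guard);
        return -1;
    }
    if ((pi->app_tag & apptag_mask) != (expected_app & apptag_mask)) {
        fprintf(stderr, "Failed to compare App Tag: LBA=%llu, Expected=%x, Actual=%x\n",
                (unsigned long long)lba, (unsigned)expected_app, (unsigned)pi->app_tag);
        return -1;
    }
    if (pi->ref_tag != (uint32_t)lba) {
        fprintf(stderr, "Failed to compare Ref Tag: LBA=%llu, Expected=%x, Actual=%x\n",
                (unsigned long long)lba, (unsigned)(uint32_t)lba, (unsigned)pi->ref_tag);
        return -1;
    }
    return 0;
}

int main(void)
{
    uint8_t block[512] = {0};
    struct pi pi = { .guard = crc16_t10dif(block, sizeof(block)),
                     .app_tag = 0x1234, .ref_tag = 12 };

    /* Corrupt the ref tag the way the negative tests do: verification must now
     * fail, so this "test" succeeds when dif_verify_block() returns non-zero. */
    pi.ref_tag = 0;
    return dif_verify_block(block, sizeof(block), &pi, 12, 0xffff, 0x1234) == 0;
}

In the real suite the same three checks are repeated across many iovec splits, metadata sizes and injected corruptions, which is why the identical messages recur above with different Expected/Actual pairs.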
00:07:50.192 [2024-07-13 11:19:24.855225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.192 [2024-07-13 11:19:24.855504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.192 [2024-07-13 11:19:24.855766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=c8ff4d09c43477cc 00:07:50.192 passed 00:07:50.192 Test: dix_sec_512_md_0_error ...passed 00:07:50.192 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-13 11:19:24.855821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:07:50.192 passed 00:07:50.192 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:07:50.192 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:07:50.192 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:07:50.192 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:07:50.192 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:07:50.192 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:07:50.192 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:07:50.192 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:07:50.192 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-13 11:19:24.898930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.192 [2024-07-13 11:19:24.900020] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:07:50.192 [2024-07-13 11:19:24.901095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.902159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.903284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.192 [2024-07-13 11:19:24.904374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.192 [2024-07-13 11:19:24.905438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.192 [2024-07-13 11:19:24.906514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=ea59 00:07:50.192 [2024-07-13 11:19:24.907602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.192 [2024-07-13 11:19:24.908680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=775ceec9, Actual=775caec9 00:07:50.192 [2024-07-13 11:19:24.909760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.910833] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.911926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.192 [2024-07-13 11:19:24.913002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.192 [2024-07-13 11:19:24.914075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.192 [2024-07-13 11:19:24.915168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=70542342 00:07:50.192 [2024-07-13 11:19:24.916269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.192 [2024-07-13 11:19:24.917334] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=af83c7e14798e172, Actual=af8387e14798e172 00:07:50.192 [2024-07-13 11:19:24.918408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.919495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.920582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.192 [2024-07-13 11:19:24.921647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.192 [2024-07-13 11:19:24.922734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.192 passed 00:07:50.192 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-13 11:19:24.923820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=c8ff4d09c43477cc 00:07:50.192 [2024-07-13 11:19:24.924173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:07:50.192 [2024-07-13 11:19:24.924428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:07:50.192 [2024-07-13 11:19:24.924682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.924939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.925214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.192 [2024-07-13 11:19:24.925469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.192 [2024-07-13 11:19:24.925729] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=7680 00:07:50.192 [2024-07-13 11:19:24.925974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=ea59 00:07:50.192 [2024-07-13 11:19:24.926229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab713ed, Actual=1ab753ed 00:07:50.192 [2024-07-13 11:19:24.926485] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=775ceec9, Actual=775caec9 00:07:50.192 [2024-07-13 11:19:24.926752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.927027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.192 [2024-07-13 11:19:24.927292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.192 [2024-07-13 11:19:24.927550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=400000000058 00:07:50.192 [2024-07-13 11:19:24.927797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=e8209d95 00:07:50.192 [2024-07-13 11:19:24.928057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=70542342 00:07:50.192 [2024-07-13 11:19:24.928318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576e7728ecc20d3, Actual=a576a7728ecc20d3 00:07:50.192 [2024-07-13 11:19:24.928576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=af83c7e14798e172, Actual=af8387e14798e172 00:07:50.192 [2024-07-13 11:19:24.928824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.451 [2024-07-13 11:19:24.929076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:07:50.451 [2024-07-13 11:19:24.929321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.451 [2024-07-13 11:19:24.929577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4058 00:07:50.451 [2024-07-13 11:19:24.929828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=1592be8cd433286c 00:07:50.451 [2024-07-13 11:19:24.930078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=c8ff4d09c43477cc 00:07:50.451 passed 00:07:50.451 Test: set_md_interleave_iovs_test ...passed 00:07:50.451 Test: set_md_interleave_iovs_split_test ...passed 00:07:50.451 Test: dif_generate_stream_pi_16_test ...passed 00:07:50.451 Test: dif_generate_stream_test ...passed 00:07:50.451 Test: set_md_interleave_iovs_alignment_test 
...[2024-07-13 11:19:24.937478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:07:50.451 passed 00:07:50.451 Test: dif_generate_split_test ...passed 00:07:50.451 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:07:50.451 Test: dif_verify_split_test ...passed 00:07:50.451 Test: dif_verify_stream_multi_segments_test ...passed 00:07:50.451 Test: update_crc32c_pi_16_test ...passed 00:07:50.451 Test: update_crc32c_test ...passed 00:07:50.451 Test: dif_update_crc32c_split_test ...passed 00:07:50.451 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:07:50.451 Test: get_range_with_md_test ...passed 00:07:50.451 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:07:50.451 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:07:50.451 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:50.451 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:07:50.451 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:07:50.451 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:07:50.451 Test: dif_generate_and_verify_unmap_test ...passed 00:07:50.451 00:07:50.451 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.451 suites 1 1 n/a 0 0 00:07:50.451 tests 79 79 79 0 0 00:07:50.451 asserts 3584 3584 3584 0 n/a 00:07:50.451 00:07:50.451 Elapsed time = 0.336 seconds 00:07:50.451 11:19:24 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:07:50.451 00:07:50.451 00:07:50.451 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.451 http://cunit.sourceforge.net/ 00:07:50.451 00:07:50.451 00:07:50.451 Suite: iov 00:07:50.451 Test: test_single_iov ...passed 00:07:50.451 Test: test_simple_iov ...passed 00:07:50.451 Test: test_complex_iov ...passed 00:07:50.451 Test: test_iovs_to_buf ...passed 00:07:50.451 Test: test_buf_to_iovs ...passed 00:07:50.451 Test: test_memset ...passed 00:07:50.451 Test: test_iov_one ...passed 00:07:50.451 Test: test_iov_xfer ...passed 00:07:50.451 00:07:50.451 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.451 suites 1 1 n/a 0 0 00:07:50.451 tests 8 8 8 0 0 00:07:50.451 asserts 156 156 156 0 n/a 00:07:50.451 00:07:50.451 Elapsed time = 0.000 seconds 00:07:50.451 11:19:25 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:07:50.451 00:07:50.451 00:07:50.451 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.451 http://cunit.sourceforge.net/ 00:07:50.451 00:07:50.451 00:07:50.451 Suite: math 00:07:50.451 Test: test_serial_number_arithmetic ...passed 00:07:50.451 Suite: erase 00:07:50.451 Test: test_memset_s ...passed 00:07:50.451 00:07:50.451 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.451 suites 2 2 n/a 0 0 00:07:50.451 tests 2 2 2 0 0 00:07:50.451 asserts 18 18 18 0 n/a 00:07:50.451 00:07:50.451 Elapsed time = 0.000 seconds 00:07:50.451 11:19:25 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:07:50.451 00:07:50.451 00:07:50.451 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.451 http://cunit.sourceforge.net/ 00:07:50.451 00:07:50.451 00:07:50.451 Suite: pipe 00:07:50.451 Test: test_create_destroy ...passed 00:07:50.451 Test: test_write_get_buffer ...passed 00:07:50.451 
Test: test_write_advance ...passed 00:07:50.451 Test: test_read_get_buffer ...passed 00:07:50.451 Test: test_read_advance ...passed 00:07:50.451 Test: test_data ...passed 00:07:50.451 00:07:50.451 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.451 suites 1 1 n/a 0 0 00:07:50.451 tests 6 6 6 0 0 00:07:50.451 asserts 251 251 251 0 n/a 00:07:50.451 00:07:50.451 Elapsed time = 0.000 seconds 00:07:50.451 11:19:25 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:07:50.451 00:07:50.451 00:07:50.451 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.451 http://cunit.sourceforge.net/ 00:07:50.451 00:07:50.451 00:07:50.451 Suite: xor 00:07:50.451 Test: test_xor_gen ...passed 00:07:50.451 00:07:50.451 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.451 suites 1 1 n/a 0 0 00:07:50.451 tests 1 1 1 0 0 00:07:50.451 asserts 17 17 17 0 n/a 00:07:50.451 00:07:50.451 Elapsed time = 0.007 seconds 00:07:50.451 00:07:50.451 real 0m0.723s 00:07:50.451 user 0m0.581s 00:07:50.451 sys 0m0.146s 00:07:50.451 ************************************ 00:07:50.451 END TEST unittest_util 00:07:50.451 ************************************ 00:07:50.451 11:19:25 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.451 11:19:25 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:07:50.451 11:19:25 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:50.451 11:19:25 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:07:50.452 11:19:25 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:50.452 11:19:25 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.452 11:19:25 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.452 11:19:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:50.452 ************************************ 00:07:50.452 START TEST unittest_vhost 00:07:50.452 ************************************ 00:07:50.452 11:19:25 unittest.unittest_vhost -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:07:50.452 00:07:50.452 00:07:50.452 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.452 http://cunit.sourceforge.net/ 00:07:50.452 00:07:50.711 00:07:50.711 Suite: vhost_suite 00:07:50.711 Test: desc_to_iov_test ...[2024-07-13 11:19:25.195335] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:07:50.711 passed 00:07:50.711 Test: create_controller_test ...[2024-07-13 11:19:25.200389] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:50.711 [2024-07-13 11:19:25.200639] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:07:50.711 [2024-07-13 11:19:25.200894] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:07:50.711 [2024-07-13 11:19:25.201109] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:07:50.711 [2024-07-13 11:19:25.201286] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 
121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:07:50.711 [2024-07-13 11:19:25.201808] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:07:50.711 [2024-07-13 11:19:25.203157] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:07:50.711 passed 00:07:50.711 Test: session_find_by_vid_test ...passed 00:07:50.711 Test: remove_controller_test ...[2024-07-13 11:19:25.205774] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:07:50.711 passed 00:07:50.711 Test: vq_avail_ring_get_test ...passed 00:07:50.711 Test: vq_packed_ring_test ...passed 00:07:50.711 Test: vhost_blk_construct_test ...passed 00:07:50.711 00:07:50.711 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.711 suites 1 1 n/a 0 0 00:07:50.711 tests 7 7 7 0 0 00:07:50.711 asserts 147 147 147 0 n/a 00:07:50.711 00:07:50.711 Elapsed time = 0.014 seconds 00:07:50.711 00:07:50.711 real 0m0.053s 00:07:50.711 user 0m0.034s 00:07:50.711 sys 0m0.017s 00:07:50.711 11:19:25 unittest.unittest_vhost -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.711 ************************************ 00:07:50.711 END TEST unittest_vhost 00:07:50.711 ************************************ 00:07:50.711 11:19:25 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 11:19:25 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:50.711 11:19:25 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:50.711 11:19:25 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.711 11:19:25 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.711 11:19:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 ************************************ 00:07:50.711 START TEST unittest_dma 00:07:50.711 ************************************ 00:07:50.711 11:19:25 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:07:50.711 00:07:50.711 00:07:50.711 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.711 http://cunit.sourceforge.net/ 00:07:50.711 00:07:50.711 00:07:50.711 Suite: dma_suite 00:07:50.711 Test: test_dma ...[2024-07-13 11:19:25.293899] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:07:50.711 passed 00:07:50.711 00:07:50.711 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.711 suites 1 1 n/a 0 0 00:07:50.711 tests 1 1 1 0 0 00:07:50.711 asserts 54 54 54 0 n/a 00:07:50.711 00:07:50.711 Elapsed time = 0.001 seconds 00:07:50.711 00:07:50.711 real 0m0.034s 00:07:50.711 user 0m0.008s 00:07:50.711 sys 0m0.025s 00:07:50.711 
11:19:25 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.711 11:19:25 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 ************************************ 00:07:50.711 END TEST unittest_dma 00:07:50.711 ************************************ 00:07:50.711 11:19:25 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:50.711 11:19:25 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:07:50.711 11:19:25 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.711 11:19:25 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.711 11:19:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:50.711 ************************************ 00:07:50.711 START TEST unittest_init 00:07:50.711 ************************************ 00:07:50.711 11:19:25 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:07:50.711 11:19:25 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:07:50.711 00:07:50.711 00:07:50.711 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.711 http://cunit.sourceforge.net/ 00:07:50.711 00:07:50.711 00:07:50.711 Suite: subsystem_suite 00:07:50.711 Test: subsystem_sort_test_depends_on_single ...passed 00:07:50.711 Test: subsystem_sort_test_depends_on_multiple ...passed 00:07:50.711 Test: subsystem_sort_test_missing_dependency ...[2024-07-13 11:19:25.378160] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:07:50.712 [2024-07-13 11:19:25.378498] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:07:50.712 passed 00:07:50.712 00:07:50.712 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.712 suites 1 1 n/a 0 0 00:07:50.712 tests 3 3 3 0 0 00:07:50.712 asserts 20 20 20 0 n/a 00:07:50.712 00:07:50.712 Elapsed time = 0.001 seconds 00:07:50.712 00:07:50.712 real 0m0.038s 00:07:50.712 user 0m0.017s 00:07:50.712 sys 0m0.021s 00:07:50.712 11:19:25 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.712 ************************************ 00:07:50.712 END TEST unittest_init 00:07:50.712 ************************************ 00:07:50.712 11:19:25 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:07:50.712 11:19:25 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:50.712 11:19:25 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:07:50.712 11:19:25 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.712 11:19:25 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.712 11:19:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:50.712 ************************************ 00:07:50.712 START TEST unittest_keyring 00:07:50.712 ************************************ 00:07:50.712 11:19:25 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:07:50.970 00:07:50.970 00:07:50.970 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.970 http://cunit.sourceforge.net/ 00:07:50.970 00:07:50.970 00:07:50.970 Suite: keyring 00:07:50.970 Test: test_keyring_add_remove ...[2024-07-13 11:19:25.462347] 
/home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:07:50.970 [2024-07-13 11:19:25.462788] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:07:50.970 [2024-07-13 11:19:25.462989] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:07:50.970 passed 00:07:50.970 Test: test_keyring_get_put ...passed 00:07:50.970 00:07:50.970 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.970 suites 1 1 n/a 0 0 00:07:50.970 tests 2 2 2 0 0 00:07:50.970 asserts 44 44 44 0 n/a 00:07:50.970 00:07:50.970 Elapsed time = 0.001 seconds 00:07:50.970 00:07:50.970 real 0m0.034s 00:07:50.970 user 0m0.019s 00:07:50.970 sys 0m0.014s 00:07:50.970 11:19:25 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.970 11:19:25 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:07:50.970 ************************************ 00:07:50.970 END TEST unittest_keyring 00:07:50.970 ************************************ 00:07:50.970 11:19:25 unittest -- common/autotest_common.sh@1142 -- # return 0 00:07:50.970 11:19:25 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:07:50.970 11:19:25 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:50.970 11:19:25 unittest -- unit/unittest.sh@293 -- # hostname 00:07:50.970 11:19:25 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:50.970 geninfo: WARNING: invalid characters removed from testname! 
00:08:17.590 11:19:51 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:08:21.874 11:19:56 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:24.402 11:19:58 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:26.935 11:20:01 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:30.221 11:20:04 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:32.751 11:20:06 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:35.282 11:20:09 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:08:37.211 11:20:11 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:37.211 11:20:11 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:37.778 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 
00:08:37.778 Found 324 entries. 00:08:37.778 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:08:37.778 Writing .css and .png files. 00:08:37.778 Generating output. 00:08:38.037 Processing file include/linux/virtio_ring.h 00:08:38.302 Processing file include/spdk/endian.h 00:08:38.302 Processing file include/spdk/bdev_module.h 00:08:38.302 Processing file include/spdk/thread.h 00:08:38.302 Processing file include/spdk/trace.h 00:08:38.302 Processing file include/spdk/nvme.h 00:08:38.302 Processing file include/spdk/nvmf_transport.h 00:08:38.302 Processing file include/spdk/mmio.h 00:08:38.302 Processing file include/spdk/histogram_data.h 00:08:38.302 Processing file include/spdk/nvme_spec.h 00:08:38.302 Processing file include/spdk/base64.h 00:08:38.302 Processing file include/spdk/util.h 00:08:38.564 Processing file include/spdk_internal/nvme_tcp.h 00:08:38.564 Processing file include/spdk_internal/sgl.h 00:08:38.564 Processing file include/spdk_internal/sock.h 00:08:38.564 Processing file include/spdk_internal/utf.h 00:08:38.564 Processing file include/spdk_internal/rdma_utils.h 00:08:38.564 Processing file include/spdk_internal/virtio.h 00:08:38.564 Processing file lib/accel/accel_rpc.c 00:08:38.564 Processing file lib/accel/accel_sw.c 00:08:38.564 Processing file lib/accel/accel.c 00:08:38.822 Processing file lib/bdev/part.c 00:08:38.822 Processing file lib/bdev/bdev.c 00:08:38.822 Processing file lib/bdev/bdev_rpc.c 00:08:38.822 Processing file lib/bdev/scsi_nvme.c 00:08:38.822 Processing file lib/bdev/bdev_zone.c 00:08:39.387 Processing file lib/blob/zeroes.c 00:08:39.387 Processing file lib/blob/blobstore.h 00:08:39.387 Processing file lib/blob/blobstore.c 00:08:39.387 Processing file lib/blob/blob_bs_dev.c 00:08:39.387 Processing file lib/blob/request.c 00:08:39.387 Processing file lib/blobfs/blobfs.c 00:08:39.387 Processing file lib/blobfs/tree.c 00:08:39.387 Processing file lib/conf/conf.c 00:08:39.387 Processing file lib/dma/dma.c 00:08:39.646 Processing file lib/env_dpdk/pci_virtio.c 00:08:39.646 Processing file lib/env_dpdk/sigbus_handler.c 00:08:39.646 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:08:39.646 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:08:39.646 Processing file lib/env_dpdk/env.c 00:08:39.646 Processing file lib/env_dpdk/memory.c 00:08:39.646 Processing file lib/env_dpdk/pci_vmd.c 00:08:39.646 Processing file lib/env_dpdk/pci_event.c 00:08:39.646 Processing file lib/env_dpdk/init.c 00:08:39.646 Processing file lib/env_dpdk/pci.c 00:08:39.646 Processing file lib/env_dpdk/pci_idxd.c 00:08:39.646 Processing file lib/env_dpdk/threads.c 00:08:39.646 Processing file lib/env_dpdk/pci_ioat.c 00:08:39.646 Processing file lib/env_dpdk/pci_dpdk.c 00:08:39.904 Processing file lib/event/reactor.c 00:08:39.904 Processing file lib/event/app_rpc.c 00:08:39.904 Processing file lib/event/log_rpc.c 00:08:39.904 Processing file lib/event/app.c 00:08:39.904 Processing file lib/event/scheduler_static.c 00:08:40.470 Processing file lib/ftl/ftl_writer.h 00:08:40.471 Processing file lib/ftl/ftl_sb.c 00:08:40.471 Processing file lib/ftl/ftl_nv_cache.h 00:08:40.471 Processing file lib/ftl/ftl_init.c 00:08:40.471 Processing file lib/ftl/ftl_debug.h 00:08:40.471 Processing file lib/ftl/ftl_band_ops.c 00:08:40.471 Processing file lib/ftl/ftl_io.h 00:08:40.471 Processing file lib/ftl/ftl_p2l.c 00:08:40.471 Processing file lib/ftl/ftl_l2p.c 00:08:40.471 Processing file lib/ftl/ftl_nv_cache.c 00:08:40.471 Processing file lib/ftl/ftl_l2p_cache.c 00:08:40.471 Processing file 
lib/ftl/ftl_io.c 00:08:40.471 Processing file lib/ftl/ftl_l2p_flat.c 00:08:40.471 Processing file lib/ftl/ftl_debug.c 00:08:40.471 Processing file lib/ftl/ftl_writer.c 00:08:40.471 Processing file lib/ftl/ftl_layout.c 00:08:40.471 Processing file lib/ftl/ftl_nv_cache_io.h 00:08:40.471 Processing file lib/ftl/ftl_core.h 00:08:40.471 Processing file lib/ftl/ftl_trace.c 00:08:40.471 Processing file lib/ftl/ftl_core.c 00:08:40.471 Processing file lib/ftl/ftl_reloc.c 00:08:40.471 Processing file lib/ftl/ftl_band.h 00:08:40.471 Processing file lib/ftl/ftl_rq.c 00:08:40.471 Processing file lib/ftl/ftl_band.c 00:08:40.471 Processing file lib/ftl/base/ftl_base_dev.c 00:08:40.471 Processing file lib/ftl/base/ftl_base_bdev.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:08:40.729 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:08:40.729 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:08:40.729 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:08:40.988 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:08:40.988 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:08:40.988 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:08:40.988 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:08:40.988 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:08:40.988 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:08:40.988 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:08:40.988 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:08:41.247 Processing file lib/ftl/utils/ftl_md.c 00:08:41.247 Processing file lib/ftl/utils/ftl_property.h 00:08:41.247 Processing file lib/ftl/utils/ftl_mempool.c 00:08:41.247 Processing file lib/ftl/utils/ftl_df.h 00:08:41.247 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:08:41.247 Processing file lib/ftl/utils/ftl_conf.c 00:08:41.247 Processing file lib/ftl/utils/ftl_addr_utils.h 00:08:41.247 Processing file lib/ftl/utils/ftl_property.c 00:08:41.247 Processing file lib/ftl/utils/ftl_bitmap.c 00:08:41.247 Processing file lib/idxd/idxd_internal.h 00:08:41.247 Processing file lib/idxd/idxd_user.c 00:08:41.247 Processing file lib/idxd/idxd.c 00:08:41.505 Processing file lib/init/rpc.c 00:08:41.505 Processing file lib/init/json_config.c 00:08:41.505 Processing file lib/init/subsystem.c 00:08:41.505 Processing file lib/init/subsystem_rpc.c 00:08:41.505 Processing file lib/ioat/ioat_internal.h 00:08:41.505 Processing file lib/ioat/ioat.c 00:08:42.072 Processing file lib/iscsi/iscsi_subsystem.c 00:08:42.072 Processing file lib/iscsi/portal_grp.c 00:08:42.072 Processing file lib/iscsi/iscsi.h 00:08:42.072 Processing file lib/iscsi/iscsi.c 00:08:42.072 Processing file lib/iscsi/task.c 00:08:42.072 Processing file lib/iscsi/iscsi_rpc.c 00:08:42.072 Processing file lib/iscsi/tgt_node.c 00:08:42.072 Processing file lib/iscsi/conn.c 00:08:42.072 Processing file lib/iscsi/task.h 00:08:42.072 Processing file lib/iscsi/md5.c 
00:08:42.072 Processing file lib/iscsi/init_grp.c 00:08:42.072 Processing file lib/iscsi/param.c 00:08:42.072 Processing file lib/json/json_parse.c 00:08:42.072 Processing file lib/json/json_write.c 00:08:42.072 Processing file lib/json/json_util.c 00:08:42.331 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:08:42.331 Processing file lib/jsonrpc/jsonrpc_client.c 00:08:42.331 Processing file lib/jsonrpc/jsonrpc_server.c 00:08:42.331 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:08:42.331 Processing file lib/keyring/keyring_rpc.c 00:08:42.331 Processing file lib/keyring/keyring.c 00:08:42.331 Processing file lib/log/log_deprecated.c 00:08:42.331 Processing file lib/log/log_flags.c 00:08:42.331 Processing file lib/log/log.c 00:08:42.590 Processing file lib/lvol/lvol.c 00:08:42.590 Processing file lib/nbd/nbd.c 00:08:42.590 Processing file lib/nbd/nbd_rpc.c 00:08:42.848 Processing file lib/notify/notify.c 00:08:42.848 Processing file lib/notify/notify_rpc.c 00:08:43.413 Processing file lib/nvme/nvme_transport.c 00:08:43.413 Processing file lib/nvme/nvme.c 00:08:43.413 Processing file lib/nvme/nvme_pcie_common.c 00:08:43.413 Processing file lib/nvme/nvme_cuse.c 00:08:43.413 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:08:43.413 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:08:43.413 Processing file lib/nvme/nvme_poll_group.c 00:08:43.413 Processing file lib/nvme/nvme_quirks.c 00:08:43.413 Processing file lib/nvme/nvme_zns.c 00:08:43.413 Processing file lib/nvme/nvme_qpair.c 00:08:43.413 Processing file lib/nvme/nvme_internal.h 00:08:43.413 Processing file lib/nvme/nvme_stubs.c 00:08:43.413 Processing file lib/nvme/nvme_pcie_internal.h 00:08:43.413 Processing file lib/nvme/nvme_rdma.c 00:08:43.413 Processing file lib/nvme/nvme_discovery.c 00:08:43.413 Processing file lib/nvme/nvme_ns.c 00:08:43.413 Processing file lib/nvme/nvme_pcie.c 00:08:43.413 Processing file lib/nvme/nvme_auth.c 00:08:43.413 Processing file lib/nvme/nvme_io_msg.c 00:08:43.413 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:08:43.413 Processing file lib/nvme/nvme_ctrlr.c 00:08:43.413 Processing file lib/nvme/nvme_opal.c 00:08:43.413 Processing file lib/nvme/nvme_ns_cmd.c 00:08:43.413 Processing file lib/nvme/nvme_fabric.c 00:08:43.413 Processing file lib/nvme/nvme_tcp.c 00:08:43.979 Processing file lib/nvmf/nvmf_rpc.c 00:08:43.979 Processing file lib/nvmf/nvmf_internal.h 00:08:43.979 Processing file lib/nvmf/nvmf.c 00:08:43.979 Processing file lib/nvmf/ctrlr.c 00:08:43.979 Processing file lib/nvmf/subsystem.c 00:08:43.979 Processing file lib/nvmf/auth.c 00:08:43.979 Processing file lib/nvmf/ctrlr_bdev.c 00:08:43.979 Processing file lib/nvmf/tcp.c 00:08:43.979 Processing file lib/nvmf/transport.c 00:08:43.979 Processing file lib/nvmf/stubs.c 00:08:43.979 Processing file lib/nvmf/rdma.c 00:08:43.979 Processing file lib/nvmf/ctrlr_discovery.c 00:08:44.237 Processing file lib/rdma_provider/common.c 00:08:44.237 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:08:44.237 Processing file lib/rdma_utils/rdma_utils.c 00:08:44.237 Processing file lib/rpc/rpc.c 00:08:44.495 Processing file lib/scsi/dev.c 00:08:44.495 Processing file lib/scsi/scsi_pr.c 00:08:44.495 Processing file lib/scsi/task.c 00:08:44.495 Processing file lib/scsi/port.c 00:08:44.495 Processing file lib/scsi/scsi.c 00:08:44.495 Processing file lib/scsi/lun.c 00:08:44.495 Processing file lib/scsi/scsi_rpc.c 00:08:44.495 Processing file lib/scsi/scsi_bdev.c 00:08:44.752 Processing file lib/sock/sock.c 00:08:44.752 Processing file 
lib/sock/sock_rpc.c 00:08:44.752 Processing file lib/thread/iobuf.c 00:08:44.752 Processing file lib/thread/thread.c 00:08:44.752 Processing file lib/trace/trace.c 00:08:44.752 Processing file lib/trace/trace_rpc.c 00:08:44.752 Processing file lib/trace/trace_flags.c 00:08:45.011 Processing file lib/trace_parser/trace.cpp 00:08:45.011 Processing file lib/ut/ut.c 00:08:45.011 Processing file lib/ut_mock/mock.c 00:08:45.599 Processing file lib/util/xor.c 00:08:45.599 Processing file lib/util/crc16.c 00:08:45.599 Processing file lib/util/fd.c 00:08:45.599 Processing file lib/util/string.c 00:08:45.599 Processing file lib/util/pipe.c 00:08:45.599 Processing file lib/util/crc32_ieee.c 00:08:45.599 Processing file lib/util/crc64.c 00:08:45.599 Processing file lib/util/uuid.c 00:08:45.599 Processing file lib/util/hexlify.c 00:08:45.599 Processing file lib/util/file.c 00:08:45.599 Processing file lib/util/math.c 00:08:45.599 Processing file lib/util/cpuset.c 00:08:45.599 Processing file lib/util/zipf.c 00:08:45.599 Processing file lib/util/strerror_tls.c 00:08:45.599 Processing file lib/util/iov.c 00:08:45.599 Processing file lib/util/fd_group.c 00:08:45.599 Processing file lib/util/crc32c.c 00:08:45.599 Processing file lib/util/bit_array.c 00:08:45.600 Processing file lib/util/crc32.c 00:08:45.600 Processing file lib/util/dif.c 00:08:45.600 Processing file lib/util/base64.c 00:08:45.600 Processing file lib/vfio_user/host/vfio_user.c 00:08:45.600 Processing file lib/vfio_user/host/vfio_user_pci.c 00:08:45.858 Processing file lib/vhost/vhost.c 00:08:45.858 Processing file lib/vhost/vhost_blk.c 00:08:45.858 Processing file lib/vhost/vhost_rpc.c 00:08:45.858 Processing file lib/vhost/rte_vhost_user.c 00:08:45.858 Processing file lib/vhost/vhost_internal.h 00:08:45.858 Processing file lib/vhost/vhost_scsi.c 00:08:45.858 Processing file lib/virtio/virtio_vhost_user.c 00:08:45.858 Processing file lib/virtio/virtio_vfio_user.c 00:08:45.858 Processing file lib/virtio/virtio_pci.c 00:08:45.858 Processing file lib/virtio/virtio.c 00:08:46.116 Processing file lib/vmd/vmd.c 00:08:46.116 Processing file lib/vmd/led.c 00:08:46.116 Processing file module/accel/dsa/accel_dsa_rpc.c 00:08:46.116 Processing file module/accel/dsa/accel_dsa.c 00:08:46.116 Processing file module/accel/error/accel_error.c 00:08:46.116 Processing file module/accel/error/accel_error_rpc.c 00:08:46.374 Processing file module/accel/iaa/accel_iaa.c 00:08:46.374 Processing file module/accel/iaa/accel_iaa_rpc.c 00:08:46.374 Processing file module/accel/ioat/accel_ioat_rpc.c 00:08:46.374 Processing file module/accel/ioat/accel_ioat.c 00:08:46.374 Processing file module/bdev/aio/bdev_aio_rpc.c 00:08:46.374 Processing file module/bdev/aio/bdev_aio.c 00:08:46.631 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:08:46.631 Processing file module/bdev/delay/vbdev_delay.c 00:08:46.631 Processing file module/bdev/error/vbdev_error.c 00:08:46.631 Processing file module/bdev/error/vbdev_error_rpc.c 00:08:46.888 Processing file module/bdev/ftl/bdev_ftl.c 00:08:46.888 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:08:46.888 Processing file module/bdev/gpt/vbdev_gpt.c 00:08:46.888 Processing file module/bdev/gpt/gpt.h 00:08:46.888 Processing file module/bdev/gpt/gpt.c 00:08:46.888 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:08:46.888 Processing file module/bdev/iscsi/bdev_iscsi.c 00:08:47.145 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:08:47.145 Processing file module/bdev/lvol/vbdev_lvol.c 00:08:47.145 Processing file 
module/bdev/malloc/bdev_malloc.c 00:08:47.145 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:08:47.145 Processing file module/bdev/null/bdev_null_rpc.c 00:08:47.145 Processing file module/bdev/null/bdev_null.c 00:08:47.712 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:08:47.712 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:08:47.712 Processing file module/bdev/nvme/nvme_rpc.c 00:08:47.712 Processing file module/bdev/nvme/vbdev_opal.c 00:08:47.712 Processing file module/bdev/nvme/bdev_mdns_client.c 00:08:47.712 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:08:47.712 Processing file module/bdev/nvme/bdev_nvme.c 00:08:47.712 Processing file module/bdev/passthru/vbdev_passthru.c 00:08:47.712 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:08:47.970 Processing file module/bdev/raid/bdev_raid.c 00:08:47.970 Processing file module/bdev/raid/bdev_raid_sb.c 00:08:47.970 Processing file module/bdev/raid/bdev_raid.h 00:08:47.970 Processing file module/bdev/raid/concat.c 00:08:47.970 Processing file module/bdev/raid/raid1.c 00:08:47.970 Processing file module/bdev/raid/raid5f.c 00:08:47.970 Processing file module/bdev/raid/bdev_raid_rpc.c 00:08:47.970 Processing file module/bdev/raid/raid0.c 00:08:47.970 Processing file module/bdev/split/vbdev_split.c 00:08:47.970 Processing file module/bdev/split/vbdev_split_rpc.c 00:08:48.229 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:08:48.229 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:08:48.229 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:08:48.229 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:08:48.229 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:08:48.229 Processing file module/blob/bdev/blob_bdev.c 00:08:48.229 Processing file module/blobfs/bdev/blobfs_bdev.c 00:08:48.229 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:08:48.486 Processing file module/env_dpdk/env_dpdk_rpc.c 00:08:48.486 Processing file module/event/subsystems/accel/accel.c 00:08:48.486 Processing file module/event/subsystems/bdev/bdev.c 00:08:48.486 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:08:48.486 Processing file module/event/subsystems/iobuf/iobuf.c 00:08:48.744 Processing file module/event/subsystems/iscsi/iscsi.c 00:08:48.744 Processing file module/event/subsystems/keyring/keyring.c 00:08:48.744 Processing file module/event/subsystems/nbd/nbd.c 00:08:48.744 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:08:48.744 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:08:49.002 Processing file module/event/subsystems/scheduler/scheduler.c 00:08:49.002 Processing file module/event/subsystems/scsi/scsi.c 00:08:49.002 Processing file module/event/subsystems/sock/sock.c 00:08:49.002 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:08:49.002 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:08:49.261 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:08:49.261 Processing file module/event/subsystems/vmd/vmd.c 00:08:49.261 Processing file module/keyring/file/keyring_rpc.c 00:08:49.261 Processing file module/keyring/file/keyring.c 00:08:49.261 Processing file module/keyring/linux/keyring.c 00:08:49.261 Processing file module/keyring/linux/keyring_rpc.c 00:08:49.261 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:08:49.519 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:08:49.520 Processing file module/scheduler/gscheduler/gscheduler.c 00:08:49.520 Processing file 
module/sock/sock_kernel.h 00:08:49.520 Processing file module/sock/posix/posix.c 00:08:49.520 Writing directory view page. 00:08:49.520 Overall coverage rate: 00:08:49.520 lines......: 38.9% (40908 of 105101 lines) 00:08:49.520 functions..: 42.4% (3727 of 8788 functions) 00:08:49.520 00:08:49.520 00:08:49.520 ===================== 00:08:49.520 All unit tests passed 00:08:49.520 ===================== 00:08:49.520 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:08:49.520 11:20:24 unittest -- unit/unittest.sh@305 -- # set +x 00:08:49.520 00:08:49.520 00:08:49.520 00:08:49.520 real 3m46.564s 00:08:49.520 user 3m16.237s 00:08:49.520 sys 0m18.247s 00:08:49.520 11:20:24 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.520 11:20:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:49.520 ************************************ 00:08:49.520 END TEST unittest 00:08:49.520 ************************************ 00:08:49.778 11:20:24 -- common/autotest_common.sh@1142 -- # return 0 00:08:49.778 11:20:24 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:08:49.778 11:20:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:49.778 11:20:24 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:08:49.778 11:20:24 -- spdk/autotest.sh@162 -- # timing_enter lib 00:08:49.778 11:20:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.778 11:20:24 -- common/autotest_common.sh@10 -- # set +x 00:08:49.778 11:20:24 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:08:49.778 11:20:24 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:49.778 11:20:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:49.778 11:20:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.778 11:20:24 -- common/autotest_common.sh@10 -- # set +x 00:08:49.778 ************************************ 00:08:49.778 START TEST env 00:08:49.778 ************************************ 00:08:49.778 11:20:24 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:08:49.778 * Looking for test storage... 
00:08:49.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:08:49.778 11:20:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:49.778 11:20:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:49.778 11:20:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.778 11:20:24 env -- common/autotest_common.sh@10 -- # set +x 00:08:49.778 ************************************ 00:08:49.778 START TEST env_memory 00:08:49.778 ************************************ 00:08:49.778 11:20:24 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:08:49.778 00:08:49.778 00:08:49.778 CUnit - A unit testing framework for C - Version 2.1-3 00:08:49.778 http://cunit.sourceforge.net/ 00:08:49.778 00:08:49.778 00:08:49.778 Suite: memory 00:08:49.778 Test: alloc and free memory map ...[2024-07-13 11:20:24.438354] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:49.778 passed 00:08:49.778 Test: mem map translation ...[2024-07-13 11:20:24.485120] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:49.778 [2024-07-13 11:20:24.485223] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:49.778 [2024-07-13 11:20:24.485326] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:49.778 [2024-07-13 11:20:24.485399] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:50.035 passed 00:08:50.035 Test: mem map registration ...[2024-07-13 11:20:24.569148] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:08:50.035 [2024-07-13 11:20:24.569248] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:08:50.035 passed 00:08:50.035 Test: mem map adjacent registrations ...passed 00:08:50.035 00:08:50.035 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.035 suites 1 1 n/a 0 0 00:08:50.035 tests 4 4 4 0 0 00:08:50.035 asserts 152 152 152 0 n/a 00:08:50.035 00:08:50.035 Elapsed time = 0.287 seconds 00:08:50.035 00:08:50.035 real 0m0.324s 00:08:50.035 user 0m0.296s 00:08:50.035 sys 0m0.028s 00:08:50.035 11:20:24 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.035 ************************************ 00:08:50.035 END TEST env_memory 00:08:50.035 ************************************ 00:08:50.035 11:20:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:50.035 11:20:24 env -- common/autotest_common.sh@1142 -- # return 0 00:08:50.035 11:20:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:50.035 11:20:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:50.035 11:20:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.035 11:20:24 env -- common/autotest_common.sh@10 -- # set +x 00:08:50.035 ************************************ 00:08:50.035 START TEST env_vtophys 
00:08:50.035 ************************************ 00:08:50.035 11:20:24 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:08:50.293 EAL: lib.eal log level changed from notice to debug 00:08:50.293 EAL: Detected lcore 0 as core 0 on socket 0 00:08:50.293 EAL: Detected lcore 1 as core 0 on socket 0 00:08:50.293 EAL: Detected lcore 2 as core 0 on socket 0 00:08:50.293 EAL: Detected lcore 3 as core 0 on socket 0 00:08:50.293 EAL: Detected lcore 4 as core 0 on socket 0 00:08:50.293 EAL: Detected lcore 5 as core 0 on socket 0 00:08:50.293 EAL: Detected lcore 6 as core 0 on socket 0 00:08:50.293 EAL: Detected lcore 7 as core 0 on socket 0 00:08:50.293 EAL: Detected lcore 8 as core 0 on socket 0 00:08:50.293 EAL: Detected lcore 9 as core 0 on socket 0 00:08:50.293 EAL: Maximum logical cores by configuration: 128 00:08:50.293 EAL: Detected CPU lcores: 10 00:08:50.293 EAL: Detected NUMA nodes: 1 00:08:50.293 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:50.293 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:50.293 EAL: Checking presence of .so 'librte_eal.so' 00:08:50.293 EAL: Detected static linkage of DPDK 00:08:50.293 EAL: No shared files mode enabled, IPC will be disabled 00:08:50.293 EAL: Selected IOVA mode 'PA' 00:08:50.293 EAL: Probing VFIO support... 00:08:50.293 EAL: IOMMU type 1 (Type 1) is supported 00:08:50.293 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:50.293 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:50.293 EAL: VFIO support initialized 00:08:50.293 EAL: Ask a virtual area of 0x2e000 bytes 00:08:50.293 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:50.293 EAL: Setting up physically contiguous memory... 00:08:50.293 EAL: Setting maximum number of open files to 1048576 00:08:50.293 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:50.293 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:50.293 EAL: Ask a virtual area of 0x61000 bytes 00:08:50.293 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:50.293 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:50.293 EAL: Ask a virtual area of 0x400000000 bytes 00:08:50.293 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:50.293 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:50.293 EAL: Ask a virtual area of 0x61000 bytes 00:08:50.293 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:50.293 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:50.293 EAL: Ask a virtual area of 0x400000000 bytes 00:08:50.293 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:50.293 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:50.293 EAL: Ask a virtual area of 0x61000 bytes 00:08:50.293 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:50.293 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:50.293 EAL: Ask a virtual area of 0x400000000 bytes 00:08:50.293 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:50.293 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:50.293 EAL: Ask a virtual area of 0x61000 bytes 00:08:50.293 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:50.293 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:50.293 EAL: Ask a virtual area of 0x400000000 bytes 00:08:50.293 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:50.293 EAL: 
VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:50.293 EAL: Hugepages will be freed exactly as allocated. 00:08:50.293 EAL: No shared files mode enabled, IPC is disabled 00:08:50.293 EAL: No shared files mode enabled, IPC is disabled 00:08:50.293 EAL: TSC frequency is ~2200000 KHz 00:08:50.293 EAL: Main lcore 0 is ready (tid=7f792e9efa40;cpuset=[0]) 00:08:50.293 EAL: Trying to obtain current memory policy. 00:08:50.293 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:50.293 EAL: Restoring previous memory policy: 0 00:08:50.293 EAL: request: mp_malloc_sync 00:08:50.293 EAL: No shared files mode enabled, IPC is disabled 00:08:50.293 EAL: Heap on socket 0 was expanded by 2MB 00:08:50.293 EAL: No shared files mode enabled, IPC is disabled 00:08:50.293 EAL: Mem event callback 'spdk:(nil)' registered 00:08:50.293 00:08:50.293 00:08:50.293 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.294 http://cunit.sourceforge.net/ 00:08:50.294 00:08:50.294 00:08:50.294 Suite: components_suite 00:08:50.861 Test: vtophys_malloc_test ...passed 00:08:50.861 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:50.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:50.861 EAL: Restoring previous memory policy: 0 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was expanded by 4MB 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was shrunk by 4MB 00:08:50.861 EAL: Trying to obtain current memory policy. 00:08:50.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:50.861 EAL: Restoring previous memory policy: 0 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was expanded by 6MB 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was shrunk by 6MB 00:08:50.861 EAL: Trying to obtain current memory policy. 00:08:50.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:50.861 EAL: Restoring previous memory policy: 0 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was expanded by 10MB 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was shrunk by 10MB 00:08:50.861 EAL: Trying to obtain current memory policy. 
00:08:50.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:50.861 EAL: Restoring previous memory policy: 0 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was expanded by 18MB 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was shrunk by 18MB 00:08:50.861 EAL: Trying to obtain current memory policy. 00:08:50.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:50.861 EAL: Restoring previous memory policy: 0 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was expanded by 34MB 00:08:50.861 EAL: Calling mem event callback 'spdk:(nil)' 00:08:50.861 EAL: request: mp_malloc_sync 00:08:50.861 EAL: No shared files mode enabled, IPC is disabled 00:08:50.861 EAL: Heap on socket 0 was shrunk by 34MB 00:08:51.120 EAL: Trying to obtain current memory policy. 00:08:51.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.120 EAL: Restoring previous memory policy: 0 00:08:51.120 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.120 EAL: request: mp_malloc_sync 00:08:51.120 EAL: No shared files mode enabled, IPC is disabled 00:08:51.120 EAL: Heap on socket 0 was expanded by 66MB 00:08:51.120 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.120 EAL: request: mp_malloc_sync 00:08:51.120 EAL: No shared files mode enabled, IPC is disabled 00:08:51.120 EAL: Heap on socket 0 was shrunk by 66MB 00:08:51.120 EAL: Trying to obtain current memory policy. 00:08:51.120 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.378 EAL: Restoring previous memory policy: 0 00:08:51.378 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.378 EAL: request: mp_malloc_sync 00:08:51.378 EAL: No shared files mode enabled, IPC is disabled 00:08:51.378 EAL: Heap on socket 0 was expanded by 130MB 00:08:51.378 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.378 EAL: request: mp_malloc_sync 00:08:51.378 EAL: No shared files mode enabled, IPC is disabled 00:08:51.378 EAL: Heap on socket 0 was shrunk by 130MB 00:08:51.637 EAL: Trying to obtain current memory policy. 00:08:51.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.637 EAL: Restoring previous memory policy: 0 00:08:51.637 EAL: Calling mem event callback 'spdk:(nil)' 00:08:51.637 EAL: request: mp_malloc_sync 00:08:51.637 EAL: No shared files mode enabled, IPC is disabled 00:08:51.637 EAL: Heap on socket 0 was expanded by 258MB 00:08:52.205 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.205 EAL: request: mp_malloc_sync 00:08:52.205 EAL: No shared files mode enabled, IPC is disabled 00:08:52.205 EAL: Heap on socket 0 was shrunk by 258MB 00:08:52.463 EAL: Trying to obtain current memory policy. 
00:08:52.463 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.723 EAL: Restoring previous memory policy: 0 00:08:52.723 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.723 EAL: request: mp_malloc_sync 00:08:52.723 EAL: No shared files mode enabled, IPC is disabled 00:08:52.723 EAL: Heap on socket 0 was expanded by 514MB 00:08:53.666 EAL: Calling mem event callback 'spdk:(nil)' 00:08:53.666 EAL: request: mp_malloc_sync 00:08:53.666 EAL: No shared files mode enabled, IPC is disabled 00:08:53.666 EAL: Heap on socket 0 was shrunk by 514MB 00:08:54.230 EAL: Trying to obtain current memory policy. 00:08:54.230 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:54.489 EAL: Restoring previous memory policy: 0 00:08:54.489 EAL: Calling mem event callback 'spdk:(nil)' 00:08:54.489 EAL: request: mp_malloc_sync 00:08:54.489 EAL: No shared files mode enabled, IPC is disabled 00:08:54.489 EAL: Heap on socket 0 was expanded by 1026MB 00:08:55.863 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.121 EAL: request: mp_malloc_sync 00:08:56.121 EAL: No shared files mode enabled, IPC is disabled 00:08:56.121 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:57.497 passed 00:08:57.497 00:08:57.497 Run Summary: Type Total Ran Passed Failed Inactive 00:08:57.497 suites 1 1 n/a 0 0 00:08:57.497 tests 2 2 2 0 0 00:08:57.497 asserts 6545 6545 6545 0 n/a 00:08:57.497 00:08:57.497 Elapsed time = 6.849 seconds 00:08:57.497 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.497 EAL: request: mp_malloc_sync 00:08:57.497 EAL: No shared files mode enabled, IPC is disabled 00:08:57.497 EAL: Heap on socket 0 was shrunk by 2MB 00:08:57.497 EAL: No shared files mode enabled, IPC is disabled 00:08:57.497 EAL: No shared files mode enabled, IPC is disabled 00:08:57.497 EAL: No shared files mode enabled, IPC is disabled 00:08:57.497 00:08:57.497 real 0m7.162s 00:08:57.497 user 0m5.910s 00:08:57.497 sys 0m1.118s 00:08:57.497 11:20:31 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.497 11:20:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:57.497 ************************************ 00:08:57.497 END TEST env_vtophys 00:08:57.497 ************************************ 00:08:57.497 11:20:31 env -- common/autotest_common.sh@1142 -- # return 0 00:08:57.497 11:20:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:57.497 11:20:31 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:57.497 11:20:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.497 11:20:31 env -- common/autotest_common.sh@10 -- # set +x 00:08:57.497 ************************************ 00:08:57.497 START TEST env_pci 00:08:57.497 ************************************ 00:08:57.497 11:20:31 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:08:57.497 00:08:57.497 00:08:57.497 CUnit - A unit testing framework for C - Version 2.1-3 00:08:57.497 http://cunit.sourceforge.net/ 00:08:57.497 00:08:57.497 00:08:57.497 Suite: pci 00:08:57.497 Test: pci_hook ...[2024-07-13 11:20:32.004650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 110750 has claimed it 00:08:57.497 passed 00:08:57.497 00:08:57.497 EAL: Cannot find device (10000:00:01.0) 00:08:57.497 EAL: Failed to attach device on primary process 00:08:57.497 Run Summary: Type Total Ran Passed Failed 
Inactive 00:08:57.497 suites 1 1 n/a 0 0 00:08:57.497 tests 1 1 1 0 0 00:08:57.497 asserts 25 25 25 0 n/a 00:08:57.497 00:08:57.497 Elapsed time = 0.007 seconds 00:08:57.497 00:08:57.497 real 0m0.092s 00:08:57.497 user 0m0.056s 00:08:57.497 sys 0m0.036s 00:08:57.497 11:20:32 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.497 ************************************ 00:08:57.497 END TEST env_pci 00:08:57.497 ************************************ 00:08:57.497 11:20:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:57.497 11:20:32 env -- common/autotest_common.sh@1142 -- # return 0 00:08:57.497 11:20:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:57.497 11:20:32 env -- env/env.sh@15 -- # uname 00:08:57.497 11:20:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:57.497 11:20:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:57.497 11:20:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:57.497 11:20:32 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:57.497 11:20:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.497 11:20:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:57.497 ************************************ 00:08:57.497 START TEST env_dpdk_post_init 00:08:57.497 ************************************ 00:08:57.497 11:20:32 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:57.497 EAL: Detected CPU lcores: 10 00:08:57.497 EAL: Detected NUMA nodes: 1 00:08:57.497 EAL: Detected static linkage of DPDK 00:08:57.497 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:57.497 EAL: Selected IOVA mode 'PA' 00:08:57.497 EAL: VFIO support initialized 00:08:57.755 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:57.755 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:08:57.755 Starting DPDK initialization... 00:08:57.755 Starting SPDK post initialization... 00:08:57.755 SPDK NVMe probe 00:08:57.755 Attaching to 0000:00:10.0 00:08:57.755 Attached to 0000:00:10.0 00:08:57.755 Cleaning up... 
00:08:57.755 00:08:57.755 real 0m0.277s 00:08:57.755 user 0m0.068s 00:08:57.755 sys 0m0.110s 00:08:57.755 11:20:32 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.755 11:20:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:57.755 ************************************ 00:08:57.755 END TEST env_dpdk_post_init 00:08:57.755 ************************************ 00:08:57.755 11:20:32 env -- common/autotest_common.sh@1142 -- # return 0 00:08:57.755 11:20:32 env -- env/env.sh@26 -- # uname 00:08:57.755 11:20:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:57.756 11:20:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:57.756 11:20:32 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:57.756 11:20:32 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.756 11:20:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:57.756 ************************************ 00:08:57.756 START TEST env_mem_callbacks 00:08:57.756 ************************************ 00:08:57.756 11:20:32 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:08:57.756 EAL: Detected CPU lcores: 10 00:08:57.756 EAL: Detected NUMA nodes: 1 00:08:57.756 EAL: Detected static linkage of DPDK 00:08:58.015 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:58.015 EAL: Selected IOVA mode 'PA' 00:08:58.015 EAL: VFIO support initialized 00:08:58.015 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:58.015 00:08:58.015 00:08:58.015 CUnit - A unit testing framework for C - Version 2.1-3 00:08:58.015 http://cunit.sourceforge.net/ 00:08:58.015 00:08:58.015 00:08:58.015 Suite: memory 00:08:58.015 Test: test ... 
00:08:58.015 register 0x200000200000 2097152 00:08:58.015 malloc 3145728 00:08:58.015 register 0x200000400000 4194304 00:08:58.015 buf 0x2000004fffc0 len 3145728 PASSED 00:08:58.015 malloc 64 00:08:58.015 buf 0x2000004ffec0 len 64 PASSED 00:08:58.015 malloc 4194304 00:08:58.015 register 0x200000800000 6291456 00:08:58.015 buf 0x2000009fffc0 len 4194304 PASSED 00:08:58.015 free 0x2000004fffc0 3145728 00:08:58.015 free 0x2000004ffec0 64 00:08:58.015 unregister 0x200000400000 4194304 PASSED 00:08:58.015 free 0x2000009fffc0 4194304 00:08:58.015 unregister 0x200000800000 6291456 PASSED 00:08:58.015 malloc 8388608 00:08:58.015 register 0x200000400000 10485760 00:08:58.015 buf 0x2000005fffc0 len 8388608 PASSED 00:08:58.015 free 0x2000005fffc0 8388608 00:08:58.015 unregister 0x200000400000 10485760 PASSED 00:08:58.015 passed 00:08:58.015 00:08:58.015 Run Summary: Type Total Ran Passed Failed Inactive 00:08:58.015 suites 1 1 n/a 0 0 00:08:58.015 tests 1 1 1 0 0 00:08:58.015 asserts 15 15 15 0 n/a 00:08:58.015 00:08:58.015 Elapsed time = 0.045 seconds 00:08:58.015 00:08:58.015 real 0m0.272s 00:08:58.015 user 0m0.089s 00:08:58.015 sys 0m0.084s 00:08:58.015 11:20:32 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.015 11:20:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:58.015 ************************************ 00:08:58.015 END TEST env_mem_callbacks 00:08:58.015 ************************************ 00:08:58.015 11:20:32 env -- common/autotest_common.sh@1142 -- # return 0 00:08:58.015 00:08:58.015 real 0m8.462s 00:08:58.015 user 0m6.584s 00:08:58.015 sys 0m1.526s 00:08:58.015 ************************************ 00:08:58.015 END TEST env 00:08:58.015 ************************************ 00:08:58.015 11:20:32 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.015 11:20:32 env -- common/autotest_common.sh@10 -- # set +x 00:08:58.289 11:20:32 -- common/autotest_common.sh@1142 -- # return 0 00:08:58.290 11:20:32 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:58.290 11:20:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.290 11:20:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.290 11:20:32 -- common/autotest_common.sh@10 -- # set +x 00:08:58.290 ************************************ 00:08:58.290 START TEST rpc 00:08:58.290 ************************************ 00:08:58.290 11:20:32 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:08:58.290 * Looking for test storage... 00:08:58.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:08:58.290 11:20:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=110879 00:08:58.290 11:20:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:58.290 11:20:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:08:58.290 11:20:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 110879 00:08:58.290 11:20:32 rpc -- common/autotest_common.sh@829 -- # '[' -z 110879 ']' 00:08:58.290 11:20:32 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.290 11:20:32 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.290 11:20:32 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:58.290 11:20:32 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.290 11:20:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.290 [2024-07-13 11:20:32.975042] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:58.290 [2024-07-13 11:20:32.976029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110879 ] 00:08:58.558 [2024-07-13 11:20:33.132028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.817 [2024-07-13 11:20:33.320874] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:58.817 [2024-07-13 11:20:33.321184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 110879' to capture a snapshot of events at runtime. 00:08:58.817 [2024-07-13 11:20:33.321431] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.817 [2024-07-13 11:20:33.321564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.817 [2024-07-13 11:20:33.321612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid110879 for offline analysis/debug. 00:08:58.817 [2024-07-13 11:20:33.321805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.383 11:20:34 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.383 11:20:34 rpc -- common/autotest_common.sh@862 -- # return 0 00:08:59.383 11:20:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:59.383 11:20:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:08:59.383 11:20:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:59.383 11:20:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:59.383 11:20:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:59.383 11:20:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.383 11:20:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.383 ************************************ 00:08:59.383 START TEST rpc_integrity 00:08:59.383 ************************************ 00:08:59.383 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:08:59.383 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:59.383 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.383 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.383 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.383 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:59.383 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.641 
11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:59.641 { 00:08:59.641 "name": "Malloc0", 00:08:59.641 "aliases": [ 00:08:59.641 "7a209dc6-ad52-4eca-88fc-40f0cc4749dd" 00:08:59.641 ], 00:08:59.641 "product_name": "Malloc disk", 00:08:59.641 "block_size": 512, 00:08:59.641 "num_blocks": 16384, 00:08:59.641 "uuid": "7a209dc6-ad52-4eca-88fc-40f0cc4749dd", 00:08:59.641 "assigned_rate_limits": { 00:08:59.641 "rw_ios_per_sec": 0, 00:08:59.641 "rw_mbytes_per_sec": 0, 00:08:59.641 "r_mbytes_per_sec": 0, 00:08:59.641 "w_mbytes_per_sec": 0 00:08:59.641 }, 00:08:59.641 "claimed": false, 00:08:59.641 "zoned": false, 00:08:59.641 "supported_io_types": { 00:08:59.641 "read": true, 00:08:59.641 "write": true, 00:08:59.641 "unmap": true, 00:08:59.641 "flush": true, 00:08:59.641 "reset": true, 00:08:59.641 "nvme_admin": false, 00:08:59.641 "nvme_io": false, 00:08:59.641 "nvme_io_md": false, 00:08:59.641 "write_zeroes": true, 00:08:59.641 "zcopy": true, 00:08:59.641 "get_zone_info": false, 00:08:59.641 "zone_management": false, 00:08:59.641 "zone_append": false, 00:08:59.641 "compare": false, 00:08:59.641 "compare_and_write": false, 00:08:59.641 "abort": true, 00:08:59.641 "seek_hole": false, 00:08:59.641 "seek_data": false, 00:08:59.641 "copy": true, 00:08:59.641 "nvme_iov_md": false 00:08:59.641 }, 00:08:59.641 "memory_domains": [ 00:08:59.641 { 00:08:59.641 "dma_device_id": "system", 00:08:59.641 "dma_device_type": 1 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.641 "dma_device_type": 2 00:08:59.641 } 00:08:59.641 ], 00:08:59.641 "driver_specific": {} 00:08:59.641 } 00:08:59.641 ]' 00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.641 [2024-07-13 11:20:34.255311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:59.641 [2024-07-13 11:20:34.255523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.641 [2024-07-13 11:20:34.255627] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:59.641 [2024-07-13 11:20:34.255837] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.641 [2024-07-13 11:20:34.258045] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.641 [2024-07-13 11:20:34.258187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:59.641 Passthru0 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.641 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.641 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:59.641 { 00:08:59.641 "name": "Malloc0", 00:08:59.641 "aliases": [ 00:08:59.641 "7a209dc6-ad52-4eca-88fc-40f0cc4749dd" 00:08:59.641 ], 00:08:59.641 "product_name": "Malloc disk", 00:08:59.641 "block_size": 512, 00:08:59.641 "num_blocks": 16384, 00:08:59.641 "uuid": "7a209dc6-ad52-4eca-88fc-40f0cc4749dd", 00:08:59.641 "assigned_rate_limits": { 00:08:59.641 "rw_ios_per_sec": 0, 00:08:59.641 "rw_mbytes_per_sec": 0, 00:08:59.641 "r_mbytes_per_sec": 0, 00:08:59.641 "w_mbytes_per_sec": 0 00:08:59.641 }, 00:08:59.641 "claimed": true, 00:08:59.641 "claim_type": "exclusive_write", 00:08:59.641 "zoned": false, 00:08:59.641 "supported_io_types": { 00:08:59.641 "read": true, 00:08:59.641 "write": true, 00:08:59.641 "unmap": true, 00:08:59.641 "flush": true, 00:08:59.641 "reset": true, 00:08:59.641 "nvme_admin": false, 00:08:59.641 "nvme_io": false, 00:08:59.641 "nvme_io_md": false, 00:08:59.641 "write_zeroes": true, 00:08:59.641 "zcopy": true, 00:08:59.641 "get_zone_info": false, 00:08:59.641 "zone_management": false, 00:08:59.641 "zone_append": false, 00:08:59.641 "compare": false, 00:08:59.641 "compare_and_write": false, 00:08:59.641 "abort": true, 00:08:59.641 "seek_hole": false, 00:08:59.641 "seek_data": false, 00:08:59.641 "copy": true, 00:08:59.641 "nvme_iov_md": false 00:08:59.641 }, 00:08:59.641 "memory_domains": [ 00:08:59.641 { 00:08:59.641 "dma_device_id": "system", 00:08:59.641 "dma_device_type": 1 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.641 "dma_device_type": 2 00:08:59.641 } 00:08:59.641 ], 00:08:59.641 "driver_specific": {} 00:08:59.641 }, 00:08:59.641 { 00:08:59.641 "name": "Passthru0", 00:08:59.641 "aliases": [ 00:08:59.641 "1e1d0a03-95a6-545c-a6db-39d49cc6ddb7" 00:08:59.641 ], 00:08:59.641 "product_name": "passthru", 00:08:59.641 "block_size": 512, 00:08:59.641 "num_blocks": 16384, 00:08:59.641 "uuid": "1e1d0a03-95a6-545c-a6db-39d49cc6ddb7", 00:08:59.641 "assigned_rate_limits": { 00:08:59.641 "rw_ios_per_sec": 0, 00:08:59.641 "rw_mbytes_per_sec": 0, 00:08:59.641 "r_mbytes_per_sec": 0, 00:08:59.641 "w_mbytes_per_sec": 0 00:08:59.641 }, 00:08:59.641 "claimed": false, 00:08:59.641 "zoned": false, 00:08:59.641 "supported_io_types": { 00:08:59.641 "read": true, 00:08:59.641 "write": true, 00:08:59.641 "unmap": true, 00:08:59.641 "flush": true, 00:08:59.641 "reset": true, 00:08:59.641 "nvme_admin": false, 00:08:59.641 "nvme_io": false, 00:08:59.641 "nvme_io_md": false, 00:08:59.641 "write_zeroes": true, 00:08:59.641 "zcopy": true, 00:08:59.641 "get_zone_info": false, 00:08:59.641 "zone_management": false, 00:08:59.641 "zone_append": false, 00:08:59.641 "compare": false, 00:08:59.641 "compare_and_write": false, 00:08:59.641 "abort": true, 00:08:59.641 "seek_hole": false, 00:08:59.641 "seek_data": false, 00:08:59.641 "copy": true, 00:08:59.641 "nvme_iov_md": false 00:08:59.641 }, 00:08:59.642 "memory_domains": [ 00:08:59.642 { 00:08:59.642 "dma_device_id": "system", 00:08:59.642 "dma_device_type": 1 00:08:59.642 }, 00:08:59.642 { 00:08:59.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.642 "dma_device_type": 
2 00:08:59.642 } 00:08:59.642 ], 00:08:59.642 "driver_specific": { 00:08:59.642 "passthru": { 00:08:59.642 "name": "Passthru0", 00:08:59.642 "base_bdev_name": "Malloc0" 00:08:59.642 } 00:08:59.642 } 00:08:59.642 } 00:08:59.642 ]' 00:08:59.642 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:59.642 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:59.642 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:59.642 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.642 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.642 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.642 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:59.642 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.642 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.642 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.642 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:59.642 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.642 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.642 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.642 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:59.642 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:59.901 11:20:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:59.901 ************************************ 00:08:59.901 END TEST rpc_integrity 00:08:59.901 ************************************ 00:08:59.901 00:08:59.901 real 0m0.343s 00:08:59.901 user 0m0.227s 00:08:59.901 sys 0m0.030s 00:08:59.901 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.901 11:20:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 11:20:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:59.901 11:20:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:59.901 11:20:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:59.901 11:20:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.901 11:20:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 ************************************ 00:08:59.901 START TEST rpc_plugins 00:08:59.901 ************************************ 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:08:59.901 { 00:08:59.901 "name": "Malloc1", 00:08:59.901 "aliases": [ 00:08:59.901 "bdf21fc5-798d-4b05-8eb2-cda9305f4caf" 00:08:59.901 ], 00:08:59.901 "product_name": "Malloc disk", 00:08:59.901 "block_size": 4096, 00:08:59.901 "num_blocks": 256, 00:08:59.901 "uuid": "bdf21fc5-798d-4b05-8eb2-cda9305f4caf", 00:08:59.901 "assigned_rate_limits": { 00:08:59.901 "rw_ios_per_sec": 0, 00:08:59.901 "rw_mbytes_per_sec": 0, 00:08:59.901 "r_mbytes_per_sec": 0, 00:08:59.901 "w_mbytes_per_sec": 0 00:08:59.901 }, 00:08:59.901 "claimed": false, 00:08:59.901 "zoned": false, 00:08:59.901 "supported_io_types": { 00:08:59.901 "read": true, 00:08:59.901 "write": true, 00:08:59.901 "unmap": true, 00:08:59.901 "flush": true, 00:08:59.901 "reset": true, 00:08:59.901 "nvme_admin": false, 00:08:59.901 "nvme_io": false, 00:08:59.901 "nvme_io_md": false, 00:08:59.901 "write_zeroes": true, 00:08:59.901 "zcopy": true, 00:08:59.901 "get_zone_info": false, 00:08:59.901 "zone_management": false, 00:08:59.901 "zone_append": false, 00:08:59.901 "compare": false, 00:08:59.901 "compare_and_write": false, 00:08:59.901 "abort": true, 00:08:59.901 "seek_hole": false, 00:08:59.901 "seek_data": false, 00:08:59.901 "copy": true, 00:08:59.901 "nvme_iov_md": false 00:08:59.901 }, 00:08:59.901 "memory_domains": [ 00:08:59.901 { 00:08:59.901 "dma_device_id": "system", 00:08:59.901 "dma_device_type": 1 00:08:59.901 }, 00:08:59.901 { 00:08:59.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.901 "dma_device_type": 2 00:08:59.901 } 00:08:59.901 ], 00:08:59.901 "driver_specific": {} 00:08:59.901 } 00:08:59.901 ]' 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:59.901 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:59.901 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:00.160 11:20:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:00.160 00:09:00.160 real 0m0.165s 00:09:00.160 user 0m0.113s 00:09:00.160 sys 0m0.014s 00:09:00.160 ************************************ 00:09:00.160 END TEST rpc_plugins 00:09:00.160 ************************************ 00:09:00.160 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.160 11:20:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:00.160 11:20:34 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:00.160 11:20:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:00.160 11:20:34 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:00.160 11:20:34 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.160 11:20:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.160 ************************************ 00:09:00.160 
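The rpc_plugins run above drives the same create/delete flow through rpc.py's plugin loader rather than built-in methods. Roughly, with test/rpc_plugins on PYTHONPATH as exported earlier (a sketch, not the test's exact wrapper):

  export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py       # assumed client path
  MALLOC=$($RPC --plugin rpc_plugin create_malloc)      # method supplied by the plugin module; yields Malloc1 above
  $RPC --plugin rpc_plugin delete_malloc "$MALLOC"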
START TEST rpc_trace_cmd_test 00:09:00.160 ************************************ 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:00.160 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid110879", 00:09:00.160 "tpoint_group_mask": "0x8", 00:09:00.160 "iscsi_conn": { 00:09:00.160 "mask": "0x2", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "scsi": { 00:09:00.160 "mask": "0x4", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "bdev": { 00:09:00.160 "mask": "0x8", 00:09:00.160 "tpoint_mask": "0xffffffffffffffff" 00:09:00.160 }, 00:09:00.160 "nvmf_rdma": { 00:09:00.160 "mask": "0x10", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "nvmf_tcp": { 00:09:00.160 "mask": "0x20", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "ftl": { 00:09:00.160 "mask": "0x40", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "blobfs": { 00:09:00.160 "mask": "0x80", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "dsa": { 00:09:00.160 "mask": "0x200", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "thread": { 00:09:00.160 "mask": "0x400", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "nvme_pcie": { 00:09:00.160 "mask": "0x800", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "iaa": { 00:09:00.160 "mask": "0x1000", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "nvme_tcp": { 00:09:00.160 "mask": "0x2000", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "bdev_nvme": { 00:09:00.160 "mask": "0x4000", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 }, 00:09:00.160 "sock": { 00:09:00.160 "mask": "0x8000", 00:09:00.160 "tpoint_mask": "0x0" 00:09:00.160 } 00:09:00.160 }' 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:00.160 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:00.419 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:00.419 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:00.419 11:20:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:00.419 ************************************ 00:09:00.419 END TEST rpc_trace_cmd_test 00:09:00.419 ************************************ 00:09:00.419 00:09:00.419 real 0m0.293s 00:09:00.419 user 0m0.276s 00:09:00.419 sys 0m0.012s 00:09:00.419 11:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.419 
11:20:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.419 11:20:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:00.419 11:20:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:00.419 11:20:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:00.419 11:20:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:00.419 11:20:35 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:00.419 11:20:35 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.419 11:20:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.419 ************************************ 00:09:00.419 START TEST rpc_daemon_integrity 00:09:00.419 ************************************ 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.419 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:00.419 { 00:09:00.419 "name": "Malloc2", 00:09:00.419 "aliases": [ 00:09:00.419 "b060fd68-01f3-488e-a993-399cef2cde08" 00:09:00.419 ], 00:09:00.419 "product_name": "Malloc disk", 00:09:00.419 "block_size": 512, 00:09:00.419 "num_blocks": 16384, 00:09:00.419 "uuid": "b060fd68-01f3-488e-a993-399cef2cde08", 00:09:00.419 "assigned_rate_limits": { 00:09:00.419 "rw_ios_per_sec": 0, 00:09:00.419 "rw_mbytes_per_sec": 0, 00:09:00.419 "r_mbytes_per_sec": 0, 00:09:00.419 "w_mbytes_per_sec": 0 00:09:00.419 }, 00:09:00.419 "claimed": false, 00:09:00.419 "zoned": false, 00:09:00.419 "supported_io_types": { 00:09:00.419 "read": true, 00:09:00.419 "write": true, 00:09:00.419 "unmap": true, 00:09:00.419 "flush": true, 00:09:00.419 "reset": true, 00:09:00.419 "nvme_admin": false, 00:09:00.419 "nvme_io": false, 00:09:00.419 "nvme_io_md": false, 00:09:00.419 "write_zeroes": true, 00:09:00.419 "zcopy": true, 00:09:00.419 "get_zone_info": false, 00:09:00.419 "zone_management": false, 00:09:00.420 "zone_append": false, 00:09:00.420 "compare": false, 00:09:00.420 "compare_and_write": false, 00:09:00.420 "abort": true, 00:09:00.420 "seek_hole": false, 
00:09:00.420 "seek_data": false, 00:09:00.420 "copy": true, 00:09:00.420 "nvme_iov_md": false 00:09:00.420 }, 00:09:00.420 "memory_domains": [ 00:09:00.420 { 00:09:00.420 "dma_device_id": "system", 00:09:00.420 "dma_device_type": 1 00:09:00.420 }, 00:09:00.420 { 00:09:00.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.420 "dma_device_type": 2 00:09:00.420 } 00:09:00.420 ], 00:09:00.420 "driver_specific": {} 00:09:00.420 } 00:09:00.420 ]' 00:09:00.420 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:00.678 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.679 [2024-07-13 11:20:35.203059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:00.679 [2024-07-13 11:20:35.203290] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.679 [2024-07-13 11:20:35.203484] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:00.679 [2024-07-13 11:20:35.203590] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.679 [2024-07-13 11:20:35.206104] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.679 [2024-07-13 11:20:35.206245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:00.679 Passthru0 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:00.679 { 00:09:00.679 "name": "Malloc2", 00:09:00.679 "aliases": [ 00:09:00.679 "b060fd68-01f3-488e-a993-399cef2cde08" 00:09:00.679 ], 00:09:00.679 "product_name": "Malloc disk", 00:09:00.679 "block_size": 512, 00:09:00.679 "num_blocks": 16384, 00:09:00.679 "uuid": "b060fd68-01f3-488e-a993-399cef2cde08", 00:09:00.679 "assigned_rate_limits": { 00:09:00.679 "rw_ios_per_sec": 0, 00:09:00.679 "rw_mbytes_per_sec": 0, 00:09:00.679 "r_mbytes_per_sec": 0, 00:09:00.679 "w_mbytes_per_sec": 0 00:09:00.679 }, 00:09:00.679 "claimed": true, 00:09:00.679 "claim_type": "exclusive_write", 00:09:00.679 "zoned": false, 00:09:00.679 "supported_io_types": { 00:09:00.679 "read": true, 00:09:00.679 "write": true, 00:09:00.679 "unmap": true, 00:09:00.679 "flush": true, 00:09:00.679 "reset": true, 00:09:00.679 "nvme_admin": false, 00:09:00.679 "nvme_io": false, 00:09:00.679 "nvme_io_md": false, 00:09:00.679 "write_zeroes": true, 00:09:00.679 "zcopy": true, 00:09:00.679 "get_zone_info": false, 00:09:00.679 "zone_management": false, 00:09:00.679 "zone_append": false, 00:09:00.679 "compare": false, 00:09:00.679 "compare_and_write": false, 00:09:00.679 "abort": true, 00:09:00.679 "seek_hole": false, 00:09:00.679 "seek_data": false, 00:09:00.679 "copy": true, 00:09:00.679 "nvme_iov_md": false 00:09:00.679 }, 00:09:00.679 
"memory_domains": [ 00:09:00.679 { 00:09:00.679 "dma_device_id": "system", 00:09:00.679 "dma_device_type": 1 00:09:00.679 }, 00:09:00.679 { 00:09:00.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.679 "dma_device_type": 2 00:09:00.679 } 00:09:00.679 ], 00:09:00.679 "driver_specific": {} 00:09:00.679 }, 00:09:00.679 { 00:09:00.679 "name": "Passthru0", 00:09:00.679 "aliases": [ 00:09:00.679 "64b36e68-3a73-568d-813c-d30e07769168" 00:09:00.679 ], 00:09:00.679 "product_name": "passthru", 00:09:00.679 "block_size": 512, 00:09:00.679 "num_blocks": 16384, 00:09:00.679 "uuid": "64b36e68-3a73-568d-813c-d30e07769168", 00:09:00.679 "assigned_rate_limits": { 00:09:00.679 "rw_ios_per_sec": 0, 00:09:00.679 "rw_mbytes_per_sec": 0, 00:09:00.679 "r_mbytes_per_sec": 0, 00:09:00.679 "w_mbytes_per_sec": 0 00:09:00.679 }, 00:09:00.679 "claimed": false, 00:09:00.679 "zoned": false, 00:09:00.679 "supported_io_types": { 00:09:00.679 "read": true, 00:09:00.679 "write": true, 00:09:00.679 "unmap": true, 00:09:00.679 "flush": true, 00:09:00.679 "reset": true, 00:09:00.679 "nvme_admin": false, 00:09:00.679 "nvme_io": false, 00:09:00.679 "nvme_io_md": false, 00:09:00.679 "write_zeroes": true, 00:09:00.679 "zcopy": true, 00:09:00.679 "get_zone_info": false, 00:09:00.679 "zone_management": false, 00:09:00.679 "zone_append": false, 00:09:00.679 "compare": false, 00:09:00.679 "compare_and_write": false, 00:09:00.679 "abort": true, 00:09:00.679 "seek_hole": false, 00:09:00.679 "seek_data": false, 00:09:00.679 "copy": true, 00:09:00.679 "nvme_iov_md": false 00:09:00.679 }, 00:09:00.679 "memory_domains": [ 00:09:00.679 { 00:09:00.679 "dma_device_id": "system", 00:09:00.679 "dma_device_type": 1 00:09:00.679 }, 00:09:00.679 { 00:09:00.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.679 "dma_device_type": 2 00:09:00.679 } 00:09:00.679 ], 00:09:00.679 "driver_specific": { 00:09:00.679 "passthru": { 00:09:00.679 "name": "Passthru0", 00:09:00.679 "base_bdev_name": "Malloc2" 00:09:00.679 } 00:09:00.679 } 00:09:00.679 } 00:09:00.679 ]' 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:00.679 
11:20:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:00.679 ************************************ 00:09:00.679 END TEST rpc_daemon_integrity 00:09:00.679 ************************************ 00:09:00.679 00:09:00.679 real 0m0.336s 00:09:00.679 user 0m0.225s 00:09:00.679 sys 0m0.032s 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.679 11:20:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.679 11:20:35 rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:00.680 11:20:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:00.680 11:20:35 rpc -- rpc/rpc.sh@84 -- # killprocess 110879 00:09:00.680 11:20:35 rpc -- common/autotest_common.sh@948 -- # '[' -z 110879 ']' 00:09:00.680 11:20:35 rpc -- common/autotest_common.sh@952 -- # kill -0 110879 00:09:00.680 11:20:35 rpc -- common/autotest_common.sh@953 -- # uname 00:09:00.680 11:20:35 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:00.680 11:20:35 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 110879 00:09:00.938 11:20:35 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:00.938 killing process with pid 110879 00:09:00.938 11:20:35 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:00.938 11:20:35 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 110879' 00:09:00.938 11:20:35 rpc -- common/autotest_common.sh@967 -- # kill 110879 00:09:00.938 11:20:35 rpc -- common/autotest_common.sh@972 -- # wait 110879 00:09:02.843 00:09:02.843 real 0m4.602s 00:09:02.843 user 0m5.306s 00:09:02.843 sys 0m0.805s 00:09:02.843 11:20:37 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.843 ************************************ 00:09:02.843 END TEST rpc 00:09:02.843 ************************************ 00:09:02.843 11:20:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.843 11:20:37 -- common/autotest_common.sh@1142 -- # return 0 00:09:02.843 11:20:37 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:02.843 11:20:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:02.843 11:20:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.843 11:20:37 -- common/autotest_common.sh@10 -- # set +x 00:09:02.843 ************************************ 00:09:02.843 START TEST skip_rpc 00:09:02.843 ************************************ 00:09:02.843 11:20:37 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:02.843 * Looking for test storage... 
00:09:02.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:02.843 11:20:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:02.843 11:20:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:02.843 11:20:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:02.843 11:20:37 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:02.843 11:20:37 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.843 11:20:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.843 ************************************ 00:09:02.843 START TEST skip_rpc 00:09:02.843 ************************************ 00:09:02.843 11:20:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:09:02.843 11:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=111135 00:09:02.843 11:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:02.843 11:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:02.843 11:20:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:03.101 [2024-07-13 11:20:37.632787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:03.101 [2024-07-13 11:20:37.633258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111135 ] 00:09:03.101 [2024-07-13 11:20:37.800708] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.360 [2024-07-13 11:20:37.988051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 111135 
00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 111135 ']' 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 111135 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111135 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111135' 00:09:08.626 killing process with pid 111135 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 111135 00:09:08.626 11:20:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 111135 00:09:10.001 ************************************ 00:09:10.001 END TEST skip_rpc 00:09:10.001 ************************************ 00:09:10.001 00:09:10.001 real 0m7.084s 00:09:10.001 user 0m6.506s 00:09:10.001 sys 0m0.479s 00:09:10.001 11:20:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.001 11:20:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 11:20:44 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:10.001 11:20:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:10.001 11:20:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:10.001 11:20:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.001 11:20:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 ************************************ 00:09:10.001 START TEST skip_rpc_with_json 00:09:10.001 ************************************ 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=111260 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 111260 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 111260 ']' 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
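waitforlisten above blocks until the freshly launched target answers on /var/tmp/spdk.sock. A rough shell equivalent (not the helper's actual implementation) is to poll a cheap RPC such as spdk_get_version until it succeeds:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$RPC" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.2   # target still initializing; retry until the RPC socket answers
  done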
00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.001 11:20:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:10.260 [2024-07-13 11:20:44.776354] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:10.260 [2024-07-13 11:20:44.776624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111260 ] 00:09:10.260 [2024-07-13 11:20:44.948185] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.518 [2024-07-13 11:20:45.160311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.455 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:11.455 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:09:11.455 11:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:11.455 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.455 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:11.455 [2024-07-13 11:20:45.939718] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:11.455 request: 00:09:11.455 { 00:09:11.455 "trtype": "tcp", 00:09:11.455 "method": "nvmf_get_transports", 00:09:11.455 "req_id": 1 00:09:11.455 } 00:09:11.455 Got JSON-RPC error response 00:09:11.455 response: 00:09:11.455 { 00:09:11.455 "code": -19, 00:09:11.455 "message": "No such device" 00:09:11.455 } 00:09:11.455 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:11.455 11:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:11.456 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.456 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:11.456 [2024-07-13 11:20:45.951797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:11.456 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.456 11:20:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:11.456 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.456 11:20:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:11.456 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.456 11:20:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:11.456 { 00:09:11.456 "subsystems": [ 00:09:11.456 { 00:09:11.456 "subsystem": "scheduler", 00:09:11.456 "config": [ 00:09:11.456 { 00:09:11.456 "method": "framework_set_scheduler", 00:09:11.456 "params": { 00:09:11.456 "name": "static" 00:09:11.456 } 00:09:11.456 } 00:09:11.456 ] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "vmd", 00:09:11.456 "config": [] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "sock", 00:09:11.456 "config": [ 00:09:11.456 { 00:09:11.456 "method": "sock_set_default_impl", 00:09:11.456 "params": { 00:09:11.456 "impl_name": "posix" 00:09:11.456 } 00:09:11.456 
}, 00:09:11.456 { 00:09:11.456 "method": "sock_impl_set_options", 00:09:11.456 "params": { 00:09:11.456 "impl_name": "ssl", 00:09:11.456 "recv_buf_size": 4096, 00:09:11.456 "send_buf_size": 4096, 00:09:11.456 "enable_recv_pipe": true, 00:09:11.456 "enable_quickack": false, 00:09:11.456 "enable_placement_id": 0, 00:09:11.456 "enable_zerocopy_send_server": true, 00:09:11.456 "enable_zerocopy_send_client": false, 00:09:11.456 "zerocopy_threshold": 0, 00:09:11.456 "tls_version": 0, 00:09:11.456 "enable_ktls": false 00:09:11.456 } 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "method": "sock_impl_set_options", 00:09:11.456 "params": { 00:09:11.456 "impl_name": "posix", 00:09:11.456 "recv_buf_size": 2097152, 00:09:11.456 "send_buf_size": 2097152, 00:09:11.456 "enable_recv_pipe": true, 00:09:11.456 "enable_quickack": false, 00:09:11.456 "enable_placement_id": 0, 00:09:11.456 "enable_zerocopy_send_server": true, 00:09:11.456 "enable_zerocopy_send_client": false, 00:09:11.456 "zerocopy_threshold": 0, 00:09:11.456 "tls_version": 0, 00:09:11.456 "enable_ktls": false 00:09:11.456 } 00:09:11.456 } 00:09:11.456 ] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "iobuf", 00:09:11.456 "config": [ 00:09:11.456 { 00:09:11.456 "method": "iobuf_set_options", 00:09:11.456 "params": { 00:09:11.456 "small_pool_count": 8192, 00:09:11.456 "large_pool_count": 1024, 00:09:11.456 "small_bufsize": 8192, 00:09:11.456 "large_bufsize": 135168 00:09:11.456 } 00:09:11.456 } 00:09:11.456 ] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "keyring", 00:09:11.456 "config": [] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "accel", 00:09:11.456 "config": [ 00:09:11.456 { 00:09:11.456 "method": "accel_set_options", 00:09:11.456 "params": { 00:09:11.456 "small_cache_size": 128, 00:09:11.456 "large_cache_size": 16, 00:09:11.456 "task_count": 2048, 00:09:11.456 "sequence_count": 2048, 00:09:11.456 "buf_count": 2048 00:09:11.456 } 00:09:11.456 } 00:09:11.456 ] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "bdev", 00:09:11.456 "config": [ 00:09:11.456 { 00:09:11.456 "method": "bdev_set_options", 00:09:11.456 "params": { 00:09:11.456 "bdev_io_pool_size": 65535, 00:09:11.456 "bdev_io_cache_size": 256, 00:09:11.456 "bdev_auto_examine": true, 00:09:11.456 "iobuf_small_cache_size": 128, 00:09:11.456 "iobuf_large_cache_size": 16 00:09:11.456 } 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "method": "bdev_raid_set_options", 00:09:11.456 "params": { 00:09:11.456 "process_window_size_kb": 1024 00:09:11.456 } 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "method": "bdev_nvme_set_options", 00:09:11.456 "params": { 00:09:11.456 "action_on_timeout": "none", 00:09:11.456 "timeout_us": 0, 00:09:11.456 "timeout_admin_us": 0, 00:09:11.456 "keep_alive_timeout_ms": 10000, 00:09:11.456 "arbitration_burst": 0, 00:09:11.456 "low_priority_weight": 0, 00:09:11.456 "medium_priority_weight": 0, 00:09:11.456 "high_priority_weight": 0, 00:09:11.456 "nvme_adminq_poll_period_us": 10000, 00:09:11.456 "nvme_ioq_poll_period_us": 0, 00:09:11.456 "io_queue_requests": 0, 00:09:11.456 "delay_cmd_submit": true, 00:09:11.456 "transport_retry_count": 4, 00:09:11.456 "bdev_retry_count": 3, 00:09:11.456 "transport_ack_timeout": 0, 00:09:11.456 "ctrlr_loss_timeout_sec": 0, 00:09:11.456 "reconnect_delay_sec": 0, 00:09:11.456 "fast_io_fail_timeout_sec": 0, 00:09:11.456 "disable_auto_failback": false, 00:09:11.456 "generate_uuids": false, 00:09:11.456 "transport_tos": 0, 00:09:11.456 "nvme_error_stat": false, 00:09:11.456 "rdma_srq_size": 0, 
00:09:11.456 "io_path_stat": false, 00:09:11.456 "allow_accel_sequence": false, 00:09:11.456 "rdma_max_cq_size": 0, 00:09:11.456 "rdma_cm_event_timeout_ms": 0, 00:09:11.456 "dhchap_digests": [ 00:09:11.456 "sha256", 00:09:11.456 "sha384", 00:09:11.456 "sha512" 00:09:11.456 ], 00:09:11.456 "dhchap_dhgroups": [ 00:09:11.456 "null", 00:09:11.456 "ffdhe2048", 00:09:11.456 "ffdhe3072", 00:09:11.456 "ffdhe4096", 00:09:11.456 "ffdhe6144", 00:09:11.456 "ffdhe8192" 00:09:11.456 ] 00:09:11.456 } 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "method": "bdev_nvme_set_hotplug", 00:09:11.456 "params": { 00:09:11.456 "period_us": 100000, 00:09:11.456 "enable": false 00:09:11.456 } 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "method": "bdev_iscsi_set_options", 00:09:11.456 "params": { 00:09:11.456 "timeout_sec": 30 00:09:11.456 } 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "method": "bdev_wait_for_examine" 00:09:11.456 } 00:09:11.456 ] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "nvmf", 00:09:11.456 "config": [ 00:09:11.456 { 00:09:11.456 "method": "nvmf_set_config", 00:09:11.456 "params": { 00:09:11.456 "discovery_filter": "match_any", 00:09:11.456 "admin_cmd_passthru": { 00:09:11.456 "identify_ctrlr": false 00:09:11.456 } 00:09:11.456 } 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "method": "nvmf_set_max_subsystems", 00:09:11.456 "params": { 00:09:11.456 "max_subsystems": 1024 00:09:11.456 } 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "method": "nvmf_set_crdt", 00:09:11.456 "params": { 00:09:11.456 "crdt1": 0, 00:09:11.456 "crdt2": 0, 00:09:11.456 "crdt3": 0 00:09:11.456 } 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "method": "nvmf_create_transport", 00:09:11.456 "params": { 00:09:11.456 "trtype": "TCP", 00:09:11.456 "max_queue_depth": 128, 00:09:11.456 "max_io_qpairs_per_ctrlr": 127, 00:09:11.456 "in_capsule_data_size": 4096, 00:09:11.456 "max_io_size": 131072, 00:09:11.456 "io_unit_size": 131072, 00:09:11.456 "max_aq_depth": 128, 00:09:11.456 "num_shared_buffers": 511, 00:09:11.456 "buf_cache_size": 4294967295, 00:09:11.456 "dif_insert_or_strip": false, 00:09:11.456 "zcopy": false, 00:09:11.456 "c2h_success": true, 00:09:11.456 "sock_priority": 0, 00:09:11.456 "abort_timeout_sec": 1, 00:09:11.456 "ack_timeout": 0, 00:09:11.456 "data_wr_pool_size": 0 00:09:11.456 } 00:09:11.456 } 00:09:11.456 ] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "nbd", 00:09:11.456 "config": [] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "vhost_blk", 00:09:11.456 "config": [] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "scsi", 00:09:11.456 "config": null 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "iscsi", 00:09:11.456 "config": [ 00:09:11.456 { 00:09:11.456 "method": "iscsi_set_options", 00:09:11.456 "params": { 00:09:11.456 "node_base": "iqn.2016-06.io.spdk", 00:09:11.456 "max_sessions": 128, 00:09:11.456 "max_connections_per_session": 2, 00:09:11.456 "max_queue_depth": 64, 00:09:11.456 "default_time2wait": 2, 00:09:11.456 "default_time2retain": 20, 00:09:11.456 "first_burst_length": 8192, 00:09:11.456 "immediate_data": true, 00:09:11.456 "allow_duplicated_isid": false, 00:09:11.456 "error_recovery_level": 0, 00:09:11.456 "nop_timeout": 60, 00:09:11.456 "nop_in_interval": 30, 00:09:11.456 "disable_chap": false, 00:09:11.456 "require_chap": false, 00:09:11.456 "mutual_chap": false, 00:09:11.456 "chap_group": 0, 00:09:11.456 "max_large_datain_per_connection": 64, 00:09:11.456 "max_r2t_per_connection": 4, 00:09:11.456 "pdu_pool_size": 36864, 00:09:11.456 
"immediate_data_pool_size": 16384, 00:09:11.456 "data_out_pool_size": 2048 00:09:11.456 } 00:09:11.456 } 00:09:11.456 ] 00:09:11.456 }, 00:09:11.456 { 00:09:11.456 "subsystem": "vhost_scsi", 00:09:11.456 "config": [] 00:09:11.456 } 00:09:11.456 ] 00:09:11.456 } 00:09:11.456 11:20:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 111260 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 111260 ']' 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 111260 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111260 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.457 killing process with pid 111260 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111260' 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 111260 00:09:11.457 11:20:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 111260 00:09:13.987 11:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=111312 00:09:13.987 11:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:13.987 11:20:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 111312 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 111312 ']' 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 111312 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111312 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.265 killing process with pid 111312 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111312' 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 111312 00:09:19.265 11:20:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 111312 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:20.642 00:09:20.642 real 0m10.550s 00:09:20.642 user 0m9.835s 
00:09:20.642 sys 0m1.053s 00:09:20.642 ************************************ 00:09:20.642 END TEST skip_rpc_with_json 00:09:20.642 ************************************ 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:20.642 11:20:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:20.642 11:20:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:20.642 11:20:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.642 11:20:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.642 11:20:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.642 ************************************ 00:09:20.642 START TEST skip_rpc_with_delay 00:09:20.642 ************************************ 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:20.642 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:20.642 [2024-07-13 11:20:55.380972] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
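The ERROR above is the expected outcome: --wait-for-rpc is rejected because --no-rpc-server removes the RPC server it depends on. The flag's intended use keeps the server and defers subsystem initialization until an explicit RPC; a hedged sketch (framework_start_init is the release call, the early-init RPC named in the comment is illustrative):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # wait for /var/tmp/spdk.sock to answer, then issue any early-init RPCs
  # (e.g. sock_impl_set_options, as seen in the saved config above)
  "$RPC" framework_start_init   # lets the target finish subsystem initialization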
00:09:20.642 [2024-07-13 11:20:55.381480] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:20.901 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:09:20.901 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.901 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:20.901 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.901 00:09:20.901 real 0m0.143s 00:09:20.901 user 0m0.073s 00:09:20.901 sys 0m0.068s 00:09:20.901 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.901 11:20:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:20.901 ************************************ 00:09:20.901 END TEST skip_rpc_with_delay 00:09:20.901 ************************************ 00:09:20.902 11:20:55 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:20.902 11:20:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:20.902 11:20:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:20.902 11:20:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:20.902 11:20:55 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:20.902 11:20:55 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.902 11:20:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.902 ************************************ 00:09:20.902 START TEST exit_on_failed_rpc_init 00:09:20.902 ************************************ 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=111466 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 111466 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 111466 ']' 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.902 11:20:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:20.902 [2024-07-13 11:20:55.577791] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:20.902 [2024-07-13 11:20:55.578035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111466 ] 00:09:21.161 [2024-07-13 11:20:55.749475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.419 [2024-07-13 11:20:55.940361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.987 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.988 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.988 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:21.988 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:21.988 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:21.988 11:20:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:22.246 [2024-07-13 11:20:56.761283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:22.246 [2024-07-13 11:20:56.761769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111489 ] 00:09:22.246 [2024-07-13 11:20:56.936286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.505 [2024-07-13 11:20:57.153764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.505 [2024-07-13 11:20:57.153887] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
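The 'socket in use' error above is what exit_on_failed_rpc_init provokes: the second target tries to listen on the same /var/tmp/spdk.sock as pid 111466. Two targets can coexist if the second gets its own RPC socket; a hedged sketch, assuming the app's -r option selects the RPC listen address and rpc.py's -s points the client at it:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version   # client must target the same socket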
00:09:22.505 [2024-07-13 11:20:57.153938] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:22.505 [2024-07-13 11:20:57.153963] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 111466 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 111466 ']' 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 111466 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111466 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:23.073 killing process with pid 111466 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111466' 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 111466 00:09:23.073 11:20:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 111466 00:09:24.978 00:09:24.978 real 0m4.075s 00:09:24.978 user 0m4.537s 00:09:24.978 sys 0m0.687s 00:09:24.978 11:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.978 11:20:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:24.978 ************************************ 00:09:24.978 END TEST exit_on_failed_rpc_init 00:09:24.978 ************************************ 00:09:24.978 11:20:59 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:09:24.978 11:20:59 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:24.978 ************************************ 00:09:24.978 END TEST skip_rpc 00:09:24.978 ************************************ 00:09:24.978 00:09:24.978 real 0m22.147s 00:09:24.978 user 0m21.112s 00:09:24.978 sys 0m2.406s 00:09:24.978 11:20:59 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.978 11:20:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:24.978 11:20:59 -- common/autotest_common.sh@1142 -- # return 0 00:09:24.978 11:20:59 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:24.978 11:20:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
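The errors above are the expected outcome of exit_on_failed_rpc_init: the first spdk_tgt (pid 111466) already owns the default RPC socket /var/tmp/spdk.sock, so the second instance started with -m 0x2 cannot bind it, rpc_initialize fails, and the app stops with a non-zero status. A minimal standalone sketch of the same check, assuming a built SPDK tree; the relative paths and the polling loop are illustrative, not the harness's helpers:

    # claim the default RPC socket with a first target
    ./build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # a second target on the same default socket should refuse to start
    if ./build/bin/spdk_tgt -m 0x2; then
        echo 'unexpected: second target started while the socket was busy' >&2
    else
        echo 'second target exited non-zero, as the test expects'
    fi

    kill "$first_pid"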
00:09:24.978 11:20:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.978 11:20:59 -- common/autotest_common.sh@10 -- # set +x 00:09:24.978 ************************************ 00:09:24.978 START TEST rpc_client 00:09:24.978 ************************************ 00:09:24.978 11:20:59 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:25.237 * Looking for test storage... 00:09:25.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:25.237 11:20:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:25.237 OK 00:09:25.237 11:20:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:25.237 00:09:25.237 real 0m0.149s 00:09:25.237 user 0m0.093s 00:09:25.237 sys 0m0.066s 00:09:25.237 11:20:59 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.237 11:20:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:25.237 ************************************ 00:09:25.237 END TEST rpc_client 00:09:25.237 ************************************ 00:09:25.237 11:20:59 -- common/autotest_common.sh@1142 -- # return 0 00:09:25.237 11:20:59 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:25.237 11:20:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.237 11:20:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.237 11:20:59 -- common/autotest_common.sh@10 -- # set +x 00:09:25.237 ************************************ 00:09:25.237 START TEST json_config 00:09:25.237 ************************************ 00:09:25.237 11:20:59 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:25.237 11:20:59 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:be286c62-e2ba-4b20-9ff7-78d3f5986beb 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=be286c62-e2ba-4b20-9ff7-78d3f5986beb 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.237 11:20:59 
json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.237 11:20:59 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.237 11:20:59 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.237 11:20:59 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.237 11:20:59 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:25.237 11:20:59 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:25.237 11:20:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:25.237 11:20:59 json_config -- paths/export.sh@5 -- # export PATH 00:09:25.237 11:20:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@47 -- # : 0 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:25.237 11:20:59 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:25.237 11:20:59 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:25.237 11:20:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:25.237 11:20:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:25.237 11:20:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:25.237 11:20:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 
00:09:25.237 11:20:59 json_config -- json_config/json_config.sh@31 -- # app_pid=([target]="" [initiator]="") 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@32 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@33 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@34 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:25.238 INFO: JSON configuration test init 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.238 11:20:59 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:25.238 11:20:59 json_config -- json_config/common.sh@9 -- # local app=target 00:09:25.238 11:20:59 json_config -- json_config/common.sh@10 -- # shift 00:09:25.238 11:20:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:25.238 11:20:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:25.238 11:20:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:25.238 11:20:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:25.238 11:20:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:25.238 11:20:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=111652 00:09:25.238 Waiting for target to run... 00:09:25.238 11:20:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:25.238 11:20:59 json_config -- json_config/common.sh@25 -- # waitforlisten 111652 /var/tmp/spdk_tgt.sock 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@829 -- # '[' -z 111652 ']' 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
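json_config_test_start_app launches the target with --wait-for-rpc (the full command line appears below), which brings up only the RPC server on /var/tmp/spdk_tgt.sock and defers subsystem initialization until an explicit RPC; the rest of the suite drives the target through that socket. A rough sketch of the start-up handshake, with an illustrative polling loop in place of the harness's waitforlisten helper:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # poll until the RPC socket answers
    until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # apply any pre-init configuration here, then finish initialization
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init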
00:09:25.238 11:20:59 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.238 11:20:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.497 [2024-07-13 11:21:00.026510] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:25.497 [2024-07-13 11:21:00.026731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111652 ] 00:09:25.755 [2024-07-13 11:21:00.485227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.014 [2024-07-13 11:21:00.677692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.630 00:09:26.630 11:21:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.630 11:21:01 json_config -- common/autotest_common.sh@862 -- # return 0 00:09:26.630 11:21:01 json_config -- json_config/common.sh@26 -- # echo '' 00:09:26.630 11:21:01 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:09:26.630 11:21:01 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:26.630 11:21:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:26.630 11:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:26.630 11:21:01 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:26.630 11:21:01 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:26.630 11:21:01 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.630 11:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:26.630 11:21:01 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:26.630 11:21:01 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:26.630 11:21:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:27.226 11:21:01 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:27.226 11:21:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:27.226 11:21:01 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.226 11:21:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.226 11:21:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:27.226 11:21:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=("bdev_register" "bdev_unregister") 00:09:27.226 11:21:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:27.226 11:21:01 json_config -- json_config/json_config.sh@48 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:09:27.226 11:21:01 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:27.226 11:21:01 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:27.226 11:21:01 json_config -- 
json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:27.484 11:21:02 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:27.484 11:21:02 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:27.484 11:21:02 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:27.484 11:21:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:27.485 11:21:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@55 -- # return 0 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:09:27.743 11:21:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.743 11:21:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:27.743 11:21:02 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:27.743 11:21:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:28.002 11:21:02 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:28.002 11:21:02 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:28.002 11:21:02 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:28.002 11:21:02 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:09:28.002 11:21:02 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:09:28.002 11:21:02 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:28.002 11:21:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:28.002 Nvme0n1p0 Nvme0n1p1 00:09:28.002 11:21:02 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:28.002 11:21:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:28.264 [2024-07-13 11:21:02.953912] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:28.264 [2024-07-13 11:21:02.954179] bdev.c:8157:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: Malloc0 00:09:28.264 00:09:28.264 11:21:02 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:28.264 11:21:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:28.522 Malloc3 00:09:28.522 11:21:03 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:28.522 11:21:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:28.780 [2024-07-13 11:21:03.392702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:28.780 [2024-07-13 11:21:03.392954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.780 [2024-07-13 11:21:03.393150] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:28.780 [2024-07-13 11:21:03.393273] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.780 [2024-07-13 11:21:03.395707] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.780 [2024-07-13 11:21:03.395923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:28.780 PTBdevFromMalloc3 00:09:28.780 11:21:03 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:28.780 11:21:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:29.038 Null0 00:09:29.039 11:21:03 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:29.039 11:21:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:29.297 Malloc0 00:09:29.297 11:21:03 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:29.297 11:21:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:29.555 Malloc1 00:09:29.555 11:21:04 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:29.555 11:21:04 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:29.812 102400+0 records in 00:09:29.812 102400+0 records out 00:09:29.812 104857600 bytes (105 MB, 100 MiB) copied, 0.262519 s, 399 MB/s 00:09:29.812 11:21:04 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:29.812 11:21:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:30.069 aio_disk 00:09:30.069 11:21:04 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:30.069 11:21:04 json_config -- 
json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:30.069 11:21:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:30.069 5e17275a-2b96-4f07-98b8-57abce827d96 00:09:30.069 11:21:04 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:30.069 11:21:04 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:30.069 11:21:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:30.326 11:21:05 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:30.326 11:21:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:30.583 11:21:05 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:30.583 11:21:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:30.840 11:21:05 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:30.840 11:21:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:b26c0975-da50-49b6-9d9a-6e34d6ddda4a bdev_register:dc2de309-84f5-4194-99ea-85ccebd5aa97 bdev_register:6dc17ebd-59a4-466f-9fd9-2b10ccd50807 bdev_register:05aa7b4c-f181-4153-9f86-e4e5d785c084 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@71 -- # sort 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:b26c0975-da50-49b6-9d9a-6e34d6ddda4a bdev_register:dc2de309-84f5-4194-99ea-85ccebd5aa97 
bdev_register:6dc17ebd-59a4-466f-9fd9-2b10ccd50807 bdev_register:05aa7b4c-f181-4153-9f86-e4e5d785c084 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@72 -- # sort 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:31.097 11:21:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:31.097 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:31.098 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.098 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- 
json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:b26c0975-da50-49b6-9d9a-6e34d6ddda4a 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:dc2de309-84f5-4194-99ea-85ccebd5aa97 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:6dc17ebd-59a4-466f-9fd9-2b10ccd50807 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:05aa7b4c-f181-4153-9f86-e4e5d785c084 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:05aa7b4c-f181-4153-9f86-e4e5d785c084 bdev_register:6dc17ebd-59a4-466f-9fd9-2b10ccd50807 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b26c0975-da50-49b6-9d9a-6e34d6ddda4a bdev_register:dc2de309-84f5-4194-99ea-85ccebd5aa97 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\0\5\a\a\7\b\4\c\-\f\1\8\1\-\4\1\5\3\-\9\f\8\6\-\e\4\e\5\d\7\8\5\c\0\8\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\d\c\1\7\e\b\d\-\5\9\a\4\-\4\6\6\f\-\9\f\d\9\-\2\b\1\0\c\c\d\5\0\8\0\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\2\6\c\0\9\7\5\-\d\a\5\0\-\4\9\b\6\-\9\d\9\a\-\6\e\3\4\d\6\d\d\d\a\4\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\c\2\d\e\3\0\9\-\8\4\f\5\-\4\1\9\4\-\9\9\e\a\-\8\5\c\c\e\b\d\5\a\a\9\7 ]] 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@86 -- # cat 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:05aa7b4c-f181-4153-9f86-e4e5d785c084 bdev_register:6dc17ebd-59a4-466f-9fd9-2b10ccd50807 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b26c0975-da50-49b6-9d9a-6e34d6ddda4a bdev_register:dc2de309-84f5-4194-99ea-85ccebd5aa97 00:09:31.356 Expected events matched: 00:09:31.356 bdev_register:05aa7b4c-f181-4153-9f86-e4e5d785c084 00:09:31.356 bdev_register:6dc17ebd-59a4-466f-9fd9-2b10ccd50807 00:09:31.356 bdev_register:Malloc0 00:09:31.356 bdev_register:Malloc0p0 00:09:31.356 bdev_register:Malloc0p1 00:09:31.356 bdev_register:Malloc0p2 00:09:31.356 bdev_register:Malloc1 00:09:31.356 bdev_register:Malloc3 00:09:31.356 bdev_register:Null0 00:09:31.356 bdev_register:Nvme0n1 00:09:31.356 bdev_register:Nvme0n1p0 00:09:31.356 bdev_register:Nvme0n1p1 00:09:31.356 bdev_register:PTBdevFromMalloc3 00:09:31.356 bdev_register:aio_disk 00:09:31.356 bdev_register:b26c0975-da50-49b6-9d9a-6e34d6ddda4a 00:09:31.356 bdev_register:dc2de309-84f5-4194-99ea-85ccebd5aa97 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:09:31.356 11:21:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.356 11:21:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:09:31.356 11:21:05 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.356 11:21:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:09:31.356 11:21:05 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:31.356 11:21:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:31.614 MallocBdevForConfigChangeCheck 00:09:31.614 11:21:06 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:09:31.614 11:21:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.614 11:21:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:31.614 11:21:06 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:09:31.614 11:21:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:31.872 INFO: shutting down applications... 
00:09:31.872 11:21:06 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:09:31.872 11:21:06 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:09:31.872 11:21:06 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:09:31.872 11:21:06 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:09:31.872 11:21:06 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:32.130 [2024-07-13 11:21:06.718158] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:09:32.388 Calling clear_vhost_scsi_subsystem 00:09:32.388 Calling clear_iscsi_subsystem 00:09:32.388 Calling clear_vhost_blk_subsystem 00:09:32.388 Calling clear_nbd_subsystem 00:09:32.388 Calling clear_nvmf_subsystem 00:09:32.388 Calling clear_bdev_subsystem 00:09:32.388 11:21:06 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:32.388 11:21:06 json_config -- json_config/json_config.sh@343 -- # count=100 00:09:32.388 11:21:06 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:09:32.388 11:21:06 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:32.388 11:21:06 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:32.388 11:21:06 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:32.645 11:21:07 json_config -- json_config/json_config.sh@345 -- # break 00:09:32.645 11:21:07 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:09:32.645 11:21:07 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:09:32.646 11:21:07 json_config -- json_config/common.sh@31 -- # local app=target 00:09:32.646 11:21:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:32.646 11:21:07 json_config -- json_config/common.sh@35 -- # [[ -n 111652 ]] 00:09:32.646 11:21:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 111652 00:09:32.646 11:21:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:32.646 11:21:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:32.646 11:21:07 json_config -- json_config/common.sh@41 -- # kill -0 111652 00:09:32.646 11:21:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:33.211 11:21:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:33.211 11:21:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:33.211 11:21:07 json_config -- json_config/common.sh@41 -- # kill -0 111652 00:09:33.211 11:21:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:33.776 SPDK target shutdown done 00:09:33.776 INFO: relaunching applications... 00:09:33.776 Waiting for target to run... 00:09:33.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:09:33.776 11:21:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:33.776 11:21:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:33.776 11:21:08 json_config -- json_config/common.sh@41 -- # kill -0 111652 00:09:33.776 11:21:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:33.776 11:21:08 json_config -- json_config/common.sh@43 -- # break 00:09:33.776 11:21:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:33.776 11:21:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:33.776 11:21:08 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:09:33.776 11:21:08 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:33.776 11:21:08 json_config -- json_config/common.sh@9 -- # local app=target 00:09:33.776 11:21:08 json_config -- json_config/common.sh@10 -- # shift 00:09:33.776 11:21:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:33.776 11:21:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:33.776 11:21:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:33.776 11:21:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:33.776 11:21:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:33.776 11:21:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=111923 00:09:33.776 11:21:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:33.776 11:21:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:33.776 11:21:08 json_config -- json_config/common.sh@25 -- # waitforlisten 111923 /var/tmp/spdk_tgt.sock 00:09:33.776 11:21:08 json_config -- common/autotest_common.sh@829 -- # '[' -z 111923 ']' 00:09:33.776 11:21:08 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:33.776 11:21:08 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:33.776 11:21:08 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:33.776 11:21:08 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:33.776 11:21:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:33.776 [2024-07-13 11:21:08.371635] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:33.776 [2024-07-13 11:21:08.372299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111923 ] 00:09:34.343 [2024-07-13 11:21:08.807345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.343 [2024-07-13 11:21:08.979585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.911 [2024-07-13 11:21:09.597293] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:34.911 [2024-07-13 11:21:09.597672] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:09:34.911 [2024-07-13 11:21:09.605236] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:34.911 [2024-07-13 11:21:09.605445] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:34.911 [2024-07-13 11:21:09.613261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:34.911 [2024-07-13 11:21:09.613446] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:09:34.911 [2024-07-13 11:21:09.613579] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:09:35.171 [2024-07-13 11:21:09.707375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:35.171 [2024-07-13 11:21:09.707605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.171 [2024-07-13 11:21:09.707690] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:35.171 [2024-07-13 11:21:09.707899] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.171 [2024-07-13 11:21:09.708435] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.171 [2024-07-13 11:21:09.708599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:35.171 00:09:35.171 INFO: Checking if target configuration is the same... 00:09:35.171 11:21:09 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.171 11:21:09 json_config -- common/autotest_common.sh@862 -- # return 0 00:09:35.171 11:21:09 json_config -- json_config/common.sh@26 -- # echo '' 00:09:35.171 11:21:09 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:09:35.171 11:21:09 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:35.171 11:21:09 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:35.171 11:21:09 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:09:35.171 11:21:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:35.171 + '[' 2 -ne 2 ']' 00:09:35.171 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:35.171 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:35.171 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:35.171 +++ basename /dev/fd/62 00:09:35.171 ++ mktemp /tmp/62.XXX 00:09:35.171 + tmp_file_1=/tmp/62.zCN 00:09:35.171 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:35.171 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:35.171 + tmp_file_2=/tmp/spdk_tgt_config.json.pXO 00:09:35.171 + ret=0 00:09:35.171 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:35.739 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:35.739 + diff -u /tmp/62.zCN /tmp/spdk_tgt_config.json.pXO 00:09:35.739 INFO: JSON config files are the same 00:09:35.739 + echo 'INFO: JSON config files are the same' 00:09:35.739 + rm /tmp/62.zCN /tmp/spdk_tgt_config.json.pXO 00:09:35.739 + exit 0 00:09:35.739 11:21:10 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:09:35.739 INFO: changing configuration and checking if this can be detected... 00:09:35.739 11:21:10 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:35.739 11:21:10 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:35.739 11:21:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:35.998 11:21:10 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:35.998 11:21:10 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:09:35.998 11:21:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:35.998 + '[' 2 -ne 2 ']' 00:09:35.998 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:35.998 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:35.998 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:35.998 +++ basename /dev/fd/62 00:09:35.998 ++ mktemp /tmp/62.XXX 00:09:35.998 + tmp_file_1=/tmp/62.THi 00:09:35.998 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:35.998 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:35.998 + tmp_file_2=/tmp/spdk_tgt_config.json.3dK 00:09:35.998 + ret=0 00:09:35.998 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:36.257 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:36.257 + diff -u /tmp/62.THi /tmp/spdk_tgt_config.json.3dK 00:09:36.257 + ret=1 00:09:36.257 + echo '=== Start of file: /tmp/62.THi ===' 00:09:36.257 + cat /tmp/62.THi 00:09:36.257 + echo '=== End of file: /tmp/62.THi ===' 00:09:36.257 + echo '' 00:09:36.257 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3dK ===' 00:09:36.257 + cat /tmp/spdk_tgt_config.json.3dK 00:09:36.258 + echo '=== End of file: /tmp/spdk_tgt_config.json.3dK ===' 00:09:36.258 + echo '' 00:09:36.258 + rm /tmp/62.THi /tmp/spdk_tgt_config.json.3dK 00:09:36.258 + exit 1 00:09:36.258 INFO: configuration change detected. 00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
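Both comparison rounds above reduce to: dump the live configuration over RPC with save_config, canonicalize the two JSON documents, and diff them; the first round matches, and after MallocBdevForConfigChangeCheck is deleted the diff returns 1 and the change is reported. An approximate standalone equivalent, using jq -S as a stand-in for the harness's config_filter.py sort (it normalizes key order only, so the result is not byte-for-byte identical to the harness's filtering):

    live=$(mktemp); ref=$(mktemp)
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | jq -S . > "$live"
    jq -S . spdk_tgt_config.json > "$ref"
    if diff -u "$ref" "$live" >/dev/null; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f "$live" "$ref"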
00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:09:36.258 11:21:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.258 11:21:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@317 -- # [[ -n 111923 ]] 00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:09:36.258 11:21:10 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.258 11:21:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:09:36.258 11:21:10 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:09:36.258 11:21:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:09:36.515 11:21:11 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:09:36.515 11:21:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:09:36.773 11:21:11 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:09:36.773 11:21:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:09:37.031 11:21:11 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:09:37.031 11:21:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:09:37.288 11:21:12 json_config -- json_config/json_config.sh@193 -- # uname -s 00:09:37.288 11:21:12 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:09:37.288 11:21:12 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:09:37.288 11:21:12 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:09:37.288 11:21:12 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:09:37.288 11:21:12 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.288 11:21:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:37.547 11:21:12 json_config -- json_config/json_config.sh@323 -- # killprocess 111923 00:09:37.547 11:21:12 json_config -- common/autotest_common.sh@948 -- # '[' -z 111923 ']' 00:09:37.547 11:21:12 json_config -- common/autotest_common.sh@952 -- # kill -0 111923 00:09:37.547 11:21:12 json_config -- common/autotest_common.sh@953 -- # uname 00:09:37.547 11:21:12 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.547 11:21:12 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 111923 00:09:37.547 11:21:12 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:37.547 11:21:12 json_config 
-- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:37.547 killing process with pid 111923 00:09:37.547 11:21:12 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 111923' 00:09:37.547 11:21:12 json_config -- common/autotest_common.sh@967 -- # kill 111923 00:09:37.547 11:21:12 json_config -- common/autotest_common.sh@972 -- # wait 111923 00:09:38.924 11:21:13 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:38.924 11:21:13 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:09:38.924 11:21:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:38.924 11:21:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:38.924 11:21:13 json_config -- json_config/json_config.sh@328 -- # return 0 00:09:38.924 INFO: Success 00:09:38.924 11:21:13 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:09:38.924 00:09:38.924 real 0m13.473s 00:09:38.924 user 0m19.347s 00:09:38.924 sys 0m2.163s 00:09:38.924 11:21:13 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:38.924 11:21:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:38.924 ************************************ 00:09:38.924 END TEST json_config 00:09:38.924 ************************************ 00:09:38.924 11:21:13 -- common/autotest_common.sh@1142 -- # return 0 00:09:38.924 11:21:13 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:38.924 11:21:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:38.924 11:21:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.924 11:21:13 -- common/autotest_common.sh@10 -- # set +x 00:09:38.924 ************************************ 00:09:38.924 START TEST json_config_extra_key 00:09:38.924 ************************************ 00:09:38.924 11:21:13 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:47661e9a-91d7-48c5-b615-179ff4693e80 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@18 
-- # NVME_HOSTID=47661e9a-91d7-48c5-b615-179ff4693e80 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.924 11:21:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.924 11:21:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.924 11:21:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.924 11:21:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:38.924 11:21:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:38.924 11:21:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:38.924 11:21:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:38.924 11:21:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:09:38.924 11:21:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=([target]="") 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=([target]='-m 0x1 -s 1024') 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:38.924 INFO: launching applications... 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:38.924 11:21:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=112117 00:09:38.924 Waiting for target to run... 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 112117 /var/tmp/spdk_tgt.sock 00:09:38.924 11:21:13 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 112117 ']' 00:09:38.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:09:38.924 11:21:13 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:38.924 11:21:13 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:38.924 11:21:13 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.924 11:21:13 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:38.924 11:21:13 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.924 11:21:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:38.924 [2024-07-13 11:21:13.539110] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:38.924 [2024-07-13 11:21:13.539351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112117 ] 00:09:39.492 [2024-07-13 11:21:14.107378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.750 [2024-07-13 11:21:14.361488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.684 11:21:15 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:40.684 11:21:15 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:09:40.684 11:21:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:40.684 00:09:40.684 INFO: shutting down applications... 00:09:40.684 11:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
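The records above show the pattern json_config_extra_key exercises: start spdk_tgt from a JSON subsystem configuration on a private RPC socket, then poll that socket until the target answers. A minimal sketch of the same steps, reusing the command line from the trace (the retry count, the sleep interval, and the use of spdk_get_version as the probe are illustrative assumptions, not the harness's exact waitforlisten internals):

    # Start the target pinned to core 0 (-m 0x1) with 1024 MB of DPDK memory (-s 1024),
    # a private RPC socket, and the extra_key.json configuration (command line as in the trace).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!

    # Poll the RPC socket until the target responds; spdk_get_version is a cheap query.
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
            spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done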
00:09:40.684 11:21:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:40.684 11:21:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:40.684 11:21:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:40.684 11:21:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 112117 ]] 00:09:40.684 11:21:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 112117 00:09:40.684 11:21:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:40.684 11:21:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:40.684 11:21:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112117 00:09:40.684 11:21:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:40.942 11:21:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:40.942 11:21:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:40.942 11:21:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112117 00:09:40.942 11:21:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:41.508 11:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:41.508 11:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:41.508 11:21:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112117 00:09:41.508 11:21:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:42.075 11:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:42.075 11:21:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:42.075 11:21:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112117 00:09:42.075 11:21:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:42.642 11:21:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:42.642 11:21:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:42.642 11:21:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112117 00:09:42.642 11:21:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:42.900 11:21:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:42.901 11:21:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:42.901 11:21:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112117 00:09:42.901 11:21:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:43.469 11:21:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:43.469 11:21:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:43.469 11:21:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 112117 00:09:43.469 SPDK target shutdown done 00:09:43.469 11:21:18 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:43.469 11:21:18 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:43.469 11:21:18 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:43.469 11:21:18 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:43.469 Success 00:09:43.469 11:21:18 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:43.469 00:09:43.469 real 0m4.722s 00:09:43.469 user 0m4.443s 00:09:43.469 sys 0m0.640s 
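The shutdown sequence above follows the harness's standard pattern: send SIGINT to the recorded pid, then poll with kill -0 (which only checks that the process still exists) until it is gone or a retry budget runs out. Condensed, with the pid and the 30 x 0.5 s budget taken from the trace:

    # Ask the target to shut down cleanly, then wait for the pid to disappear.
    kill -SIGINT 112117
    for i in $(seq 1 30); do
        kill -0 112117 2>/dev/null || break   # loop ends once the process has exited
        sleep 0.5
    done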
00:09:43.469 11:21:18 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.469 11:21:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:43.469 ************************************ 00:09:43.469 END TEST json_config_extra_key 00:09:43.469 ************************************ 00:09:43.469 11:21:18 -- common/autotest_common.sh@1142 -- # return 0 00:09:43.469 11:21:18 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:43.469 11:21:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:43.469 11:21:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.469 11:21:18 -- common/autotest_common.sh@10 -- # set +x 00:09:43.469 ************************************ 00:09:43.469 START TEST alias_rpc 00:09:43.469 ************************************ 00:09:43.469 11:21:18 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:43.728 * Looking for test storage... 00:09:43.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:43.728 11:21:18 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:43.728 11:21:18 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=112232 00:09:43.728 11:21:18 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:43.728 11:21:18 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 112232 00:09:43.728 11:21:18 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 112232 ']' 00:09:43.728 11:21:18 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.728 11:21:18 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:43.728 11:21:18 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.728 11:21:18 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:43.728 11:21:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.728 [2024-07-13 11:21:18.319434] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:43.728 [2024-07-13 11:21:18.319831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112232 ] 00:09:43.987 [2024-07-13 11:21:18.482612] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.245 [2024-07-13 11:21:18.732241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:45.178 11:21:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:45.178 11:21:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 112232 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 112232 ']' 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 112232 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112232 00:09:45.178 killing process with pid 112232 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112232' 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@967 -- # kill 112232 00:09:45.178 11:21:19 alias_rpc -- common/autotest_common.sh@972 -- # wait 112232 00:09:47.708 00:09:47.708 real 0m3.969s 00:09:47.708 user 0m4.102s 00:09:47.708 sys 0m0.613s 00:09:47.708 11:21:22 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.708 ************************************ 00:09:47.708 END TEST alias_rpc 00:09:47.708 ************************************ 00:09:47.708 11:21:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:47.708 11:21:22 -- common/autotest_common.sh@1142 -- # return 0 00:09:47.708 11:21:22 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:09:47.708 11:21:22 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:47.708 11:21:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:47.708 11:21:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.708 11:21:22 -- common/autotest_common.sh@10 -- # set +x 00:09:47.708 ************************************ 00:09:47.708 START TEST spdkcli_tcp 00:09:47.708 ************************************ 00:09:47.708 11:21:22 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:09:47.708 * Looking for test storage... 
00:09:47.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:47.708 11:21:22 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.708 11:21:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=112355 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 112355 00:09:47.708 11:21:22 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:47.708 11:21:22 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 112355 ']' 00:09:47.708 11:21:22 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.708 11:21:22 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.708 11:21:22 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.708 11:21:22 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.708 11:21:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:47.708 [2024-07-13 11:21:22.346618] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:47.708 [2024-07-13 11:21:22.347058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112355 ] 00:09:47.967 [2024-07-13 11:21:22.507836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:47.967 [2024-07-13 11:21:22.702918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.967 [2024-07-13 11:21:22.702922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.900 11:21:23 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.900 11:21:23 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:09:48.900 11:21:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=112384 00:09:48.900 11:21:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:48.900 11:21:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:49.159 [ 00:09:49.159 "spdk_get_version", 00:09:49.159 "rpc_get_methods", 00:09:49.159 "keyring_get_keys", 00:09:49.159 "trace_get_info", 00:09:49.159 "trace_get_tpoint_group_mask", 00:09:49.159 "trace_disable_tpoint_group", 00:09:49.159 "trace_enable_tpoint_group", 00:09:49.159 "trace_clear_tpoint_mask", 00:09:49.159 "trace_set_tpoint_mask", 00:09:49.159 "framework_get_pci_devices", 00:09:49.159 "framework_get_config", 00:09:49.159 "framework_get_subsystems", 00:09:49.159 "iobuf_get_stats", 00:09:49.159 "iobuf_set_options", 00:09:49.159 "sock_get_default_impl", 00:09:49.159 "sock_set_default_impl", 00:09:49.159 "sock_impl_set_options", 00:09:49.159 "sock_impl_get_options", 00:09:49.159 "vmd_rescan", 00:09:49.159 "vmd_remove_device", 00:09:49.159 "vmd_enable", 00:09:49.159 "accel_get_stats", 00:09:49.159 "accel_set_options", 00:09:49.159 "accel_set_driver", 00:09:49.159 "accel_crypto_key_destroy", 00:09:49.160 "accel_crypto_keys_get", 00:09:49.160 "accel_crypto_key_create", 00:09:49.160 "accel_assign_opc", 00:09:49.160 "accel_get_module_info", 00:09:49.160 "accel_get_opc_assignments", 00:09:49.160 "notify_get_notifications", 00:09:49.160 "notify_get_types", 00:09:49.160 "bdev_get_histogram", 00:09:49.160 "bdev_enable_histogram", 00:09:49.160 "bdev_set_qos_limit", 00:09:49.160 "bdev_set_qd_sampling_period", 00:09:49.160 "bdev_get_bdevs", 00:09:49.160 "bdev_reset_iostat", 00:09:49.160 "bdev_get_iostat", 00:09:49.160 "bdev_examine", 00:09:49.160 "bdev_wait_for_examine", 00:09:49.160 "bdev_set_options", 00:09:49.160 "scsi_get_devices", 00:09:49.160 "thread_set_cpumask", 00:09:49.160 "framework_get_governor", 00:09:49.160 "framework_get_scheduler", 00:09:49.160 "framework_set_scheduler", 00:09:49.160 "framework_get_reactors", 00:09:49.160 "thread_get_io_channels", 00:09:49.160 "thread_get_pollers", 00:09:49.160 "thread_get_stats", 00:09:49.160 "framework_monitor_context_switch", 00:09:49.160 "spdk_kill_instance", 00:09:49.160 "log_enable_timestamps", 00:09:49.160 "log_get_flags", 00:09:49.160 "log_clear_flag", 00:09:49.160 "log_set_flag", 00:09:49.160 "log_get_level", 00:09:49.160 "log_set_level", 00:09:49.160 "log_get_print_level", 00:09:49.160 "log_set_print_level", 00:09:49.160 "framework_enable_cpumask_locks", 00:09:49.160 "framework_disable_cpumask_locks", 00:09:49.160 "framework_wait_init", 00:09:49.160 "framework_start_init", 00:09:49.160 
"virtio_blk_create_transport", 00:09:49.160 "virtio_blk_get_transports", 00:09:49.160 "vhost_controller_set_coalescing", 00:09:49.160 "vhost_get_controllers", 00:09:49.160 "vhost_delete_controller", 00:09:49.160 "vhost_create_blk_controller", 00:09:49.160 "vhost_scsi_controller_remove_target", 00:09:49.160 "vhost_scsi_controller_add_target", 00:09:49.160 "vhost_start_scsi_controller", 00:09:49.160 "vhost_create_scsi_controller", 00:09:49.160 "nbd_get_disks", 00:09:49.160 "nbd_stop_disk", 00:09:49.160 "nbd_start_disk", 00:09:49.160 "env_dpdk_get_mem_stats", 00:09:49.160 "nvmf_stop_mdns_prr", 00:09:49.160 "nvmf_publish_mdns_prr", 00:09:49.160 "nvmf_subsystem_get_listeners", 00:09:49.160 "nvmf_subsystem_get_qpairs", 00:09:49.160 "nvmf_subsystem_get_controllers", 00:09:49.160 "nvmf_get_stats", 00:09:49.160 "nvmf_get_transports", 00:09:49.160 "nvmf_create_transport", 00:09:49.160 "nvmf_get_targets", 00:09:49.160 "nvmf_delete_target", 00:09:49.160 "nvmf_create_target", 00:09:49.160 "nvmf_subsystem_allow_any_host", 00:09:49.160 "nvmf_subsystem_remove_host", 00:09:49.160 "nvmf_subsystem_add_host", 00:09:49.160 "nvmf_ns_remove_host", 00:09:49.160 "nvmf_ns_add_host", 00:09:49.160 "nvmf_subsystem_remove_ns", 00:09:49.160 "nvmf_subsystem_add_ns", 00:09:49.160 "nvmf_subsystem_listener_set_ana_state", 00:09:49.160 "nvmf_discovery_get_referrals", 00:09:49.160 "nvmf_discovery_remove_referral", 00:09:49.160 "nvmf_discovery_add_referral", 00:09:49.160 "nvmf_subsystem_remove_listener", 00:09:49.160 "nvmf_subsystem_add_listener", 00:09:49.160 "nvmf_delete_subsystem", 00:09:49.160 "nvmf_create_subsystem", 00:09:49.160 "nvmf_get_subsystems", 00:09:49.160 "nvmf_set_crdt", 00:09:49.160 "nvmf_set_config", 00:09:49.160 "nvmf_set_max_subsystems", 00:09:49.160 "iscsi_get_histogram", 00:09:49.160 "iscsi_enable_histogram", 00:09:49.160 "iscsi_set_options", 00:09:49.160 "iscsi_get_auth_groups", 00:09:49.160 "iscsi_auth_group_remove_secret", 00:09:49.160 "iscsi_auth_group_add_secret", 00:09:49.160 "iscsi_delete_auth_group", 00:09:49.160 "iscsi_create_auth_group", 00:09:49.160 "iscsi_set_discovery_auth", 00:09:49.160 "iscsi_get_options", 00:09:49.160 "iscsi_target_node_request_logout", 00:09:49.160 "iscsi_target_node_set_redirect", 00:09:49.160 "iscsi_target_node_set_auth", 00:09:49.160 "iscsi_target_node_add_lun", 00:09:49.160 "iscsi_get_stats", 00:09:49.160 "iscsi_get_connections", 00:09:49.160 "iscsi_portal_group_set_auth", 00:09:49.160 "iscsi_start_portal_group", 00:09:49.160 "iscsi_delete_portal_group", 00:09:49.160 "iscsi_create_portal_group", 00:09:49.160 "iscsi_get_portal_groups", 00:09:49.160 "iscsi_delete_target_node", 00:09:49.160 "iscsi_target_node_remove_pg_ig_maps", 00:09:49.160 "iscsi_target_node_add_pg_ig_maps", 00:09:49.160 "iscsi_create_target_node", 00:09:49.160 "iscsi_get_target_nodes", 00:09:49.160 "iscsi_delete_initiator_group", 00:09:49.160 "iscsi_initiator_group_remove_initiators", 00:09:49.160 "iscsi_initiator_group_add_initiators", 00:09:49.160 "iscsi_create_initiator_group", 00:09:49.160 "iscsi_get_initiator_groups", 00:09:49.160 "keyring_linux_set_options", 00:09:49.160 "keyring_file_remove_key", 00:09:49.160 "keyring_file_add_key", 00:09:49.160 "iaa_scan_accel_module", 00:09:49.160 "dsa_scan_accel_module", 00:09:49.160 "ioat_scan_accel_module", 00:09:49.160 "accel_error_inject_error", 00:09:49.160 "bdev_iscsi_delete", 00:09:49.160 "bdev_iscsi_create", 00:09:49.160 "bdev_iscsi_set_options", 00:09:49.160 "bdev_virtio_attach_controller", 00:09:49.160 "bdev_virtio_scsi_get_devices", 00:09:49.160 
"bdev_virtio_detach_controller", 00:09:49.160 "bdev_virtio_blk_set_hotplug", 00:09:49.160 "bdev_ftl_set_property", 00:09:49.160 "bdev_ftl_get_properties", 00:09:49.160 "bdev_ftl_get_stats", 00:09:49.160 "bdev_ftl_unmap", 00:09:49.160 "bdev_ftl_unload", 00:09:49.160 "bdev_ftl_delete", 00:09:49.160 "bdev_ftl_load", 00:09:49.160 "bdev_ftl_create", 00:09:49.160 "bdev_aio_delete", 00:09:49.160 "bdev_aio_rescan", 00:09:49.160 "bdev_aio_create", 00:09:49.160 "blobfs_create", 00:09:49.160 "blobfs_detect", 00:09:49.160 "blobfs_set_cache_size", 00:09:49.160 "bdev_zone_block_delete", 00:09:49.160 "bdev_zone_block_create", 00:09:49.160 "bdev_delay_delete", 00:09:49.160 "bdev_delay_create", 00:09:49.160 "bdev_delay_update_latency", 00:09:49.160 "bdev_split_delete", 00:09:49.160 "bdev_split_create", 00:09:49.160 "bdev_error_inject_error", 00:09:49.160 "bdev_error_delete", 00:09:49.160 "bdev_error_create", 00:09:49.160 "bdev_raid_set_options", 00:09:49.160 "bdev_raid_remove_base_bdev", 00:09:49.160 "bdev_raid_add_base_bdev", 00:09:49.160 "bdev_raid_delete", 00:09:49.160 "bdev_raid_create", 00:09:49.160 "bdev_raid_get_bdevs", 00:09:49.160 "bdev_lvol_set_parent_bdev", 00:09:49.160 "bdev_lvol_set_parent", 00:09:49.160 "bdev_lvol_check_shallow_copy", 00:09:49.160 "bdev_lvol_start_shallow_copy", 00:09:49.160 "bdev_lvol_grow_lvstore", 00:09:49.160 "bdev_lvol_get_lvols", 00:09:49.160 "bdev_lvol_get_lvstores", 00:09:49.160 "bdev_lvol_delete", 00:09:49.160 "bdev_lvol_set_read_only", 00:09:49.160 "bdev_lvol_resize", 00:09:49.160 "bdev_lvol_decouple_parent", 00:09:49.160 "bdev_lvol_inflate", 00:09:49.160 "bdev_lvol_rename", 00:09:49.160 "bdev_lvol_clone_bdev", 00:09:49.160 "bdev_lvol_clone", 00:09:49.160 "bdev_lvol_snapshot", 00:09:49.160 "bdev_lvol_create", 00:09:49.160 "bdev_lvol_delete_lvstore", 00:09:49.160 "bdev_lvol_rename_lvstore", 00:09:49.160 "bdev_lvol_create_lvstore", 00:09:49.160 "bdev_passthru_delete", 00:09:49.160 "bdev_passthru_create", 00:09:49.160 "bdev_nvme_cuse_unregister", 00:09:49.160 "bdev_nvme_cuse_register", 00:09:49.160 "bdev_opal_new_user", 00:09:49.160 "bdev_opal_set_lock_state", 00:09:49.160 "bdev_opal_delete", 00:09:49.160 "bdev_opal_get_info", 00:09:49.160 "bdev_opal_create", 00:09:49.160 "bdev_nvme_opal_revert", 00:09:49.160 "bdev_nvme_opal_init", 00:09:49.160 "bdev_nvme_send_cmd", 00:09:49.160 "bdev_nvme_get_path_iostat", 00:09:49.160 "bdev_nvme_get_mdns_discovery_info", 00:09:49.160 "bdev_nvme_stop_mdns_discovery", 00:09:49.160 "bdev_nvme_start_mdns_discovery", 00:09:49.160 "bdev_nvme_set_multipath_policy", 00:09:49.160 "bdev_nvme_set_preferred_path", 00:09:49.160 "bdev_nvme_get_io_paths", 00:09:49.160 "bdev_nvme_remove_error_injection", 00:09:49.160 "bdev_nvme_add_error_injection", 00:09:49.160 "bdev_nvme_get_discovery_info", 00:09:49.160 "bdev_nvme_stop_discovery", 00:09:49.160 "bdev_nvme_start_discovery", 00:09:49.160 "bdev_nvme_get_controller_health_info", 00:09:49.160 "bdev_nvme_disable_controller", 00:09:49.160 "bdev_nvme_enable_controller", 00:09:49.160 "bdev_nvme_reset_controller", 00:09:49.160 "bdev_nvme_get_transport_statistics", 00:09:49.160 "bdev_nvme_apply_firmware", 00:09:49.160 "bdev_nvme_detach_controller", 00:09:49.160 "bdev_nvme_get_controllers", 00:09:49.160 "bdev_nvme_attach_controller", 00:09:49.160 "bdev_nvme_set_hotplug", 00:09:49.160 "bdev_nvme_set_options", 00:09:49.160 "bdev_null_resize", 00:09:49.160 "bdev_null_delete", 00:09:49.160 "bdev_null_create", 00:09:49.160 "bdev_malloc_delete", 00:09:49.160 "bdev_malloc_create" 00:09:49.160 ] 00:09:49.160 11:21:23 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.160 11:21:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:49.160 11:21:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 112355 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 112355 ']' 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 112355 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112355 00:09:49.160 killing process with pid 112355 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112355' 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 112355 00:09:49.160 11:21:23 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 112355 00:09:51.061 ************************************ 00:09:51.061 END TEST spdkcli_tcp 00:09:51.061 ************************************ 00:09:51.061 00:09:51.061 real 0m3.549s 00:09:51.061 user 0m6.281s 00:09:51.061 sys 0m0.599s 00:09:51.061 11:21:25 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.061 11:21:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:51.061 11:21:25 -- common/autotest_common.sh@1142 -- # return 0 00:09:51.061 11:21:25 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:51.061 11:21:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:51.061 11:21:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.061 11:21:25 -- common/autotest_common.sh@10 -- # set +x 00:09:51.061 ************************************ 00:09:51.061 START TEST dpdk_mem_utility 00:09:51.061 ************************************ 00:09:51.061 11:21:25 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:51.319 * Looking for test storage... 00:09:51.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:51.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:51.319 11:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:51.319 11:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=112483 00:09:51.319 11:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 112483 00:09:51.319 11:21:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:51.319 11:21:25 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 112483 ']' 00:09:51.319 11:21:25 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.319 11:21:25 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.319 11:21:25 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.319 11:21:25 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.319 11:21:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:51.319 [2024-07-13 11:21:25.929306] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:51.319 [2024-07-13 11:21:25.929723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112483 ] 00:09:51.577 [2024-07-13 11:21:26.091482] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.835 [2024-07-13 11:21:26.326858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.401 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:52.401 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:09:52.401 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:52.401 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:52.401 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.401 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:52.401 { 00:09:52.401 "filename": "/tmp/spdk_mem_dump.txt" 00:09:52.401 } 00:09:52.401 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.401 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:52.401 DPDK memory size 820.000000 MiB in 1 heap(s) 00:09:52.401 1 heaps totaling size 820.000000 MiB 00:09:52.401 size: 820.000000 MiB heap id: 0 00:09:52.401 end heaps---------- 00:09:52.401 8 mempools totaling size 598.116089 MiB 00:09:52.401 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:52.401 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:52.401 size: 84.521057 MiB name: bdev_io_112483 00:09:52.401 size: 51.011292 MiB name: evtpool_112483 00:09:52.401 size: 50.003479 MiB name: msgpool_112483 00:09:52.401 size: 21.763794 MiB name: PDU_Pool 00:09:52.401 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:52.401 size: 0.026123 MiB name: Session_Pool 00:09:52.401 end mempools------- 00:09:52.401 6 
memzones totaling size 4.142822 MiB 00:09:52.401 size: 1.000366 MiB name: RG_ring_0_112483 00:09:52.401 size: 1.000366 MiB name: RG_ring_1_112483 00:09:52.401 size: 1.000366 MiB name: RG_ring_4_112483 00:09:52.401 size: 1.000366 MiB name: RG_ring_5_112483 00:09:52.401 size: 0.125366 MiB name: RG_ring_2_112483 00:09:52.401 size: 0.015991 MiB name: RG_ring_3_112483 00:09:52.401 end memzones------- 00:09:52.401 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:52.660 heap id: 0 total size: 820.000000 MiB number of busy elements: 230 number of free elements: 18 00:09:52.660 list of free elements. size: 18.468750 MiB 00:09:52.660 element at address: 0x200000400000 with size: 1.999451 MiB 00:09:52.660 element at address: 0x200000800000 with size: 1.996887 MiB 00:09:52.660 element at address: 0x200007000000 with size: 1.995972 MiB 00:09:52.660 element at address: 0x20000b200000 with size: 1.995972 MiB 00:09:52.660 element at address: 0x200019100040 with size: 0.999939 MiB 00:09:52.660 element at address: 0x200019500040 with size: 0.999939 MiB 00:09:52.660 element at address: 0x200019600000 with size: 0.999329 MiB 00:09:52.660 element at address: 0x200003e00000 with size: 0.996094 MiB 00:09:52.660 element at address: 0x200032200000 with size: 0.994324 MiB 00:09:52.660 element at address: 0x200018e00000 with size: 0.959656 MiB 00:09:52.660 element at address: 0x200019900040 with size: 0.937256 MiB 00:09:52.660 element at address: 0x200000200000 with size: 0.834106 MiB 00:09:52.660 element at address: 0x20001b000000 with size: 0.560974 MiB 00:09:52.660 element at address: 0x200019200000 with size: 0.489197 MiB 00:09:52.660 element at address: 0x200019a00000 with size: 0.485413 MiB 00:09:52.660 element at address: 0x200013800000 with size: 0.468140 MiB 00:09:52.660 element at address: 0x200028400000 with size: 0.399963 MiB 00:09:52.660 element at address: 0x200003a00000 with size: 0.356140 MiB 00:09:52.660 list of standard malloc elements. 
size: 199.266846 MiB 00:09:52.660 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:09:52.660 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:09:52.660 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:09:52.660 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:09:52.660 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:09:52.660 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:09:52.660 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:09:52.660 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:09:52.660 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:09:52.660 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:09:52.660 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:09:52.660 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:09:52.660 element at address: 0x200003aff980 with size: 0.000244 MiB 00:09:52.660 element at address: 0x200003affa80 with size: 0.000244 MiB 00:09:52.660 element at address: 0x200003eff000 with size: 0.000244 MiB 
00:09:52.660 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:09:52.660 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:09:52.661 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:09:52.661 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200013877d80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200013877e80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200013877f80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200013878080 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200013878180 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200013878280 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200013878380 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200013878480 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200013878580 with size: 0.000244 MiB 00:09:52.661 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200019abc680 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b08f9c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:09:52.661 element at 
address: 0x20001b08fcc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b092dc0 
with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200028466640 with size: 0.000244 MiB 00:09:52.661 element at address: 0x200028466740 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846d400 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846d680 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846d780 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846d880 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846d980 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846da80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846db80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846dd80 with size: 0.000244 MiB 
00:09:52.661 element at address: 0x20002846de80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846df80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e080 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e180 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e280 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e380 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e480 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e580 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e680 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e780 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e880 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846e980 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f080 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f180 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f280 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f380 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f480 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f580 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f680 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f780 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f880 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846f980 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:09:52.661 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:09:52.661 list of memzone associated elements. 
size: 602.264404 MiB 00:09:52.661 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:09:52.662 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:52.662 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:09:52.662 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:52.662 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:09:52.662 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_112483_0 00:09:52.662 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:09:52.662 associated memzone info: size: 48.002930 MiB name: MP_evtpool_112483_0 00:09:52.662 element at address: 0x200003fff340 with size: 48.003113 MiB 00:09:52.662 associated memzone info: size: 48.002930 MiB name: MP_msgpool_112483_0 00:09:52.662 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:09:52.662 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:52.662 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:09:52.662 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:52.662 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:09:52.662 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_112483 00:09:52.662 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:09:52.662 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_112483 00:09:52.662 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:09:52.662 associated memzone info: size: 1.007996 MiB name: MP_evtpool_112483 00:09:52.662 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:09:52.662 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:52.662 element at address: 0x200019abc780 with size: 1.008179 MiB 00:09:52.662 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:52.662 element at address: 0x200018efde00 with size: 1.008179 MiB 00:09:52.662 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:52.662 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:09:52.662 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:52.662 element at address: 0x200003eff100 with size: 1.000549 MiB 00:09:52.662 associated memzone info: size: 1.000366 MiB name: RG_ring_0_112483 00:09:52.662 element at address: 0x200003affb80 with size: 1.000549 MiB 00:09:52.662 associated memzone info: size: 1.000366 MiB name: RG_ring_1_112483 00:09:52.662 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:09:52.662 associated memzone info: size: 1.000366 MiB name: RG_ring_4_112483 00:09:52.662 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:09:52.662 associated memzone info: size: 1.000366 MiB name: RG_ring_5_112483 00:09:52.662 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:09:52.662 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_112483 00:09:52.662 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:09:52.662 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:52.662 element at address: 0x200013878680 with size: 0.500549 MiB 00:09:52.662 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:52.662 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:09:52.662 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:52.662 element at address: 0x200003adf740 with size: 0.125549 MiB 00:09:52.662 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_112483 00:09:52.662 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:09:52.662 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:52.662 element at address: 0x200028466840 with size: 0.023804 MiB 00:09:52.662 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:52.662 element at address: 0x200003adb500 with size: 0.016174 MiB 00:09:52.662 associated memzone info: size: 0.015991 MiB name: RG_ring_3_112483 00:09:52.662 element at address: 0x20002846c9c0 with size: 0.002502 MiB 00:09:52.662 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:52.662 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:09:52.662 associated memzone info: size: 0.000183 MiB name: MP_msgpool_112483 00:09:52.662 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:09:52.662 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_112483 00:09:52.662 element at address: 0x20002846d500 with size: 0.000366 MiB 00:09:52.662 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:52.662 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:52.662 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 112483 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 112483 ']' 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 112483 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112483 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112483' 00:09:52.662 killing process with pid 112483 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 112483 00:09:52.662 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 112483 00:09:55.194 00:09:55.194 real 0m3.679s 00:09:55.194 user 0m3.702s 00:09:55.194 sys 0m0.565s 00:09:55.194 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:55.194 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:55.194 ************************************ 00:09:55.194 END TEST dpdk_mem_utility 00:09:55.194 ************************************ 00:09:55.194 11:21:29 -- common/autotest_common.sh@1142 -- # return 0 00:09:55.194 11:21:29 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:55.194 11:21:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:55.194 11:21:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.194 11:21:29 -- common/autotest_common.sh@10 -- # set +x 00:09:55.194 ************************************ 00:09:55.194 START TEST event 00:09:55.194 ************************************ 00:09:55.194 11:21:29 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:55.194 * Looking for test storage... 
00:09:55.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:55.194 11:21:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:55.194 11:21:29 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:55.194 11:21:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:55.194 11:21:29 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:09:55.194 11:21:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:55.194 11:21:29 event -- common/autotest_common.sh@10 -- # set +x 00:09:55.194 ************************************ 00:09:55.194 START TEST event_perf 00:09:55.194 ************************************ 00:09:55.194 11:21:29 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:55.194 Running I/O for 1 seconds...[2024-07-13 11:21:29.656311] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:55.194 [2024-07-13 11:21:29.656679] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112590 ] 00:09:55.194 [2024-07-13 11:21:29.847978] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.452 [2024-07-13 11:21:30.124740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.452 [2024-07-13 11:21:30.124948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.452 [2024-07-13 11:21:30.125570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.452 [2024-07-13 11:21:30.125567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.823 Running I/O for 1 seconds... 00:09:56.823 lcore 0: 106813 00:09:56.823 lcore 1: 106813 00:09:56.823 lcore 2: 106815 00:09:56.823 lcore 3: 106815 00:09:56.823 done. 00:09:56.823 00:09:56.823 real 0m1.898s 00:09:56.823 user 0m4.604s 00:09:56.823 sys 0m0.177s 00:09:56.823 11:21:31 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.823 ************************************ 00:09:56.823 11:21:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:56.823 END TEST event_perf 00:09:56.823 ************************************ 00:09:56.823 11:21:31 event -- common/autotest_common.sh@1142 -- # return 0 00:09:56.823 11:21:31 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:56.823 11:21:31 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:56.823 11:21:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.823 11:21:31 event -- common/autotest_common.sh@10 -- # set +x 00:09:56.823 ************************************ 00:09:56.823 START TEST event_reactor 00:09:56.823 ************************************ 00:09:56.823 11:21:31 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:57.080 [2024-07-13 11:21:31.595117] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:57.081 [2024-07-13 11:21:31.595476] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112661 ] 00:09:57.081 [2024-07-13 11:21:31.748965] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.339 [2024-07-13 11:21:31.943613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.712 test_start 00:09:58.712 oneshot 00:09:58.712 tick 100 00:09:58.712 tick 100 00:09:58.712 tick 250 00:09:58.712 tick 100 00:09:58.712 tick 100 00:09:58.712 tick 250 00:09:58.712 tick 100 00:09:58.712 tick 500 00:09:58.712 tick 100 00:09:58.712 tick 100 00:09:58.712 tick 250 00:09:58.712 tick 100 00:09:58.712 tick 100 00:09:58.712 test_end 00:09:58.712 00:09:58.712 real 0m1.859s 00:09:58.712 user 0m1.627s 00:09:58.712 sys 0m0.129s 00:09:58.712 11:21:33 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:58.712 11:21:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:58.712 ************************************ 00:09:58.712 END TEST event_reactor 00:09:58.712 ************************************ 00:09:58.978 11:21:33 event -- common/autotest_common.sh@1142 -- # return 0 00:09:58.978 11:21:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:58.978 11:21:33 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:58.978 11:21:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:58.978 11:21:33 event -- common/autotest_common.sh@10 -- # set +x 00:09:58.978 ************************************ 00:09:58.978 START TEST event_reactor_perf 00:09:58.979 ************************************ 00:09:58.979 11:21:33 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:58.979 [2024-07-13 11:21:33.517891] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:09:58.979 [2024-07-13 11:21:33.518264] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112699 ] 00:09:58.979 [2024-07-13 11:21:33.693374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.264 [2024-07-13 11:21:33.968909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.643 test_start 00:10:00.643 test_end 00:10:00.643 Performance: 376608 events per second 00:10:00.643 00:10:00.643 real 0m1.847s 00:10:00.643 user 0m1.596s 00:10:00.643 sys 0m0.148s 00:10:00.643 11:21:35 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.643 11:21:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:00.643 ************************************ 00:10:00.643 END TEST event_reactor_perf 00:10:00.643 ************************************ 00:10:00.643 11:21:35 event -- common/autotest_common.sh@1142 -- # return 0 00:10:00.643 11:21:35 event -- event/event.sh@49 -- # uname -s 00:10:00.643 11:21:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:00.643 11:21:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:00.643 11:21:35 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:00.644 11:21:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.644 11:21:35 event -- common/autotest_common.sh@10 -- # set +x 00:10:00.644 ************************************ 00:10:00.644 START TEST event_scheduler 00:10:00.644 ************************************ 00:10:00.644 11:21:35 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:00.902 * Looking for test storage... 00:10:00.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:00.902 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:00.902 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=112777 00:10:00.902 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:00.902 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:00.902 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 112777 00:10:00.902 11:21:35 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 112777 ']' 00:10:00.902 11:21:35 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.902 11:21:35 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.902 11:21:35 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:00.902 11:21:35 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.902 11:21:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:00.902 [2024-07-13 11:21:35.529797] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:00.902 [2024-07-13 11:21:35.530235] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112777 ] 00:10:01.161 [2024-07-13 11:21:35.717694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.419 [2024-07-13 11:21:35.940438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.419 [2024-07-13 11:21:35.940600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.419 [2024-07-13 11:21:35.940691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.419 [2024-07-13 11:21:35.940692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.987 11:21:36 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.987 11:21:36 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:10:01.987 11:21:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:01.987 11:21:36 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.987 11:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:01.987 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:01.987 POWER: Cannot set governor of lcore 0 to userspace 00:10:01.987 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:01.987 POWER: Cannot set governor of lcore 0 to performance 00:10:01.987 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:01.987 POWER: Cannot set governor of lcore 0 to userspace 00:10:01.987 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:01.987 POWER: Cannot set governor of lcore 0 to userspace 00:10:01.987 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:10:01.987 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:01.987 POWER: Unable to set Power Management Environment for lcore 0 00:10:01.987 [2024-07-13 11:21:36.510989] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:10:01.987 [2024-07-13 11:21:36.511179] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:10:01.987 [2024-07-13 11:21:36.511285] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:10:01.987 [2024-07-13 11:21:36.511388] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:01.987 [2024-07-13 11:21:36.511563] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:01.987 [2024-07-13 11:21:36.511620] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:01.987 11:21:36 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.987 11:21:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:01.987 11:21:36 event.event_scheduler -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.987 11:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 [2024-07-13 11:21:36.807488] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:02.246 11:21:36 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:02.246 11:21:36 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:02.246 11:21:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 ************************************ 00:10:02.246 START TEST scheduler_create_thread 00:10:02.246 ************************************ 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 2 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 3 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 4 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 5 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 6 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 7 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 8 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 9 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 10 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # 
set +x 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.246 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:03.181 11:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:03.181 11:21:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:03.181 11:21:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:03.181 11:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:03.181 11:21:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:04.557 11:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:04.557 00:10:04.557 real 0m2.138s 00:10:04.557 user 0m0.007s 00:10:04.557 sys 0m0.003s 00:10:04.557 11:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.557 ************************************ 00:10:04.557 END TEST scheduler_create_thread 00:10:04.557 ************************************ 00:10:04.557 11:21:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:04.557 11:21:38 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:10:04.557 11:21:38 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:04.557 11:21:38 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 112777 00:10:04.557 11:21:38 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 112777 ']' 00:10:04.557 11:21:38 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 112777 00:10:04.557 11:21:38 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:10:04.557 11:21:39 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:04.557 11:21:39 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112777 00:10:04.557 11:21:39 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:10:04.557 11:21:39 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:10:04.557 killing process with pid 112777 00:10:04.557 11:21:39 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112777' 00:10:04.557 11:21:39 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 112777 00:10:04.557 11:21:39 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 112777 00:10:04.816 [2024-07-13 11:21:39.439508] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
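For reference, the scheduler exercise traced above can be reduced to a short manual sequence. This is a minimal sketch built only from the RPC invocations visible in this log; it assumes the scheduler test app has been started with --wait-for-rpc exactly as shown, and that rpc_cmd stands in for scripts/rpc.py talking to /var/tmp/spdk.sock (an assumption of the sketch, not something the log states):
  # start the test app and wait for its RPC socket, as scheduler.sh does above
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # pick the dynamic scheduler, then let framework initialization finish
  rpc_cmd framework_set_scheduler dynamic
  rpc_cmd framework_start_init
  # create a pinned thread with 0% activity, then raise it to 50% (scheduler_plugin RPCs)
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
  # create a fully active thread and delete it again
  del_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$del_id"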
00:10:06.192 ************************************ 00:10:06.192 END TEST event_scheduler 00:10:06.192 ************************************ 00:10:06.192 00:10:06.192 real 0m5.130s 00:10:06.192 user 0m8.556s 00:10:06.192 sys 0m0.464s 00:10:06.192 11:21:40 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.192 11:21:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 11:21:40 event -- common/autotest_common.sh@1142 -- # return 0 00:10:06.192 11:21:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:06.192 11:21:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:06.192 11:21:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:06.192 11:21:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.192 11:21:40 event -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 ************************************ 00:10:06.192 START TEST app_repeat 00:10:06.192 ************************************ 00:10:06.192 11:21:40 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=112900 00:10:06.192 Process app_repeat pid: 112900 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112900' 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:06.192 spdk_app_start Round 0 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:06.192 11:21:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112900 /var/tmp/spdk-nbd.sock 00:10:06.192 11:21:40 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 112900 ']' 00:10:06.192 11:21:40 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:06.192 11:21:40 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:06.192 11:21:40 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:06.192 11:21:40 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.192 11:21:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:06.192 [2024-07-13 11:21:40.633546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:06.192 [2024-07-13 11:21:40.633795] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112900 ] 00:10:06.192 [2024-07-13 11:21:40.810012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.450 [2024-07-13 11:21:40.997971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.450 [2024-07-13 11:21:40.997983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.016 11:21:41 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:07.016 11:21:41 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:07.016 11:21:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:07.016 Malloc0 00:10:07.274 11:21:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:07.274 Malloc1 00:10:07.274 11:21:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:07.274 11:21:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.274 11:21:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:07.532 /dev/nbd0 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:07.532 11:21:42 
event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:07.532 1+0 records in 00:10:07.532 1+0 records out 00:10:07.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398662 s, 10.3 MB/s 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:07.532 11:21:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:07.532 11:21:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:07.819 /dev/nbd1 00:10:07.819 11:21:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:07.819 11:21:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:07.819 1+0 records in 00:10:07.819 1+0 records out 00:10:07.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289952 s, 14.1 MB/s 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:07.819 11:21:42 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:07.819 11:21:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:07.819 11:21:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:07.819 11:21:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:07.819 11:21:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:07.819 11:21:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:08.077 { 00:10:08.077 "nbd_device": "/dev/nbd0", 00:10:08.077 "bdev_name": "Malloc0" 00:10:08.077 }, 00:10:08.077 { 00:10:08.077 "nbd_device": "/dev/nbd1", 00:10:08.077 "bdev_name": "Malloc1" 00:10:08.077 } 00:10:08.077 ]' 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:08.077 { 00:10:08.077 "nbd_device": "/dev/nbd0", 00:10:08.077 "bdev_name": "Malloc0" 00:10:08.077 }, 00:10:08.077 { 00:10:08.077 "nbd_device": "/dev/nbd1", 00:10:08.077 "bdev_name": "Malloc1" 00:10:08.077 } 00:10:08.077 ]' 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:08.077 /dev/nbd1' 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:08.077 /dev/nbd1' 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:08.077 256+0 records in 00:10:08.077 256+0 records out 00:10:08.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00362177 s, 290 MB/s 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:08.077 11:21:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:08.334 256+0 records in 00:10:08.334 256+0 records out 00:10:08.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263522 s, 39.8 MB/s 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:08.334 256+0 records in 00:10:08.334 256+0 records out 00:10:08.334 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316771 s, 33.1 MB/s 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:08.334 11:21:42 event.app_repeat -- 
bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:08.334 11:21:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:08.593 11:21:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:08.851 11:21:43 
event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:08.851 11:21:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:09.109 11:21:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:09.109 11:21:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:09.676 11:21:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:10.609 [2024-07-13 11:21:45.267849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:10.868 [2024-07-13 11:21:45.438260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.868 [2024-07-13 11:21:45.438264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.127 [2024-07-13 11:21:45.616836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:11.127 [2024-07-13 11:21:45.616933] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:12.503 11:21:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:12.503 11:21:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:12.503 spdk_app_start Round 1 00:10:12.503 11:21:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112900 /var/tmp/spdk-nbd.sock 00:10:12.503 11:21:47 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 112900 ']' 00:10:12.503 11:21:47 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:12.503 11:21:47 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:12.503 11:21:47 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:12.503 11:21:47 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.503 11:21:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:12.761 11:21:47 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.761 11:21:47 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:12.761 11:21:47 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:13.019 Malloc0 00:10:13.019 11:21:47 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:13.278 Malloc1 00:10:13.536 11:21:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:13.536 /dev/nbd0 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:13.536 1+0 records in 00:10:13.536 1+0 records out 00:10:13.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541984 s, 7.6 MB/s 
00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:13.536 11:21:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:13.536 11:21:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:13.794 /dev/nbd1 00:10:13.794 11:21:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:13.794 11:21:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:13.794 1+0 records in 00:10:13.794 1+0 records out 00:10:13.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245191 s, 16.7 MB/s 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:13.794 11:21:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:13.794 11:21:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:13.794 11:21:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:13.794 11:21:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:13.794 11:21:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:13.794 11:21:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:14.051 11:21:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:14.051 { 00:10:14.051 "nbd_device": "/dev/nbd0", 00:10:14.051 "bdev_name": "Malloc0" 00:10:14.051 }, 00:10:14.051 { 00:10:14.051 "nbd_device": "/dev/nbd1", 00:10:14.051 "bdev_name": "Malloc1" 00:10:14.051 } 00:10:14.051 ]' 00:10:14.051 11:21:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:10:14.051 { 00:10:14.051 "nbd_device": "/dev/nbd0", 00:10:14.051 "bdev_name": "Malloc0" 00:10:14.051 }, 00:10:14.051 { 00:10:14.052 "nbd_device": "/dev/nbd1", 00:10:14.052 "bdev_name": "Malloc1" 00:10:14.052 } 00:10:14.052 ]' 00:10:14.052 11:21:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:14.310 /dev/nbd1' 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:14.310 /dev/nbd1' 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:14.310 256+0 records in 00:10:14.310 256+0 records out 00:10:14.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00772717 s, 136 MB/s 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:14.310 256+0 records in 00:10:14.310 256+0 records out 00:10:14.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264738 s, 39.6 MB/s 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:14.310 256+0 records in 00:10:14.310 256+0 records out 00:10:14.310 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0341911 s, 30.7 MB/s 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.310 11:21:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:14.570 11:21:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:14.828 11:21:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:14.828 11:21:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:14.828 11:21:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:14.828 11:21:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:14.828 11:21:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:14.828 11:21:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:14.828 11:21:49 event.app_repeat -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:10:15.086 11:21:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:10:15.086 11:21:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:15.086 11:21:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:15.086 11:21:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:15.086 11:21:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:15.086 11:21:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:15.086 11:21:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:15.086 11:21:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:15.345 11:21:49 event.app_repeat -- 
bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:15.345 11:21:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:15.345 11:21:49 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:15.912 11:21:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:16.847 [2024-07-13 11:21:51.474639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:17.107 [2024-07-13 11:21:51.693220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.107 [2024-07-13 11:21:51.693228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.365 [2024-07-13 11:21:51.875335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:17.365 [2024-07-13 11:21:51.875470] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:18.756 spdk_app_start Round 2 00:10:18.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:18.756 11:21:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:18.756 11:21:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:18.756 11:21:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112900 /var/tmp/spdk-nbd.sock 00:10:18.756 11:21:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 112900 ']' 00:10:18.756 11:21:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:18.756 11:21:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:18.756 11:21:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
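waitforlisten, invoked above for pid 112900, simply polls until the application answers on its UNIX-domain RPC socket. One way to reproduce that outside the harness (the retry count and the use of rpc_get_methods as a liveness probe are assumptions, not what autotest_common.sh literally does):
sock=/var/tmp/spdk-nbd.sock
for ((i = 0; i < 100; i++)); do
    # any cheap RPC works as a probe; rpc_get_methods succeeds only once
    # the app is actually listening on the socket
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done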
00:10:18.756 11:21:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:18.756 11:21:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:19.014 11:21:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:19.014 11:21:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:19.014 11:21:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:19.272 Malloc0 00:10:19.272 11:21:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:19.532 Malloc1 00:10:19.532 11:21:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:19.532 11:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:19.790 /dev/nbd0 00:10:19.790 11:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:19.790 11:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:19.790 1+0 records in 00:10:19.790 1+0 records out 00:10:19.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038832 s, 10.5 MB/s 
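Round 2 above repeats the same setup: two malloc bdevs are created over /var/tmp/spdk-nbd.sock, exported as kernel NBD devices, and each device is probed with a 4 KiB O_DIRECT read before use. Condensed into plain rpc.py calls (sizes and paths as traced; rpc.py's positional arguments here are total size in MB and block size):
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
# two 64 MB malloc bdevs with a 4096-byte block size -> Malloc0, Malloc1
$rpc bdev_malloc_create 64 4096
$rpc bdev_malloc_create 64 4096
# expose them as kernel block devices
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1
# waitfornbd-style probe: the device must appear in /proc/partitions and a
# single 4 KiB direct read must succeed
grep -q -w nbd0 /proc/partitions
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct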
00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:19.790 11:21:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:19.790 11:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:19.790 11:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:19.790 11:21:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:20.048 /dev/nbd1 00:10:20.048 11:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:20.048 11:21:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:20.048 1+0 records in 00:10:20.048 1+0 records out 00:10:20.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495496 s, 8.3 MB/s 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:10:20.048 11:21:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:10:20.048 11:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:20.048 11:21:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:20.048 11:21:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:20.048 11:21:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.048 11:21:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:20.307 11:21:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:20.307 { 00:10:20.307 "nbd_device": "/dev/nbd0", 00:10:20.307 "bdev_name": "Malloc0" 00:10:20.307 }, 00:10:20.307 { 00:10:20.307 "nbd_device": "/dev/nbd1", 00:10:20.307 "bdev_name": "Malloc1" 00:10:20.307 } 00:10:20.307 ]' 00:10:20.307 11:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:10:20.307 { 00:10:20.307 "nbd_device": "/dev/nbd0", 00:10:20.307 "bdev_name": "Malloc0" 00:10:20.307 }, 00:10:20.307 { 00:10:20.307 "nbd_device": "/dev/nbd1", 00:10:20.307 "bdev_name": "Malloc1" 00:10:20.307 } 00:10:20.307 ]' 00:10:20.307 11:21:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:20.307 11:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:20.307 /dev/nbd1' 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:20.567 /dev/nbd1' 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:20.567 256+0 records in 00:10:20.567 256+0 records out 00:10:20.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655377 s, 160 MB/s 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:20.567 256+0 records in 00:10:20.567 256+0 records out 00:10:20.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256111 s, 40.9 MB/s 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:20.567 256+0 records in 00:10:20.567 256+0 records out 00:10:20.567 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0344397 s, 30.4 MB/s 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.567 11:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:20.827 11:21:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:21.086 11:21:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:21.087 11:21:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:21.087 11:21:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:21.087 11:21:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:21.087 11:21:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.087 11:21:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:21.345 11:21:55 event.app_repeat -- 
bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:21.345 11:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:21.345 11:21:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:21.345 11:21:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:21.345 11:21:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:21.345 11:21:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:21.345 11:21:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:21.345 11:21:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:21.345 11:21:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:21.345 11:21:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:21.345 11:21:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:21.346 11:21:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:21.346 11:21:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:21.914 11:21:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:22.850 [2024-07-13 11:21:57.507774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:23.108 [2024-07-13 11:21:57.684634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.108 [2024-07-13 11:21:57.684639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.367 [2024-07-13 11:21:57.864721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:23.367 [2024-07-13 11:21:57.864836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:24.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:24.742 11:21:59 event.app_repeat -- event/event.sh@38 -- # waitforlisten 112900 /var/tmp/spdk-nbd.sock 00:10:24.742 11:21:59 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 112900 ']' 00:10:24.742 11:21:59 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:24.742 11:21:59 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.742 11:21:59 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
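The heart of each round is the write/verify pass traced above: 1 MiB of random data is pushed through every exported NBD device and compared back byte-for-byte before the devices are detached and the app is told to exit. As a standalone sketch (device names hard-coded here; the test derives them from nbd_get_disks via jq):
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
# 256 x 4 KiB of random data as the reference pattern
dd if=/dev/urandom of=$tmp bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    # write the pattern through the NBD device, bypassing the page cache
    dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    # read it back and compare the first 1 MiB byte-for-byte
    cmp -b -n 1M $tmp $nbd
done
rm $tmp
# teardown: detach both devices, then ask the app to exit
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM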
00:10:24.742 11:21:59 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.742 11:21:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:10:25.003 11:21:59 event.app_repeat -- event/event.sh@39 -- # killprocess 112900 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 112900 ']' 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 112900 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 112900 00:10:25.003 killing process with pid 112900 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 112900' 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@967 -- # kill 112900 00:10:25.003 11:21:59 event.app_repeat -- common/autotest_common.sh@972 -- # wait 112900 00:10:26.393 spdk_app_start is called in Round 0. 00:10:26.393 Shutdown signal received, stop current app iteration 00:10:26.393 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:10:26.393 spdk_app_start is called in Round 1. 00:10:26.393 Shutdown signal received, stop current app iteration 00:10:26.393 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:10:26.393 spdk_app_start is called in Round 2. 00:10:26.393 Shutdown signal received, stop current app iteration 00:10:26.393 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:10:26.393 spdk_app_start is called in Round 3. 
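killprocess, which just tore down pid 112900, boils down to a guarded kill plus a wait so the test never leaves zombies behind; a simplified sketch (the uname/sudo special-casing visible in the trace is left out):
killprocess() {
    local pid=$1
    # bail out if the pid is empty or the process is already gone
    if [ -z "$pid" ] || ! kill -0 "$pid" 2> /dev/null; then
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # reap it so its exit status is collected
    wait "$pid" || true
}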
00:10:26.393 Shutdown signal received, stop current app iteration 00:10:26.393 11:22:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:26.393 11:22:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:26.393 00:10:26.393 real 0m20.159s 00:10:26.393 user 0m43.017s 00:10:26.393 sys 0m2.720s 00:10:26.393 11:22:00 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:26.393 ************************************ 00:10:26.393 END TEST app_repeat 00:10:26.393 ************************************ 00:10:26.393 11:22:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:26.393 11:22:00 event -- common/autotest_common.sh@1142 -- # return 0 00:10:26.393 11:22:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:26.393 11:22:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:26.393 11:22:00 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:26.393 11:22:00 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.393 11:22:00 event -- common/autotest_common.sh@10 -- # set +x 00:10:26.393 ************************************ 00:10:26.393 START TEST cpu_locks 00:10:26.393 ************************************ 00:10:26.393 11:22:00 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:26.393 * Looking for test storage... 00:10:26.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:26.393 11:22:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:26.393 11:22:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:26.393 11:22:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:26.393 11:22:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:26.393 11:22:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:26.393 11:22:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:26.393 11:22:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:26.393 ************************************ 00:10:26.393 START TEST default_locks 00:10:26.393 ************************************ 00:10:26.393 11:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:10:26.393 11:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=113461 00:10:26.393 11:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 113461 00:10:26.393 11:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 113461 ']' 00:10:26.393 11:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.393 11:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:26.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.393 11:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
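Each cpu_locks sub-test that follows uses the same skeleton: launch spdk_tgt pinned to core 0, wait for the default RPC socket, exercise the core lock, then killprocess. A condensed sketch of the launch step (binary path and the 0x1 core mask as traced; backgrounding and pid capture simplified):
# start the target pinned to core 0; holding core 0 creates a file lock
# matching /var/tmp/spdk_cpu_lock*, the glob that no_locks/locks_exist check later
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
spdk_tgt_pid=$!
# rpc.py defaults to /var/tmp/spdk.sock, so no -s/-r argument is needed here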
00:10:26.393 11:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:26.393 11:22:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:26.393 11:22:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:26.393 [2024-07-13 11:22:00.955251] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:26.394 [2024-07-13 11:22:00.955677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113461 ] 00:10:26.394 [2024-07-13 11:22:01.115241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.659 [2024-07-13 11:22:01.315999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.594 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:27.594 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:10:27.594 11:22:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 113461 00:10:27.594 11:22:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 113461 00:10:27.594 11:22:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 113461 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 113461 ']' 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 113461 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113461 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:27.852 killing process with pid 113461 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113461' 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 113461 00:10:27.852 11:22:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 113461 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 113461 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 113461 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:29.752 11:22:04 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 113461 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 113461 ']' 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:29.752 ERROR: process (pid: 113461) is no longer running 00:10:29.752 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (113461) - No such process 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:29.752 00:10:29.752 real 0m3.507s 00:10:29.752 user 0m3.491s 00:10:29.752 sys 0m0.721s 00:10:29.752 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:29.752 ************************************ 00:10:29.752 END TEST default_locks 00:10:29.753 ************************************ 00:10:29.753 11:22:04 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:29.753 11:22:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:29.753 11:22:04 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:29.753 11:22:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:29.753 11:22:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.753 11:22:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:29.753 ************************************ 00:10:29.753 START TEST default_locks_via_rpc 00:10:29.753 ************************************ 00:10:29.753 11:22:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:10:29.753 11:22:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=113565 00:10:29.753 11:22:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 113565 00:10:29.753 11:22:04 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:29.753 11:22:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 113565 ']' 00:10:29.753 11:22:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.753 11:22:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.753 11:22:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.753 11:22:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.753 11:22:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:30.011 [2024-07-13 11:22:04.535080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:30.011 [2024-07-13 11:22:04.535518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113565 ] 00:10:30.011 [2024-07-13 11:22:04.707001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.270 [2024-07-13 11:22:04.899923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 113565 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 113565 00:10:31.206 11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:31.464 
11:22:05 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 113565 00:10:31.464 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 113565 ']' 00:10:31.464 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 113565 00:10:31.464 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:10:31.464 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.464 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113565 00:10:31.464 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:31.464 killing process with pid 113565 00:10:31.464 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:31.464 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113565' 00:10:31.464 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 113565 00:10:31.465 11:22:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 113565 00:10:33.366 00:10:33.366 real 0m3.524s 00:10:33.366 user 0m3.376s 00:10:33.366 sys 0m0.767s 00:10:33.366 11:22:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.366 ************************************ 00:10:33.366 END TEST default_locks_via_rpc 00:10:33.366 ************************************ 00:10:33.366 11:22:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:33.366 11:22:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:33.366 11:22:08 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:33.366 11:22:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:33.366 11:22:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.366 11:22:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:33.366 ************************************ 00:10:33.366 START TEST non_locking_app_on_locked_coremask 00:10:33.366 ************************************ 00:10:33.366 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:10:33.366 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=113636 00:10:33.366 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 113636 /var/tmp/spdk.sock 00:10:33.366 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:33.366 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 113636 ']' 00:10:33.366 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.367 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
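The default_locks_via_rpc case that just finished toggles the core locks at runtime instead of at startup; stripped of the harness it is roughly the following, with rpc.py talking to the default /var/tmp/spdk.sock:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# release the per-core lock files while the app keeps running...
$rpc framework_disable_cpumask_locks
# ...then take them back and confirm they are really held again
$rpc framework_enable_cpumask_locks
lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock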
00:10:33.367 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.367 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.367 11:22:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:33.367 [2024-07-13 11:22:08.097895] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:33.367 [2024-07-13 11:22:08.098085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113636 ] 00:10:33.626 [2024-07-13 11:22:08.254840] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.885 [2024-07-13 11:22:08.448944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=113657 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 113657 /var/tmp/spdk2.sock 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 113657 ']' 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:34.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:34.452 11:22:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:34.711 [2024-07-13 11:22:09.272922] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:34.711 [2024-07-13 11:22:09.273395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113657 ] 00:10:34.711 [2024-07-13 11:22:09.438791] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
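non_locking_app_on_locked_coremask, starting above, runs two targets on the same core; the second one only comes up because it skips the core lock and gets its own RPC socket. The two launches, with the flags exactly as traced (backgrounding simplified):
bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
# first instance: takes the core 0 lock, RPC on the default /var/tmp/spdk.sock
$bin -m 0x1 &
# second instance: same core, but no lock and a separate socket
$bin -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &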
00:10:34.711 [2024-07-13 11:22:09.438894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.279 [2024-07-13 11:22:09.835948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.809 11:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.809 11:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:37.809 11:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 113636 00:10:37.809 11:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 113636 00:10:37.809 11:22:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 113636 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 113636 ']' 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 113636 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113636 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:37.809 killing process with pid 113636 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113636' 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 113636 00:10:37.809 11:22:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 113636 00:10:42.030 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 113657 00:10:42.031 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 113657 ']' 00:10:42.031 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 113657 00:10:42.031 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:42.031 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.031 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113657 00:10:42.031 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:42.031 killing process with pid 113657 00:10:42.031 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:42.031 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113657' 00:10:42.031 
11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 113657 00:10:42.031 11:22:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 113657 00:10:43.928 00:10:43.928 real 0m10.403s 00:10:43.928 user 0m10.716s 00:10:43.928 sys 0m1.472s 00:10:43.928 ************************************ 00:10:43.928 END TEST non_locking_app_on_locked_coremask 00:10:43.928 ************************************ 00:10:43.928 11:22:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.928 11:22:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 11:22:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:43.928 11:22:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:43.928 11:22:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:43.928 11:22:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.928 11:22:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 ************************************ 00:10:43.928 START TEST locking_app_on_unlocked_coremask 00:10:43.928 ************************************ 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=113824 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 113824 /var/tmp/spdk.sock 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 113824 ']' 00:10:43.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:43.928 11:22:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:43.928 [2024-07-13 11:22:18.579201] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:43.928 [2024-07-13 11:22:18.579461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113824 ] 00:10:44.186 [2024-07-13 11:22:18.748099] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
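locks_exist, called in each of these cases before teardown, reduces to a single lslocks pipeline; a sketch (the function name and pid argument follow the trace):
locks_exist() {
    # the running target holds file locks on /var/tmp/spdk_cpu_lock*;
    # lslocks lists locks per pid, so grep is enough to assert they exist
    lslocks -p "$1" | grep -q spdk_cpu_lock
}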
00:10:44.186 [2024-07-13 11:22:18.748166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.445 [2024-07-13 11:22:18.937264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=113845 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 113845 /var/tmp/spdk2.sock 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 113845 ']' 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:45.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:45.012 11:22:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:45.270 [2024-07-13 11:22:19.768848] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:45.270 [2024-07-13 11:22:19.769330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113845 ] 00:10:45.271 [2024-07-13 11:22:19.934865] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.839 [2024-07-13 11:22:20.352293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.373 11:22:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:48.373 11:22:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:48.373 11:22:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 113845 00:10:48.373 11:22:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 113845 00:10:48.373 11:22:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 113824 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 113824 ']' 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 113824 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113824 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113824' 00:10:48.373 killing process with pid 113824 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 113824 00:10:48.373 11:22:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 113824 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 113845 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 113845 ']' 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 113845 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 113845 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:52.561 killing process with pid 113845 00:10:52.561 11:22:27 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 113845' 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 113845 00:10:52.561 11:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 113845 00:10:54.463 00:10:54.463 real 0m10.586s 00:10:54.463 user 0m10.970s 00:10:54.463 sys 0m1.498s 00:10:54.463 11:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:54.463 11:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.463 ************************************ 00:10:54.463 END TEST locking_app_on_unlocked_coremask 00:10:54.463 ************************************ 00:10:54.463 11:22:29 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:54.463 11:22:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:54.463 11:22:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:54.463 11:22:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.463 11:22:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:54.463 ************************************ 00:10:54.463 START TEST locking_app_on_locked_coremask 00:10:54.463 ************************************ 00:10:54.463 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:10:54.463 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=114014 00:10:54.463 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 114014 /var/tmp/spdk.sock 00:10:54.463 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114014 ']' 00:10:54.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.463 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.463 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.463 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.464 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.464 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.464 11:22:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:54.722 [2024-07-13 11:22:29.222053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
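The lslocks/grep pair traced repeatedly above is the suite's locks_exist helper (cpu_locks.sh@22): a core lock counts as held only while the target process keeps a file lock whose name contains spdk_cpu_lock. A minimal standalone sketch of the same check, assuming util-linux lslocks is available and reusing a pid from the trace:

# Sketch of the locks_exist check: true only while <pid> still holds a
# lock on a /var/tmp/spdk_cpu_lock_* file (requires util-linux lslocks).
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
locks_exist 113845 && echo "core lock still held by pid 113845"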
00:10:54.722 [2024-07-13 11:22:29.222528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114014 ] 00:10:54.722 [2024-07-13 11:22:29.391199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.982 [2024-07-13 11:22:29.577108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=114035 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 114035 /var/tmp/spdk2.sock 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 114035 /var/tmp/spdk2.sock 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 114035 /var/tmp/spdk2.sock 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 114035 ']' 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.929 11:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:55.929 [2024-07-13 11:22:30.417204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
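The NOT waitforlisten block above is the negative half of locking_app_on_locked_coremask: a second spdk_tgt is pointed at the same core mask (0x1) but a separate RPC socket, and it is expected to abort because pid 114014 already holds the core 0 lock. A hedged reproduction outside the harness — binary path and flags as in the trace, a plain sleep standing in for waitforlisten:

# First target claims core 0 and takes the per-core lock.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
sleep 2   # assumed settle time; the suite polls the RPC socket instead

# Same mask, different RPC socket: expected to exit with
# "Cannot create lock on core 0, probably process <pid> has claimed it".
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock \
    || echo "expected failure: core 0 already claimed"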
00:10:55.929 [2024-07-13 11:22:30.417670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114035 ] 00:10:55.929 [2024-07-13 11:22:30.578462] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 114014 has claimed it. 00:10:55.929 [2024-07-13 11:22:30.578549] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:56.514 ERROR: process (pid: 114035) is no longer running 00:10:56.514 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (114035) - No such process 00:10:56.514 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:56.514 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:10:56.514 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:10:56.514 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:56.514 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:56.514 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:56.514 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 114014 00:10:56.514 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 114014 00:10:56.514 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 114014 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 114014 ']' 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 114014 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114014 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:56.773 killing process with pid 114014 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114014' 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 114014 00:10:56.773 11:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 114014 00:10:58.677 00:10:58.677 real 0m4.109s 00:10:58.677 user 0m4.255s 00:10:58.677 sys 0m0.762s 00:10:58.677 11:22:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:58.677 11:22:33 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:58.677 ************************************ 00:10:58.677 END TEST locking_app_on_locked_coremask 00:10:58.677 ************************************ 00:10:58.677 11:22:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:10:58.677 11:22:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:58.677 11:22:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:58.677 11:22:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.677 11:22:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:58.677 ************************************ 00:10:58.677 START TEST locking_overlapped_coremask 00:10:58.677 ************************************ 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=114113 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 114113 /var/tmp/spdk.sock 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 114113 ']' 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:58.677 11:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:58.677 [2024-07-13 11:22:33.384215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:10:58.677 [2024-07-13 11:22:33.384460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114113 ] 00:10:58.935 [2024-07-13 11:22:33.571469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:59.193 [2024-07-13 11:22:33.768764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.193 [2024-07-13 11:22:33.768916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.193 [2024-07-13 11:22:33.768916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=114136 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 114136 /var/tmp/spdk2.sock 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 114136 /var/tmp/spdk2.sock 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 114136 /var/tmp/spdk2.sock 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 114136 ']' 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:00.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.129 11:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:00.129 [2024-07-13 11:22:34.622701] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:00.129 [2024-07-13 11:22:34.623708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114136 ] 00:11:00.129 [2024-07-13 11:22:34.808915] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114113 has claimed it. 00:11:00.129 [2024-07-13 11:22:34.809002] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:11:00.696 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (114136) - No such process 00:11:00.696 ERROR: process (pid: 114136) is no longer running 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 114113 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 114113 ']' 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 114113 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114113 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114113' 00:11:00.696 killing process with pid 114113 00:11:00.696 11:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 114113 00:11:00.696 11:22:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 114113 00:11:02.599 00:11:02.599 real 0m3.972s 00:11:02.599 user 0m10.286s 00:11:02.599 sys 0m0.725s 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:11:02.599 ************************************ 00:11:02.599 END TEST locking_overlapped_coremask 00:11:02.599 ************************************ 00:11:02.599 11:22:37 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:02.599 11:22:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:11:02.599 11:22:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:02.599 11:22:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.599 11:22:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:02.599 ************************************ 00:11:02.599 START TEST locking_overlapped_coremask_via_rpc 00:11:02.599 ************************************ 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=114205 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 114205 /var/tmp/spdk.sock 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114205 ']' 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:02.599 11:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.858 [2024-07-13 11:22:37.395415] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:02.858 [2024-07-13 11:22:37.395597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114205 ] 00:11:02.858 [2024-07-13 11:22:37.559491] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
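From here on both targets are launched with --disable-cpumask-locks, which is why the usual lock acquisition is replaced by the "CPU core locks deactivated." notice and the overlapping masks (0x7 and 0x1c both cover core 2) can run side by side; locking is only re-asserted later through the framework_enable_cpumask_locks RPC. A sketch of that launch mode, flags copied from the trace and backgrounding assumed:

# With per-core locking disabled, overlapping masks may coexist.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &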
00:11:02.858 [2024-07-13 11:22:37.559548] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.116 [2024-07-13 11:22:37.746432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.116 [2024-07-13 11:22:37.746572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.116 [2024-07-13 11:22:37.746602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=114229 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 114229 /var/tmp/spdk2.sock 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114229 ']' 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:11:04.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.051 11:22:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:11:04.051 [2024-07-13 11:22:38.562903] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:04.051 [2024-07-13 11:22:38.563967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114229 ] 00:11:04.051 [2024-07-13 11:22:38.752874] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:11:04.051 [2024-07-13 11:22:38.752926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:04.621 [2024-07-13 11:22:39.126428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.621 [2024-07-13 11:22:39.139039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.621 [2024-07-13 11:22:39.139042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.520 [2024-07-13 11:22:41.255319] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 114205 has claimed it. 
00:11:06.520 request: 00:11:06.520 { 00:11:06.520 "method": "framework_enable_cpumask_locks", 00:11:06.520 "req_id": 1 00:11:06.520 } 00:11:06.520 Got JSON-RPC error response 00:11:06.520 response: 00:11:06.520 { 00:11:06.520 "code": -32603, 00:11:06.520 "message": "Failed to claim CPU core: 2" 00:11:06.520 } 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 114205 /var/tmp/spdk.sock 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114205 ']' 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.520 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.779 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.779 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:06.779 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 114229 /var/tmp/spdk2.sock 00:11:06.779 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 114229 ']' 00:11:06.779 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:11:06.779 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:11:06.779 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
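The JSON-RPC exchange just above is the core of the via_rpc variant: the first target has already claimed its cores through framework_enable_cpumask_locks, so the same RPC sent to the second target's socket comes back with error -32603 ("Failed to claim CPU core: 2"). Reproducing the two calls with SPDK's rpc.py — script path assumed to match the repo layout shown in the trace:

# First target (mask 0x7): claims its cores on demand, succeeds.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

# Second target (mask 0x1c, socket /var/tmp/spdk2.sock): core 2 is already
# locked, so this returns {"code": -32603, "message": "Failed to claim CPU core: 2"}.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks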
00:11:06.779 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.779 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.037 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.037 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:07.037 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:11:07.037 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:11:07.037 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:11:07.037 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:11:07.037 ************************************ 00:11:07.037 END TEST locking_overlapped_coremask_via_rpc 00:11:07.037 ************************************ 00:11:07.037 00:11:07.037 real 0m4.454s 00:11:07.037 user 0m1.465s 00:11:07.037 sys 0m0.216s 00:11:07.037 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:07.037 11:22:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:11:07.295 11:22:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:11:07.295 11:22:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 114205 ]] 00:11:07.295 11:22:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 114205 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114205 ']' 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114205 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114205 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114205' 00:11:07.295 killing process with pid 114205 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 114205 00:11:07.295 11:22:41 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 114205 00:11:09.822 11:22:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 114229 ]] 00:11:09.822 11:22:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 114229 00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114229 ']' 00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114229 00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114229 00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:11:09.822 killing process with pid 114229 00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114229' 00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 114229 00:11:09.822 11:22:43 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 114229 00:11:11.195 11:22:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:11.195 11:22:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:11:11.195 11:22:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 114205 ]] 00:11:11.195 11:22:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 114205 00:11:11.195 11:22:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114205 ']' 00:11:11.195 11:22:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114205 00:11:11.195 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (114205) - No such process 00:11:11.195 Process with pid 114205 is not found 00:11:11.195 11:22:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 114205 is not found' 00:11:11.195 Process with pid 114229 is not found 00:11:11.195 11:22:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 114229 ]] 00:11:11.195 11:22:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 114229 00:11:11.195 11:22:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 114229 ']' 00:11:11.195 11:22:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 114229 00:11:11.195 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (114229) - No such process 00:11:11.195 11:22:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 114229 is not found' 00:11:11.195 11:22:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:11:11.195 00:11:11.195 real 0m45.017s 00:11:11.195 user 1m17.074s 00:11:11.195 sys 0m7.198s 00:11:11.195 11:22:45 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.195 11:22:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:11:11.195 ************************************ 00:11:11.195 END TEST cpu_locks 00:11:11.195 ************************************ 00:11:11.195 11:22:45 event -- common/autotest_common.sh@1142 -- # return 0 00:11:11.195 00:11:11.195 real 1m16.328s 00:11:11.195 user 2m16.687s 00:11:11.195 sys 0m11.007s 00:11:11.195 11:22:45 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.195 ************************************ 00:11:11.195 END TEST event 00:11:11.195 11:22:45 event -- common/autotest_common.sh@10 -- # set +x 00:11:11.195 ************************************ 00:11:11.195 11:22:45 -- common/autotest_common.sh@1142 -- # return 0 00:11:11.195 11:22:45 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:11.195 11:22:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:11.195 11:22:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.195 11:22:45 -- common/autotest_common.sh@10 -- # set +x 00:11:11.195 
************************************ 00:11:11.195 START TEST thread 00:11:11.195 ************************************ 00:11:11.195 11:22:45 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:11:11.453 * Looking for test storage... 00:11:11.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:11:11.453 11:22:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:11.453 11:22:45 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:11.453 11:22:45 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.453 11:22:45 thread -- common/autotest_common.sh@10 -- # set +x 00:11:11.453 ************************************ 00:11:11.453 START TEST thread_poller_perf 00:11:11.453 ************************************ 00:11:11.453 11:22:45 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:11:11.453 [2024-07-13 11:22:46.016042] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:11.453 [2024-07-13 11:22:46.016220] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114439 ] 00:11:11.453 [2024-07-13 11:22:46.191660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.712 [2024-07-13 11:22:46.422429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.712 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:13.086 ====================================== 00:11:13.086 busy:2206923280 (cyc) 00:11:13.086 total_run_count: 391000 00:11:13.086 tsc_hz: 2200000000 (cyc) 00:11:13.086 ====================================== 00:11:13.086 poller_cost: 5644 (cyc), 2565 (nsec) 00:11:13.086 ************************************ 00:11:13.086 END TEST thread_poller_perf 00:11:13.086 ************************************ 00:11:13.086 00:11:13.086 real 0m1.795s 00:11:13.086 user 0m1.549s 00:11:13.086 sys 0m0.132s 00:11:13.086 11:22:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.086 11:22:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:13.086 11:22:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:13.086 11:22:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:13.086 11:22:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:13.086 11:22:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.086 11:22:47 thread -- common/autotest_common.sh@10 -- # set +x 00:11:13.344 ************************************ 00:11:13.344 START TEST thread_poller_perf 00:11:13.344 ************************************ 00:11:13.344 11:22:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:13.344 [2024-07-13 11:22:47.864396] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
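The poller_cost line in the ====== summary above follows from the other three figures: busy cycles divided by the run count gives cycles per poller invocation, and the TSC rate converts that to nanoseconds. A quick shell check of the arithmetic using the printed numbers (integer division, which reproduces the truncated values the tool reports):

busy=2206923280; runs=391000; tsc_hz=2200000000
cyc=$(( busy / runs ))                    # 5644 cycles per poller run
nsec=$(( cyc * 1000000000 / tsc_hz ))     # 2565 ns at 2.2 GHz
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"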
00:11:13.344 [2024-07-13 11:22:47.864630] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114484 ] 00:11:13.344 [2024-07-13 11:22:48.019045] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.603 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:13.603 [2024-07-13 11:22:48.224008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.978 ====================================== 00:11:14.978 busy:2203462704 (cyc) 00:11:14.978 total_run_count: 4926000 00:11:14.978 tsc_hz: 2200000000 (cyc) 00:11:14.978 ====================================== 00:11:14.978 poller_cost: 447 (cyc), 203 (nsec) 00:11:14.978 00:11:14.978 real 0m1.744s 00:11:14.978 user 0m1.536s 00:11:14.978 sys 0m0.108s 00:11:14.978 11:22:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.978 ************************************ 00:11:14.978 END TEST thread_poller_perf 00:11:14.978 ************************************ 00:11:14.978 11:22:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:14.978 11:22:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:14.978 11:22:49 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:14.978 11:22:49 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:14.978 11:22:49 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:14.978 11:22:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.978 11:22:49 thread -- common/autotest_common.sh@10 -- # set +x 00:11:14.978 ************************************ 00:11:14.978 START TEST thread_spdk_lock 00:11:14.978 ************************************ 00:11:14.978 11:22:49 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:14.978 [2024-07-13 11:22:49.674879] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:14.978 [2024-07-13 11:22:49.675237] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114532 ] 00:11:15.237 [2024-07-13 11:22:49.848750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:15.495 [2024-07-13 11:22:50.037486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.495 [2024-07-13 11:22:50.037485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.064 [2024-07-13 11:22:50.548583] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:16.064 [2024-07-13 11:22:50.548666] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:16.064 [2024-07-13 11:22:50.548706] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x56519b9e4b40 00:11:16.064 [2024-07-13 11:22:50.555491] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:16.064 [2024-07-13 11:22:50.555610] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:16.064 [2024-07-13 11:22:50.555643] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:16.337 Starting test contend 00:11:16.337 Worker Delay Wait us Hold us Total us 00:11:16.337 0 3 139331 190649 329981 00:11:16.337 1 5 57180 295244 352424 00:11:16.337 PASS test contend 00:11:16.337 Starting test hold_by_poller 00:11:16.337 PASS test hold_by_poller 00:11:16.337 Starting test hold_by_message 00:11:16.337 PASS test hold_by_message 00:11:16.337 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:16.337 100014 assertions passed 00:11:16.337 0 assertions failed 00:11:16.337 00:11:16.337 real 0m1.278s 00:11:16.337 user 0m1.549s 00:11:16.337 sys 0m0.143s 00:11:16.337 11:22:50 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:16.337 11:22:50 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:11:16.337 ************************************ 00:11:16.337 END TEST thread_spdk_lock 00:11:16.337 ************************************ 00:11:16.337 11:22:50 thread -- common/autotest_common.sh@1142 -- # return 0 00:11:16.337 00:11:16.337 real 0m5.052s 00:11:16.337 user 0m4.753s 00:11:16.337 sys 0m0.489s 00:11:16.338 11:22:50 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:16.338 11:22:50 thread -- common/autotest_common.sh@10 -- # set +x 00:11:16.338 ************************************ 00:11:16.338 END TEST thread 00:11:16.338 ************************************ 00:11:16.338 11:22:50 -- common/autotest_common.sh@1142 -- # return 0 00:11:16.338 11:22:50 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:16.338 11:22:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:11:16.338 11:22:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.338 11:22:50 -- common/autotest_common.sh@10 -- # set +x 00:11:16.338 ************************************ 00:11:16.338 START TEST accel 00:11:16.338 ************************************ 00:11:16.338 11:22:50 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:16.338 * Looking for test storage... 00:11:16.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:16.630 11:22:51 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:11:16.630 11:22:51 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:11:16.630 11:22:51 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:16.630 11:22:51 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=114612 00:11:16.630 11:22:51 accel -- accel/accel.sh@63 -- # waitforlisten 114612 00:11:16.630 11:22:51 accel -- common/autotest_common.sh@829 -- # '[' -z 114612 ']' 00:11:16.630 11:22:51 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.630 11:22:51 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.630 11:22:51 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.630 11:22:51 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.630 11:22:51 accel -- common/autotest_common.sh@10 -- # set +x 00:11:16.630 11:22:51 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:16.630 11:22:51 accel -- accel/accel.sh@61 -- # build_accel_config 00:11:16.630 11:22:51 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:16.630 11:22:51 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:16.630 11:22:51 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:16.630 11:22:51 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:16.630 11:22:51 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:16.630 11:22:51 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:16.630 11:22:51 accel -- accel/accel.sh@41 -- # jq -r . 00:11:16.630 [2024-07-13 11:22:51.144013] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:16.630 [2024-07-13 11:22:51.144596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114612 ] 00:11:16.630 [2024-07-13 11:22:51.302198] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.890 [2024-07-13 11:22:51.495749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@862 -- # return 0 00:11:17.828 11:22:52 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:11:17.828 11:22:52 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:11:17.828 11:22:52 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:11:17.828 11:22:52 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:11:17.828 11:22:52 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:17.828 11:22:52 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:11:17.828 11:22:52 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@10 -- # set +x 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:11:17.828 11:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:17.828 11:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:17.828 11:22:52 accel -- accel/accel.sh@75 -- # killprocess 114612 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@948 -- # '[' -z 114612 ']' 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@952 -- # kill -0 114612 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@953 -- # uname 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 114612 00:11:17.828 killing process with pid 114612 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 114612' 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@967 -- # kill 114612 00:11:17.828 11:22:52 accel -- common/autotest_common.sh@972 -- # wait 114612 00:11:19.731 11:22:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:11:19.731 11:22:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:11:19.731 11:22:54 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:19.731 11:22:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.731 11:22:54 accel -- common/autotest_common.sh@10 -- # set +x 00:11:19.731 11:22:54 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:11:19.731 11:22:54 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:11:19.731 11:22:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:11:19.731 11:22:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:19.731 11:22:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:19.731 11:22:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.731 11:22:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.731 11:22:54 accel.accel_help -- accel/accel.sh@36 
-- # [[ -n '' ]] 00:11:19.731 11:22:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:11:19.731 11:22:54 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:11:19.731 11:22:54 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:19.731 11:22:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:11:19.731 11:22:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:19.731 11:22:54 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:11:19.731 11:22:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:19.731 11:22:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.731 11:22:54 accel -- common/autotest_common.sh@10 -- # set +x 00:11:19.731 ************************************ 00:11:19.731 START TEST accel_missing_filename 00:11:19.731 ************************************ 00:11:19.731 11:22:54 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:11:19.731 11:22:54 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:11:19.731 11:22:54 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:11:19.731 11:22:54 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:19.731 11:22:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.731 11:22:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:19.731 11:22:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.731 11:22:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:11:19.731 11:22:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:11:19.731 11:22:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:11:19.731 11:22:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:19.731 11:22:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:19.731 11:22:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.731 11:22:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.731 11:22:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:19.731 11:22:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:11:19.731 11:22:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:11:19.731 [2024-07-13 11:22:54.414182] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:19.731 [2024-07-13 11:22:54.414354] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114722 ] 00:11:19.990 [2024-07-13 11:22:54.573684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.249 [2024-07-13 11:22:54.757674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.249 [2024-07-13 11:22:54.948988] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:20.817 [2024-07-13 11:22:55.381437] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:11:21.076 A filename is required. 00:11:21.076 11:22:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:11:21.076 11:22:55 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:21.076 11:22:55 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:11:21.076 11:22:55 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:11:21.076 11:22:55 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:11:21.076 ************************************ 00:11:21.076 END TEST accel_missing_filename 00:11:21.076 ************************************ 00:11:21.076 11:22:55 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:21.076 00:11:21.076 real 0m1.351s 00:11:21.076 user 0m1.118s 00:11:21.076 sys 0m0.191s 00:11:21.076 11:22:55 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:21.076 11:22:55 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:11:21.076 11:22:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:21.076 11:22:55 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:21.076 11:22:55 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:21.076 11:22:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.076 11:22:55 accel -- common/autotest_common.sh@10 -- # set +x 00:11:21.076 ************************************ 00:11:21.076 START TEST accel_compress_verify 00:11:21.076 ************************************ 00:11:21.076 11:22:55 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:21.076 11:22:55 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:11:21.076 11:22:55 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:21.076 11:22:55 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:21.076 11:22:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.076 11:22:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:21.076 11:22:55 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:21.076 11:22:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:21.076 11:22:55 accel.accel_compress_verify -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:21.076 11:22:55 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:21.076 11:22:55 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:21.076 11:22:55 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:21.076 11:22:55 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:21.076 11:22:55 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:21.076 11:22:55 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:21.076 11:22:55 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:21.076 11:22:55 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:11:21.076 [2024-07-13 11:22:55.818267] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:21.076 [2024-07-13 11:22:55.818458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114761 ] 00:11:21.335 [2024-07-13 11:22:55.969214] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.594 [2024-07-13 11:22:56.158436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.853 [2024-07-13 11:22:56.349771] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:22.112 [2024-07-13 11:22:56.780047] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:11:22.679 00:11:22.679 Compression does not support the verify option, aborting. 
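The two compress failures just above are the expected outcomes of the first pair of negative tests: accel_perf is launched once with no input file and once with result verification enabled, and the compress workload supports neither. A minimal reproduction of the two cases, assuming the example binary at the path shown in the trace (the harness additionally feeds a generated JSON config via -c /dev/fd/62, omitted here):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
  # fails at startup: "A filename is required."

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
  # aborts: compression does not support the verify option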
00:11:22.679 11:22:57 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:11:22.679 11:22:57 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.679 11:22:57 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:11:22.679 11:22:57 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:11:22.679 11:22:57 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:11:22.679 11:22:57 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.679 00:11:22.679 real 0m1.353s 00:11:22.679 user 0m1.115s 00:11:22.679 sys 0m0.197s 00:11:22.679 11:22:57 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.679 11:22:57 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:11:22.679 ************************************ 00:11:22.679 END TEST accel_compress_verify 00:11:22.679 ************************************ 00:11:22.679 11:22:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:22.679 11:22:57 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:11:22.679 11:22:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:22.679 11:22:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.679 11:22:57 accel -- common/autotest_common.sh@10 -- # set +x 00:11:22.679 ************************************ 00:11:22.679 START TEST accel_wrong_workload 00:11:22.679 ************************************ 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:11:22.679 11:22:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:11:22.679 11:22:57 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:11:22.679 11:22:57 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:22.679 11:22:57 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:22.679 11:22:57 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.679 11:22:57 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.679 11:22:57 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:22.679 11:22:57 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:11:22.679 11:22:57 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:11:22.679 Unsupported workload type: foobar 00:11:22.679 [2024-07-13 11:22:57.231218] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:11:22.679 accel_perf options: 00:11:22.679 [-h help message] 00:11:22.679 [-q queue depth per core] 00:11:22.679 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:22.679 [-T number of threads per core 00:11:22.679 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:22.679 [-t time in seconds] 00:11:22.679 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:22.679 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:22.679 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:22.679 [-l for compress/decompress workloads, name of uncompressed input file 00:11:22.679 [-S for crc32c workload, use this seed value (default 0) 00:11:22.679 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:22.679 [-f for fill workload, use this BYTE value (default 255) 00:11:22.679 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:22.679 [-y verify result if this switch is on] 00:11:22.679 [-a tasks to allocate per core (default: same value as -q)] 00:11:22.679 Can be used to spread operations across a wider range of memory. 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.679 00:11:22.679 real 0m0.069s 00:11:22.679 user 0m0.086s 00:11:22.679 sys 0m0.041s 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.679 ************************************ 00:11:22.679 END TEST accel_wrong_workload 00:11:22.679 ************************************ 00:11:22.679 11:22:57 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:11:22.679 11:22:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:22.679 11:22:57 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:11:22.679 11:22:57 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:11:22.680 11:22:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.680 11:22:57 accel -- common/autotest_common.sh@10 -- # set +x 00:11:22.680 ************************************ 00:11:22.680 START TEST accel_negative_buffers 00:11:22.680 ************************************ 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.680 11:22:57 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:11:22.680 11:22:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:11:22.680 11:22:57 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:11:22.680 11:22:57 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:22.680 11:22:57 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:22.680 11:22:57 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.680 11:22:57 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.680 11:22:57 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:22.680 11:22:57 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:11:22.680 11:22:57 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:11:22.680 -x option must be non-negative. 00:11:22.680 [2024-07-13 11:22:57.351939] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:11:22.680 accel_perf options: 00:11:22.680 [-h help message] 00:11:22.680 [-q queue depth per core] 00:11:22.680 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:22.680 [-T number of threads per core 00:11:22.680 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:22.680 [-t time in seconds] 00:11:22.680 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:22.680 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:22.680 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:22.680 [-l for compress/decompress workloads, name of uncompressed input file 00:11:22.680 [-S for crc32c workload, use this seed value (default 0) 00:11:22.680 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:22.680 [-f for fill workload, use this BYTE value (default 255) 00:11:22.680 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:22.680 [-y verify result if this switch is on] 00:11:22.680 [-a tasks to allocate per core (default: same value as -q)] 00:11:22.680 Can be used to spread operations across a wider range of memory. 
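The usage text above (printed once for the bogus workload and again for the negative buffer count) is the full option set accel_perf accepts. A few illustrative invocations built only from those options; the values here are hypothetical and not taken from this run:

  accel_perf -t 1 -w crc32c -S 32 -y    # one-second crc32c run, seed 32, verify results
  accel_perf -t 1 -w xor -y -x 2        # xor across the minimum of two source buffers
  accel_perf -t 5 -w fill -f 255 -q 64  # five-second fill of 0xff at queue depth 64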
00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.680 00:11:22.680 real 0m0.064s 00:11:22.680 user 0m0.084s 00:11:22.680 sys 0m0.036s 00:11:22.680 ************************************ 00:11:22.680 END TEST accel_negative_buffers 00:11:22.680 ************************************ 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:22.680 11:22:57 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:11:22.680 11:22:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:22.680 11:22:57 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:11:22.680 11:22:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:22.680 11:22:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.680 11:22:57 accel -- common/autotest_common.sh@10 -- # set +x 00:11:22.939 ************************************ 00:11:22.939 START TEST accel_crc32c 00:11:22.939 ************************************ 00:11:22.939 11:22:57 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:11:22.939 11:22:57 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:11:22.939 [2024-07-13 11:22:57.467716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
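The exit-status handling threaded through the four negative tests above follows a single pattern: the raw status from accel_perf is captured, anything above 128 (a signal-terminated run) is reduced by 128 (234 -> 106 and 161 -> 33 in the traces), recognized failure codes are then collapsed to 1, and the NOT wrapper succeeds only when the final status is non-zero. A rough sketch of that logic, reconstructed from the xtrace rather than copied from autotest_common.sh:

  accel_perf -t 1 -w compress   # expected to fail
  es=$?                         # e.g. 234
  if (( es > 128 )); then
      es=$(( es - 128 ))        # strip the signal offset: 234 -> 106
  fi
  es=1                          # the harness folds recognized failures down to 1
  (( !es == 0 ))                # NOT passes only if the command really failed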
00:11:22.939 [2024-07-13 11:22:57.467931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114858 ] 00:11:22.939 [2024-07-13 11:22:57.638114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.198 [2024-07-13 11:22:57.837877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:11:23.457 11:22:58 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:23.457 11:22:58 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" 
in 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:11:25.358 11:22:59 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:25.358 ************************************ 00:11:25.358 END TEST accel_crc32c 00:11:25.358 ************************************ 00:11:25.358 00:11:25.358 real 0m2.423s 00:11:25.358 user 0m2.127s 00:11:25.358 sys 0m0.218s 00:11:25.358 11:22:59 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.358 11:22:59 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:11:25.358 11:22:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:25.358 11:22:59 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:11:25.358 11:22:59 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:25.358 11:22:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.358 11:22:59 accel -- common/autotest_common.sh@10 -- # set +x 00:11:25.358 ************************************ 00:11:25.358 START TEST accel_crc32c_C2 00:11:25.358 ************************************ 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:11:25.358 11:22:59 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:11:25.358 [2024-07-13 11:22:59.943722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
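accel_crc32c, just completed above, is the first of the positive cases, and the remaining ones all share its shape: accel_test runs accel_perf for one second against 4096-byte buffers (here with -w crc32c -S 32 -y), echoes the run parameters back as the long run of 'val=' lines, and finishes by asserting that an opcode and a module were reported and that the module is the software implementation. A sketch of those final checks; the variable names are guesses, since xtrace shows only the expanded values ('[[ -n software ]]', '[[ -n crc32c ]]'):

  [[ -n $accel_module ]]             # some module reported a result
  [[ -n $accel_opc ]]                # the requested opcode was exercised
  [[ $accel_module == software ]]    # this nightly configuration expects the software path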
00:11:25.358 [2024-07-13 11:22:59.944495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114909 ] 00:11:25.616 [2024-07-13 11:23:00.113897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.616 [2024-07-13 11:23:00.323738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.874 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:25.874 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.874 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.874 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.874 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:25.874 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:25.875 11:23:00 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:27.773 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:27.774 ************************************ 00:11:27.774 END TEST accel_crc32c_C2 00:11:27.774 ************************************ 00:11:27.774 00:11:27.774 real 0m2.436s 00:11:27.774 user 0m2.146s 00:11:27.774 sys 0m0.207s 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:27.774 11:23:02 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:11:27.774 11:23:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:27.774 11:23:02 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:11:27.774 11:23:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:27.774 11:23:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:27.774 11:23:02 accel -- common/autotest_common.sh@10 -- # set +x 00:11:27.774 ************************************ 00:11:27.774 START TEST accel_copy 00:11:27.774 ************************************ 00:11:27.774 11:23:02 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:27.774 11:23:02 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:11:27.774 11:23:02 
accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:11:27.774 [2024-07-13 11:23:02.434047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:27.774 [2024-07-13 11:23:02.434249] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114967 ] 00:11:28.031 [2024-07-13 11:23:02.599155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.289 [2024-07-13 11:23:02.802875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.289 11:23:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:11:30.191 11:23:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:30.191 ************************************ 00:11:30.191 END TEST accel_copy 00:11:30.191 00:11:30.191 real 0m2.409s 00:11:30.191 user 0m2.101s 00:11:30.191 sys 0m0.222s 00:11:30.191 11:23:04 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.191 11:23:04 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:11:30.191 ************************************ 00:11:30.191 11:23:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:30.191 11:23:04 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:30.191 11:23:04 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:11:30.191 11:23:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.191 11:23:04 accel -- common/autotest_common.sh@10 -- # set +x 00:11:30.191 ************************************ 00:11:30.191 START TEST accel_fill 00:11:30.191 ************************************ 00:11:30.191 11:23:04 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:11:30.191 11:23:04 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:11:30.191 [2024-07-13 11:23:04.895209] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
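accel_copy has just finished, and accel_fill starts with workload-specific knobs: -w fill -f 128 -q 64 -a 64 -y. The -f 128 fill byte shows up as 0x80 in the echoed values below, and the -q/-a settings as the pair of 64s. Stripped of the generated -c /dev/fd/62 config the harness adds, the underlying call is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
  # one-second software fill of byte 0x80 over 4096-byte buffers, queue depth 64, 64 tasks per core, verify on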
00:11:30.191 [2024-07-13 11:23:04.895442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115038 ] 00:11:30.450 [2024-07-13 11:23:05.065089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.709 [2024-07-13 11:23:05.273751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- 
accel/accel.sh@22 -- # accel_module=software 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:30.968 11:23:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:32.871 
11:23:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:11:32.871 11:23:07 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:32.871 ************************************ 00:11:32.871 END TEST accel_fill 00:11:32.871 ************************************ 00:11:32.871 00:11:32.871 real 0m2.444s 00:11:32.871 user 0m2.182s 00:11:32.871 sys 0m0.168s 00:11:32.871 11:23:07 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.871 11:23:07 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:11:32.871 11:23:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:32.871 11:23:07 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:11:32.871 11:23:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:32.871 11:23:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.871 11:23:07 accel -- common/autotest_common.sh@10 -- # set +x 00:11:32.871 ************************************ 00:11:32.871 START TEST accel_copy_crc32c 00:11:32.871 ************************************ 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:32.871 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:32.872 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:32.872 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:11:32.872 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:11:32.872 [2024-07-13 11:23:07.385136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:32.872 [2024-07-13 11:23:07.385336] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115094 ] 00:11:32.872 [2024-07-13 11:23:07.542209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.133 [2024-07-13 11:23:07.740475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.391 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 
11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:33.392 11:23:07 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:35.291 00:11:35.291 real 0m2.401s 00:11:35.291 user 0m2.109s 00:11:35.291 sys 0m0.220s 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:35.291 ************************************ 00:11:35.291 END TEST accel_copy_crc32c 00:11:35.291 ************************************ 00:11:35.291 11:23:09 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:11:35.291 11:23:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:35.291 11:23:09 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:11:35.291 11:23:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:35.291 11:23:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.291 11:23:09 accel -- common/autotest_common.sh@10 -- # set +x 00:11:35.291 ************************************ 00:11:35.291 START TEST accel_copy_crc32c_C2 00:11:35.291 ************************************ 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:11:35.291 11:23:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:11:35.291 [2024-07-13 11:23:09.854429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:35.291 [2024-07-13 11:23:09.854647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115152 ] 00:11:35.292 [2024-07-13 11:23:10.023272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.550 [2024-07-13 11:23:10.229189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:35.811 11:23:10 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:37.739 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:37.739 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:37.739 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:37.739 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:37.739 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:37.739 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:37.739 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:37.739 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:11:37.740 ************************************ 00:11:37.740 END TEST accel_copy_crc32c_C2 00:11:37.740 ************************************ 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:37.740 00:11:37.740 real 0m2.433s 00:11:37.740 user 0m2.086s 00:11:37.740 sys 0m0.269s 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.740 11:23:12 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.740 11:23:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:37.740 11:23:12 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:11:37.740 11:23:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:37.740 11:23:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.740 11:23:12 accel -- common/autotest_common.sh@10 -- # set +x 00:11:37.740 ************************************ 00:11:37.740 START TEST accel_dualcast 00:11:37.740 ************************************ 00:11:37.740 11:23:12 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:11:37.740 11:23:12 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:11:37.740 [2024-07-13 11:23:12.341896] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:37.740 [2024-07-13 11:23:12.342118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115203 ] 00:11:37.999 [2024-07-13 11:23:12.509763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.999 [2024-07-13 11:23:12.717318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:38.262 11:23:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:11:40.165 11:23:14 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:40.165 00:11:40.165 real 0m2.431s 00:11:40.165 user 0m2.152s 00:11:40.165 sys 0m0.206s 00:11:40.165 ************************************ 00:11:40.165 END TEST accel_dualcast 00:11:40.165 ************************************ 00:11:40.165 11:23:14 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:40.165 11:23:14 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:11:40.165 11:23:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:40.165 11:23:14 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:40.165 11:23:14 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:40.165 11:23:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.165 11:23:14 accel -- common/autotest_common.sh@10 -- # set +x 00:11:40.165 ************************************ 00:11:40.165 START TEST accel_compare 00:11:40.165 ************************************ 00:11:40.165 11:23:14 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:11:40.165 11:23:14 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:11:40.165 [2024-07-13 11:23:14.815227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:40.165 [2024-07-13 11:23:14.815429] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115281 ] 00:11:40.424 [2024-07-13 11:23:14.970019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.683 [2024-07-13 11:23:15.175118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:40.683 11:23:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:11:42.585 11:23:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:42.585 00:11:42.585 real 0m2.398s 00:11:42.585 user 0m2.128s 00:11:42.585 sys 0m0.192s 00:11:42.585 ************************************ 00:11:42.585 END TEST accel_compare 00:11:42.585 ************************************ 00:11:42.586 11:23:17 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:42.586 11:23:17 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:11:42.586 11:23:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:42.586 11:23:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:42.586 11:23:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:42.586 11:23:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.586 11:23:17 accel -- common/autotest_common.sh@10 -- # set +x 00:11:42.586 ************************************ 00:11:42.586 START TEST accel_xor 00:11:42.586 ************************************ 00:11:42.586 11:23:17 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:11:42.586 11:23:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:11:42.586 [2024-07-13 11:23:17.277034] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:42.586 [2024-07-13 11:23:17.277245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115332 ] 00:11:42.845 [2024-07-13 11:23:17.442756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.104 [2024-07-13 11:23:17.638772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:43.104 11:23:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.008 11:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.008 11:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.009 11:23:19 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:45.009 ************************************ 00:11:45.009 END TEST accel_xor 00:11:45.009 ************************************ 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:45.009 00:11:45.009 real 0m2.407s 00:11:45.009 user 0m2.118s 00:11:45.009 sys 0m0.220s 00:11:45.009 11:23:19 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.009 11:23:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:45.009 11:23:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:45.009 11:23:19 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:45.009 11:23:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:45.009 11:23:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.009 11:23:19 accel -- common/autotest_common.sh@10 -- # set +x 00:11:45.009 ************************************ 00:11:45.009 START TEST accel_xor 00:11:45.009 ************************************ 00:11:45.009 11:23:19 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:11:45.009 11:23:19 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:11:45.009 [2024-07-13 11:23:19.734057] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:45.009 [2024-07-13 11:23:19.734250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115388 ] 00:11:45.268 [2024-07-13 11:23:19.902295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.527 [2024-07-13 11:23:20.098531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
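The `-c /dev/fd/62` argument on that command line shows the JSON accel configuration arriving over an inherited file descriptor (the shape bash process substitution produces) rather than as a file on disk. A rough, self-contained illustration of the same mechanism; the binary path is copied from the trace, while the config body and its exact JSON shape are placeholders, not what build_accel_config really emits:

```python
import json
import os
import subprocess

# Hypothetical values: the binary path is the one printed in the trace, the
# config body is only a placeholder for whatever build_accel_config assembles.
ACCEL_PERF = "/home/vagrant/spdk_repo/spdk/build/examples/accel_perf"
config = json.dumps({"subsystems": []}).encode()

r, w = os.pipe()
os.write(w, config)   # assumes the config fits in the pipe buffer
os.close(w)           # reader sees EOF once the config is consumed

# '-c /dev/fd/<n>' hands the JSON to the app through an inherited fd,
# mirroring the '-c /dev/fd/62' seen in the traced command line.
subprocess.run(
    [ACCEL_PERF, "-c", f"/dev/fd/{r}",
     "-t", "1", "-w", "xor", "-y", "-x", "3"],
    pass_fds=(r,),   # keep the read end open in the child
    check=True,
)
os.close(r)
```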
00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:45.786 11:23:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:47.688 11:23:22 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:47.688 11:23:22 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:47.688 ************************************ 00:11:47.688 END TEST accel_xor 00:11:47.688 ************************************ 00:11:47.688 00:11:47.688 real 0m2.419s 00:11:47.688 user 0m2.143s 00:11:47.688 sys 0m0.205s 00:11:47.688 11:23:22 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.688 11:23:22 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:47.688 11:23:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:47.689 11:23:22 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:47.689 11:23:22 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:47.689 11:23:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.689 11:23:22 accel -- common/autotest_common.sh@10 -- # set +x 00:11:47.689 ************************************ 00:11:47.689 START TEST accel_dif_verify 00:11:47.689 ************************************ 00:11:47.689 11:23:22 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:47.689 11:23:22 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:11:47.689 [2024-07-13 11:23:22.204561] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:11:47.689 [2024-07-13 11:23:22.204790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115444 ] 00:11:47.689 [2024-07-13 11:23:22.370785] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.947 [2024-07-13 11:23:22.569531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:48.205 11:23:22 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:50.105 11:23:24 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:11:50.105 11:23:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:50.105 ************************************ 00:11:50.105 END TEST accel_dif_verify 00:11:50.105 ************************************ 00:11:50.105 00:11:50.105 real 0m2.428s 00:11:50.105 user 0m2.134s 00:11:50.105 sys 0m0.226s 00:11:50.105 11:23:24 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.105 11:23:24 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:11:50.105 11:23:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:50.105 11:23:24 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:50.105 11:23:24 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:50.105 11:23:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.105 11:23:24 accel -- common/autotest_common.sh@10 -- # set +x 00:11:50.105 ************************************ 00:11:50.105 START TEST accel_dif_generate 00:11:50.105 ************************************ 00:11:50.105 11:23:24 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.105 11:23:24 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:11:50.105 11:23:24 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:11:50.105 [2024-07-13 11:23:24.679453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:50.105 [2024-07-13 11:23:24.679673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115513 ] 00:11:50.364 [2024-07-13 11:23:24.849473] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.364 [2024-07-13 11:23:25.055407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.623 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:11:50.624 11:23:25 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:50.624 11:23:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:11:52.524 11:23:27 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:52.524 00:11:52.524 real 0m2.425s 
00:11:52.524 user 0m2.128s 00:11:52.524 sys 0m0.226s 00:11:52.525 11:23:27 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:52.525 ************************************ 00:11:52.525 END TEST accel_dif_generate 00:11:52.525 ************************************ 00:11:52.525 11:23:27 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:11:52.525 11:23:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:52.525 11:23:27 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:52.525 11:23:27 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:11:52.525 11:23:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:52.525 11:23:27 accel -- common/autotest_common.sh@10 -- # set +x 00:11:52.525 ************************************ 00:11:52.525 START TEST accel_dif_generate_copy 00:11:52.525 ************************************ 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:11:52.525 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:11:52.525 [2024-07-13 11:23:27.162600] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
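The two DIF cases that finish above, accel_dif_verify and accel_dif_generate, exercise T10 protection-information handling: reading the '4096 bytes', '512 bytes' and '8 bytes' values in their traces, each 512-byte block of a 4096-byte buffer carries an 8-byte DIF (guard tag, application tag, reference tag) that is either generated or re-checked. The sketch below captures only that structure; the guard field uses a truncated CRC32 as a stand-in for the real T10 guard CRC, and it is not SPDK's implementation:

```python
import struct
import zlib

BLOCK_SIZE = 512   # '512 bytes' in the trace
DIF_SIZE = 8       # '8 bytes' in the trace
BUF_SIZE = 4096    # '4096 bytes' in the trace

def guard_tag(block: bytes) -> int:
    # Stand-in checksum; the real T10 guard is a specific CRC16, not this.
    return zlib.crc32(block) & 0xFFFF

def dif_generate(data: bytes, app_tag: int = 0, start_ref: int = 0) -> bytes:
    """Produce one 8-byte DIF per 512-byte block: guard, app tag, ref tag."""
    out = bytearray()
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        ref_tag = start_ref + i // BLOCK_SIZE
        out += struct.pack(">HHI", guard_tag(block), app_tag, ref_tag)
    return bytes(out)

def dif_verify(data: bytes, dif: bytes, app_tag: int = 0, start_ref: int = 0) -> None:
    """Recompute the fields and raise if any block's DIF does not match."""
    expected = dif_generate(data, app_tag, start_ref)
    for blk in range(len(data) // BLOCK_SIZE):
        got = dif[blk * DIF_SIZE:(blk + 1) * DIF_SIZE]
        want = expected[blk * DIF_SIZE:(blk + 1) * DIF_SIZE]
        if got != want:
            raise ValueError(f"DIF mismatch in block {blk}")

payload = bytes(range(256)) * (BUF_SIZE // 256)
pi = dif_generate(payload)
dif_verify(payload, pi)   # passes; corrupt a byte of payload and it raises
```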
00:11:52.525 [2024-07-13 11:23:27.162811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115571 ] 00:11:52.784 [2024-07-13 11:23:27.333485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.044 [2024-07-13 11:23:27.554615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:53.044 11:23:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
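The accel_dif_generate_copy case being traced here folds the data copy and the protection-information generation into a single pass over the source buffer. A self-contained variant of the previous sketch, with the same stand-in guard checksum and illustrative sizes:

```python
import struct
import zlib

BLOCK_SIZE, BUF_SIZE = 512, 4096   # '512 bytes' / '4096 bytes' in the trace

def dif_generate_copy(src: bytes, app_tag: int = 0, start_ref: int = 0):
    """Copy the data and emit one 8-byte DIF per 512-byte block in one pass."""
    dst = bytearray(len(src))
    pi = bytearray()
    for i in range(0, len(src), BLOCK_SIZE):
        block = src[i:i + BLOCK_SIZE]
        dst[i:i + BLOCK_SIZE] = block                          # the copy half
        guard = zlib.crc32(block) & 0xFFFF                     # stand-in guard
        pi += struct.pack(">HHI", guard, app_tag, start_ref + i // BLOCK_SIZE)
    return bytes(dst), bytes(pi)

data = bytes(range(256)) * (BUF_SIZE // 256)
copied, pi = dif_generate_copy(data)
assert copied == data and len(pi) == (BUF_SIZE // BLOCK_SIZE) * 8
```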
00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:54.949 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:54.950 ************************************ 00:11:54.950 END TEST accel_dif_generate_copy 00:11:54.950 ************************************ 00:11:54.950 00:11:54.950 real 0m2.435s 00:11:54.950 user 0m2.146s 00:11:54.950 sys 0m0.213s 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:54.950 11:23:29 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:11:54.950 11:23:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:54.950 11:23:29 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:11:54.950 11:23:29 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:54.950 11:23:29 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:11:54.950 11:23:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:54.950 11:23:29 accel -- common/autotest_common.sh@10 -- # set +x 00:11:54.950 ************************************ 00:11:54.950 START TEST accel_comp 00:11:54.950 ************************************ 00:11:54.950 11:23:29 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:11:54.950 11:23:29 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:11:54.950 11:23:29 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:11:54.950 [2024-07-13 11:23:29.650450] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:54.950 [2024-07-13 11:23:29.650660] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115622 ] 00:11:55.209 [2024-07-13 11:23:29.817293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.468 [2024-07-13 11:23:30.026780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.725 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:55.726 11:23:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:11:57.629 11:23:32 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:57.629 00:11:57.629 real 0m2.422s 00:11:57.629 user 0m2.157s 00:11:57.629 sys 0m0.196s 00:11:57.629 11:23:32 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:57.629 11:23:32 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:11:57.629 ************************************ 00:11:57.629 END TEST accel_comp 00:11:57.629 ************************************ 00:11:57.629 11:23:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:11:57.629 11:23:32 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:57.629 11:23:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:11:57.629 11:23:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.629 11:23:32 accel -- common/autotest_common.sh@10 -- # set +x 00:11:57.629 ************************************ 00:11:57.629 START TEST accel_decomp 00:11:57.629 ************************************ 00:11:57.629 11:23:32 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:11:57.629 11:23:32 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:11:57.629 [2024-07-13 11:23:32.120546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:57.629 [2024-07-13 11:23:32.121347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115686 ] 00:11:57.629 [2024-07-13 11:23:32.289909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.889 [2024-07-13 11:23:32.498685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:58.151 11:23:32 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:00.055 11:23:34 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:00.055 00:12:00.055 real 0m2.443s 00:12:00.055 user 0m2.154s 00:12:00.055 sys 0m0.211s 00:12:00.055 11:23:34 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.055 11:23:34 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:12:00.055 ************************************ 00:12:00.055 END TEST accel_decomp 00:12:00.055 ************************************ 00:12:00.055 11:23:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:00.055 11:23:34 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:00.055 11:23:34 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:00.055 11:23:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.055 11:23:34 accel -- common/autotest_common.sh@10 -- # set +x 00:12:00.055 ************************************ 00:12:00.055 START TEST accel_decomp_full 00:12:00.055 ************************************ 00:12:00.055 11:23:34 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:12:00.055 11:23:34 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:12:00.055 [2024-07-13 11:23:34.612269] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:00.055 [2024-07-13 11:23:34.612491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115757 ] 00:12:00.055 [2024-07-13 11:23:34.778108] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.314 [2024-07-13 11:23:34.974044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.573 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:00.574 11:23:35 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:00.574 11:23:35 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:00.574 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:00.574 11:23:35 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:02.477 11:23:36 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:12:02.477 11:23:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:12:02.477 11:23:37 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:02.477 11:23:37 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:02.477 11:23:37 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:02.477 ************************************ 00:12:02.477 END TEST accel_decomp_full 00:12:02.477 00:12:02.477 real 0m2.436s 00:12:02.477 user 0m2.179s 00:12:02.477 sys 0m0.181s 00:12:02.477 11:23:37 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.477 11:23:37 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:12:02.477 ************************************ 00:12:02.477 11:23:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:02.478 11:23:37 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:02.478 11:23:37 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:02.478 11:23:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.478 11:23:37 accel -- common/autotest_common.sh@10 -- # set +x 00:12:02.478 ************************************ 00:12:02.478 START TEST accel_decomp_mcore 00:12:02.478 ************************************ 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:02.478 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:02.478 [2024-07-13 11:23:37.114430] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:02.478 [2024-07-13 11:23:37.114734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115808 ] 00:12:02.737 [2024-07-13 11:23:37.311872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.995 [2024-07-13 11:23:37.521237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.995 [2024-07-13 11:23:37.521363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.995 [2024-07-13 11:23:37.521495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.995 [2024-07-13 11:23:37.521775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:03.254 11:23:37 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.232 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.233 11:23:39 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:05.233 ************************************ 00:12:05.233 END TEST accel_decomp_mcore 00:12:05.233 ************************************ 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:05.233 00:12:05.233 real 0m2.548s 00:12:05.233 user 0m7.374s 00:12:05.233 sys 0m0.254s 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:05.233 11:23:39 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:05.233 11:23:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:05.233 11:23:39 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:05.233 11:23:39 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:05.233 11:23:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:05.233 11:23:39 accel -- common/autotest_common.sh@10 -- # set +x 00:12:05.233 ************************************ 00:12:05.233 START TEST accel_decomp_full_mcore 00:12:05.233 ************************************ 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:05.233 11:23:39 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:12:05.233 11:23:39 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:12:05.233 [2024-07-13 11:23:39.700780] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:05.233 [2024-07-13 11:23:39.701587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115869 ] 00:12:05.233 [2024-07-13 11:23:39.889335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.490 [2024-07-13 11:23:40.114073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.490 [2024-07-13 11:23:40.114177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.490 [2024-07-13 11:23:40.114320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.490 [2024-07-13 11:23:40.114326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:05.748 11:23:40 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.748 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.749 11:23:40 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:05.749 11:23:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:07.648 00:12:07.648 real 0m2.566s 00:12:07.648 user 0m7.429s 00:12:07.648 sys 0m0.248s 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.648 11:23:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:12:07.648 ************************************ 00:12:07.648 END TEST accel_decomp_full_mcore 00:12:07.648 ************************************ 00:12:07.648 11:23:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:07.648 11:23:42 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:07.648 11:23:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:12:07.648 11:23:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.648 11:23:42 accel -- common/autotest_common.sh@10 -- # set +x 00:12:07.648 ************************************ 00:12:07.648 START TEST accel_decomp_mthread 00:12:07.648 ************************************ 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:07.648 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:07.648 [2024-07-13 11:23:42.305481] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
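For reference, the accel_perf command lines echoed by the tests in this section are collected below as a plain shell snippet. The binary and data paths are the ones from this run's vagrant VM, and -c /dev/fd/62 is the accel JSON config that accel.sh pipes in through build_accel_config (empty in this run, per the [[ -n '' ]] checks); for a standalone rerun the -c argument would need to point at a real config file. Everything else is taken verbatim from the log, so treat this as a convenience sketch rather than the canonical invocation.

# Decompress variants exercised in this section, as echoed by accel.sh's xtrace.
SPDK=/home/vagrant/spdk_repo/spdk

# accel_decomp: 1-second software decompress of test/accel/bib ('4096 bytes' per op in the trace)
$SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y

# accel_decomp_full: with -o 0 the trace reports '111250 bytes' per op (the whole bib file)
$SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0

# accel_decomp_mcore / accel_decomp_full_mcore: -m 0xf starts reactors on cores 0-3 (-c 0xf in the EAL line)
$SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -m 0xf
$SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf

# accel_decomp_mthread / accel_decomp_full_mthread (below): -T 2 matches the val=2 thread count in the trace
$SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2
$SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2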
00:12:07.648 [2024-07-13 11:23:42.305669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115936 ] 00:12:07.907 [2024-07-13 11:23:42.461362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.166 [2024-07-13 11:23:42.660637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:08.166 11:23:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:10.067 ************************************ 00:12:10.067 END TEST accel_decomp_mthread 00:12:10.067 ************************************ 00:12:10.067 00:12:10.067 real 0m2.420s 00:12:10.067 user 0m2.150s 00:12:10.067 sys 0m0.194s 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:10.067 11:23:44 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:12:10.067 11:23:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:10.067 11:23:44 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:10.067 11:23:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:10.067 11:23:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:10.067 11:23:44 accel -- common/autotest_common.sh@10 -- # set +x 00:12:10.067 ************************************ 00:12:10.067 START 
TEST accel_decomp_full_mthread 00:12:10.067 ************************************ 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:12:10.067 11:23:44 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:12:10.067 [2024-07-13 11:23:44.784730] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:10.067 [2024-07-13 11:23:44.784946] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116006 ] 00:12:10.325 [2024-07-13 11:23:44.950971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.584 [2024-07-13 11:23:45.161956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:12:10.842 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:10.843 11:23:45 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:10.843 11:23:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:12:12.745 ************************************ 00:12:12.745 END TEST accel_decomp_full_mthread 00:12:12.745 ************************************ 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:12:12.745 00:12:12.745 real 0m2.474s 00:12:12.745 user 0m2.169s 00:12:12.745 sys 0m0.212s 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:12.745 11:23:47 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
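The accel_decomp_full_mthread case that ends above is accel_perf driven with the decompress workload against the bib fixture, output verification on, and two worker threads; the full command line is recorded in the trace. A minimal sketch of repeating that run by hand, assuming the same /home/vagrant/spdk_repo checkout used in this log (the harness also feeds its generated accel JSON config in on /dev/fd/62, with no optional modules enabled in this run; that plumbing is left out here):

  SPDK=/home/vagrant/spdk_repo/spdk
  # flags copied verbatim from the run_test line recorded above
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0 -T 2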
00:12:12.745 11:23:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:12.745 11:23:47 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:12:12.745 11:23:47 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:12.745 11:23:47 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:12.745 11:23:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.745 11:23:47 accel -- common/autotest_common.sh@10 -- # set +x 00:12:12.745 11:23:47 accel -- accel/accel.sh@137 -- # build_accel_config 00:12:12.745 11:23:47 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:12:12.745 11:23:47 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:12:12.745 11:23:47 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:12:12.745 11:23:47 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:12:12.745 11:23:47 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:12:12.745 11:23:47 accel -- accel/accel.sh@40 -- # local IFS=, 00:12:12.745 11:23:47 accel -- accel/accel.sh@41 -- # jq -r . 00:12:12.745 ************************************ 00:12:12.745 START TEST accel_dif_functional_tests 00:12:12.745 ************************************ 00:12:12.745 11:23:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:12:12.745 [2024-07-13 11:23:47.352356] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:12.745 [2024-07-13 11:23:47.352777] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116057 ] 00:12:13.004 [2024-07-13 11:23:47.529762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:13.004 [2024-07-13 11:23:47.713062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.004 [2024-07-13 11:23:47.713196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.004 [2024-07-13 11:23:47.713195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:13.570 00:12:13.571 00:12:13.571 CUnit - A unit testing framework for C - Version 2.1-3 00:12:13.571 http://cunit.sourceforge.net/ 00:12:13.571 00:12:13.571 00:12:13.571 Suite: accel_dif 00:12:13.571 Test: verify: DIF generated, GUARD check ...passed 00:12:13.571 Test: verify: DIF generated, APPTAG check ...passed 00:12:13.571 Test: verify: DIF generated, REFTAG check ...passed 00:12:13.571 Test: verify: DIF not generated, GUARD check ...[2024-07-13 11:23:48.022864] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:13.571 passed 00:12:13.571 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 11:23:48.023344] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:13.571 passed 00:12:13.571 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 11:23:48.023682] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:13.571 passed 00:12:13.571 Test: verify: APPTAG correct, APPTAG check ...passed 00:12:13.571 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 11:23:48.024277] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:12:13.571 passed 00:12:13.571 Test: verify: APPTAG incorrect, 
no APPTAG check ...passed 00:12:13.571 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:12:13.571 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:12:13.571 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 11:23:48.025345] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:12:13.571 passed 00:12:13.571 Test: verify copy: DIF generated, GUARD check ...passed 00:12:13.571 Test: verify copy: DIF generated, APPTAG check ...passed 00:12:13.571 Test: verify copy: DIF generated, REFTAG check ...passed 00:12:13.571 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 11:23:48.026401] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:12:13.571 passed 00:12:13.571 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 11:23:48.026610] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:12:13.571 passed 00:12:13.571 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 11:23:48.026839] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:12:13.571 passed 00:12:13.571 Test: generate copy: DIF generated, GUARD check ...passed 00:12:13.571 Test: generate copy: DIF generated, APTTAG check ...passed 00:12:13.571 Test: generate copy: DIF generated, REFTAG check ...passed 00:12:13.571 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:12:13.571 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:12:13.571 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:12:13.571 Test: generate copy: iovecs-len validate ...[2024-07-13 11:23:48.028675] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:12:13.571 passed 00:12:13.571 Test: generate copy: buffer alignment validate ...passed 00:12:13.571 00:12:13.571 Run Summary: Type Total Ran Passed Failed Inactive 00:12:13.571 suites 1 1 n/a 0 0 00:12:13.571 tests 26 26 26 0 0 00:12:13.571 asserts 115 115 115 0 n/a 00:12:13.571 00:12:13.571 Elapsed time = 0.019 seconds 00:12:14.507 ************************************ 00:12:14.507 END TEST accel_dif_functional_tests 00:12:14.507 ************************************ 00:12:14.507 00:12:14.507 real 0m1.805s 00:12:14.507 user 0m3.406s 00:12:14.507 sys 0m0.318s 00:12:14.507 11:23:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.507 11:23:49 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:12:14.507 11:23:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:12:14.507 ************************************ 00:12:14.507 END TEST accel 00:12:14.507 ************************************ 00:12:14.507 00:12:14.507 real 0m58.111s 00:12:14.507 user 1m2.911s 00:12:14.507 sys 0m6.147s 00:12:14.507 11:23:49 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:14.507 11:23:49 accel -- common/autotest_common.sh@10 -- # set +x 00:12:14.507 11:23:49 -- common/autotest_common.sh@1142 -- # return 0 00:12:14.507 11:23:49 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:14.507 11:23:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:14.507 11:23:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:14.507 11:23:49 -- common/autotest_common.sh@10 -- # set +x 00:12:14.507 ************************************ 00:12:14.507 START TEST accel_rpc 00:12:14.507 ************************************ 00:12:14.507 11:23:49 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:12:14.507 * Looking for test storage... 00:12:14.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:12:14.507 11:23:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:14.507 11:23:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=116151 00:12:14.507 11:23:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 116151 00:12:14.507 11:23:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:12:14.507 11:23:49 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 116151 ']' 00:12:14.507 11:23:49 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.507 11:23:49 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:14.507 11:23:49 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.507 11:23:49 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:14.507 11:23:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.766 [2024-07-13 11:23:49.318962] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:14.766 [2024-07-13 11:23:49.319433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116151 ] 00:12:14.766 [2024-07-13 11:23:49.490113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.024 [2024-07-13 11:23:49.676594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.591 11:23:50 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:15.591 11:23:50 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:15.591 11:23:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:12:15.591 11:23:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:12:15.591 11:23:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:12:15.591 11:23:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:12:15.591 11:23:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:12:15.591 11:23:50 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:15.591 11:23:50 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:15.591 11:23:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.591 ************************************ 00:12:15.591 START TEST accel_assign_opcode 00:12:15.591 ************************************ 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:15.591 [2024-07-13 11:23:50.265717] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:15.591 [2024-07-13 11:23:50.273686] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.591 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:16.528 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.528 11:23:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:12:16.528 11:23:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:12:16.528 11:23:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.528 11:23:50 accel_rpc.accel_assign_opcode 
-- common/autotest_common.sh@10 -- # set +x 00:12:16.528 11:23:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:12:16.528 11:23:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.528 software 00:12:16.528 ************************************ 00:12:16.528 END TEST accel_assign_opcode 00:12:16.528 ************************************ 00:12:16.528 00:12:16.528 real 0m0.787s 00:12:16.528 user 0m0.057s 00:12:16.528 sys 0m0.011s 00:12:16.528 11:23:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:16.528 11:23:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:12:16.528 11:23:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 116151 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 116151 ']' 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 116151 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116151 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116151' 00:12:16.528 killing process with pid 116151 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@967 -- # kill 116151 00:12:16.528 11:23:51 accel_rpc -- common/autotest_common.sh@972 -- # wait 116151 00:12:18.430 ************************************ 00:12:18.430 END TEST accel_rpc 00:12:18.430 ************************************ 00:12:18.430 00:12:18.430 real 0m3.911s 00:12:18.430 user 0m3.836s 00:12:18.430 sys 0m0.608s 00:12:18.430 11:23:53 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.430 11:23:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:18.430 11:23:53 -- common/autotest_common.sh@1142 -- # return 0 00:12:18.430 11:23:53 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:18.430 11:23:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:18.430 11:23:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.430 11:23:53 -- common/autotest_common.sh@10 -- # set +x 00:12:18.430 ************************************ 00:12:18.430 START TEST app_cmdline 00:12:18.430 ************************************ 00:12:18.430 11:23:53 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:12:18.688 * Looking for test storage... 
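The accel_rpc block that closes above exercises a particular startup order: spdk_tgt is launched with --wait-for-rpc, the copy opcode is assigned to the software module over JSON-RPC, and only then is framework_start_init issued. A rough standalone equivalent of that sequence, using the same RPC names that appear in the trace (the sleep is a crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &    # target starts with subsystem init deferred
  tgt=$!
  sleep 1                                        # stand-in for waitforlisten
  "$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software   # issued before init, as in the trace
  "$SPDK/scripts/rpc.py" framework_start_init
  "$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # prints "software"
  kill "$tgt"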
00:12:18.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:18.688 11:23:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:12:18.688 11:23:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=116278 00:12:18.688 11:23:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:12:18.688 11:23:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 116278 00:12:18.688 11:23:53 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 116278 ']' 00:12:18.688 11:23:53 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.688 11:23:53 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.688 11:23:53 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.688 11:23:53 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.688 11:23:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:18.688 [2024-07-13 11:23:53.279437] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:18.688 [2024-07-13 11:23:53.279936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116278 ] 00:12:18.947 [2024-07-13 11:23:53.447998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.947 [2024-07-13 11:23:53.638089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:12:19.902 { 00:12:19.902 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:12:19.902 "fields": { 00:12:19.902 "major": 24, 00:12:19.902 "minor": 9, 00:12:19.902 "patch": 0, 00:12:19.902 "suffix": "-pre", 00:12:19.902 "commit": "719d03c6a" 00:12:19.902 } 00:12:19.902 } 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@26 -- # sort 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:12:19.902 11:23:54 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:12:19.902 11:23:54 app_cmdline 
-- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:19.902 11:23:54 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:12:20.469 request: 00:12:20.469 { 00:12:20.469 "method": "env_dpdk_get_mem_stats", 00:12:20.469 "req_id": 1 00:12:20.469 } 00:12:20.469 Got JSON-RPC error response 00:12:20.469 response: 00:12:20.469 { 00:12:20.469 "code": -32601, 00:12:20.469 "message": "Method not found" 00:12:20.469 } 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:20.469 11:23:54 app_cmdline -- app/cmdline.sh@1 -- # killprocess 116278 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 116278 ']' 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 116278 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116278 00:12:20.469 killing process with pid 116278 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116278' 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@967 -- # kill 116278 00:12:20.469 11:23:54 app_cmdline -- common/autotest_common.sh@972 -- # wait 116278 00:12:22.372 00:12:22.372 real 0m3.801s 00:12:22.372 user 0m4.099s 00:12:22.372 sys 0m0.607s 00:12:22.372 11:23:56 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.372 11:23:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:12:22.372 ************************************ 00:12:22.372 END TEST app_cmdline 00:12:22.372 ************************************ 00:12:22.372 11:23:56 -- common/autotest_common.sh@1142 -- # return 0 00:12:22.372 11:23:56 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:22.373 11:23:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:22.373 11:23:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.373 11:23:56 -- common/autotest_common.sh@10 -- # set +x 00:12:22.373 ************************************ 00:12:22.373 START TEST version 00:12:22.373 ************************************ 00:12:22.373 11:23:56 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:12:22.373 * Looking for test storage... 00:12:22.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:12:22.373 11:23:57 version -- app/version.sh@17 -- # get_header_version major 00:12:22.373 11:23:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.373 11:23:57 version -- app/version.sh@14 -- # cut -f2 00:12:22.373 11:23:57 version -- app/version.sh@14 -- # tr -d '"' 00:12:22.373 11:23:57 version -- app/version.sh@17 -- # major=24 00:12:22.373 11:23:57 version -- app/version.sh@18 -- # get_header_version minor 00:12:22.373 11:23:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.373 11:23:57 version -- app/version.sh@14 -- # cut -f2 00:12:22.373 11:23:57 version -- app/version.sh@14 -- # tr -d '"' 00:12:22.373 11:23:57 version -- app/version.sh@18 -- # minor=9 00:12:22.373 11:23:57 version -- app/version.sh@19 -- # get_header_version patch 00:12:22.373 11:23:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.373 11:23:57 version -- app/version.sh@14 -- # cut -f2 00:12:22.373 11:23:57 version -- app/version.sh@14 -- # tr -d '"' 00:12:22.373 11:23:57 version -- app/version.sh@19 -- # patch=0 00:12:22.373 11:23:57 version -- app/version.sh@20 -- # get_header_version suffix 00:12:22.373 11:23:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:12:22.373 11:23:57 version -- app/version.sh@14 -- # cut -f2 00:12:22.373 11:23:57 version -- app/version.sh@14 -- # tr -d '"' 00:12:22.373 11:23:57 version -- app/version.sh@20 -- # suffix=-pre 00:12:22.373 11:23:57 version -- app/version.sh@22 -- # version=24.9 00:12:22.373 11:23:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:12:22.373 11:23:57 version -- app/version.sh@28 -- # version=24.9rc0 00:12:22.373 11:23:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:22.373 11:23:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:12:22.632 11:23:57 version -- app/version.sh@30 -- # py_version=24.9rc0 00:12:22.632 11:23:57 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:12:22.632 ************************************ 00:12:22.632 END TEST version 00:12:22.632 ************************************ 00:12:22.632 00:12:22.632 real 0m0.155s 00:12:22.632 user 0m0.136s 00:12:22.632 sys 0m0.045s 00:12:22.632 11:23:57 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:22.632 11:23:57 version -- common/autotest_common.sh@10 -- # set +x 
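The version test that finishes above derives major, minor, patch and suffix by grepping include/spdk/version.h, then cross-checks the assembled string against the Python package. A condensed form of the same parsing, reusing the exact grep/cut/tr pipeline from the trace (only SPDK_VERSION_MINOR is shown; the other fields differ only in the macro name):

  SPDK=/home/vagrant/spdk_repo/spdk
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$SPDK/include/spdk/version.h" \
            | cut -f2 | tr -d '"')
  echo "$minor"                                   # 9 for the v24.09-pre tree used in this run
  PYTHONPATH="$SPDK/python" python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0 here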
00:12:22.632 11:23:57 -- common/autotest_common.sh@1142 -- # return 0 00:12:22.632 11:23:57 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:12:22.632 11:23:57 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:22.632 11:23:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:22.632 11:23:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.632 11:23:57 -- common/autotest_common.sh@10 -- # set +x 00:12:22.632 ************************************ 00:12:22.632 START TEST blockdev_general 00:12:22.632 ************************************ 00:12:22.632 11:23:57 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:12:22.632 * Looking for test storage... 00:12:22.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:22.632 11:23:57 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=116479 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:22.632 11:23:57 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 116479 00:12:22.632 11:23:57 blockdev_general -- 
common/autotest_common.sh@829 -- # '[' -z 116479 ']' 00:12:22.632 11:23:57 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.632 11:23:57 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:22.632 11:23:57 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.632 11:23:57 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:22.632 11:23:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:22.632 [2024-07-13 11:23:57.336238] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:22.632 [2024-07-13 11:23:57.337309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116479 ] 00:12:22.891 [2024-07-13 11:23:57.505317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.149 [2024-07-13 11:23:57.699727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.717 11:23:58 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:23.717 11:23:58 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:12:23.717 11:23:58 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:12:23.717 11:23:58 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:12:23.717 11:23:58 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:12:23.717 11:23:58 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.717 11:23:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:24.654 [2024-07-13 11:23:59.054100] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.654 [2024-07-13 11:23:59.054460] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:24.654 00:12:24.654 [2024-07-13 11:23:59.062054] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.654 [2024-07-13 11:23:59.062237] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:24.654 00:12:24.654 Malloc0 00:12:24.654 Malloc1 00:12:24.654 Malloc2 00:12:24.654 Malloc3 00:12:24.654 Malloc4 00:12:24.654 Malloc5 00:12:24.654 Malloc6 00:12:24.654 Malloc7 00:12:24.913 Malloc8 00:12:24.913 Malloc9 00:12:24.913 [2024-07-13 11:23:59.473932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:24.913 [2024-07-13 11:23:59.474172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.913 [2024-07-13 11:23:59.474247] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:12:24.913 [2024-07-13 11:23:59.474523] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.913 [2024-07-13 11:23:59.476941] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.913 [2024-07-13 11:23:59.477083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:24.913 TestPT 00:12:24.913 11:23:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.913 
11:23:59 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:12:24.913 5000+0 records in 00:12:24.913 5000+0 records out 00:12:24.913 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0276043 s, 371 MB/s 00:12:24.914 11:23:59 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:24.914 AIO0 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.914 11:23:59 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.914 11:23:59 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:12:24.914 11:23:59 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.914 11:23:59 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.914 11:23:59 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.914 11:23:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:25.176 11:23:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.176 11:23:59 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:12:25.176 11:23:59 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:12:25.176 11:23:59 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.176 11:23:59 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:12:25.176 11:23:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:25.176 11:23:59 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.176 11:23:59 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:12:25.176 11:23:59 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:12:25.177 11:23:59 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "02ba153f-f20a-4260-a28b-759c401c3441"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "02ba153f-f20a-4260-a28b-759c401c3441",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "08bcc0cb-ad5b-5407-99bd-07e010240fe9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "08bcc0cb-ad5b-5407-99bd-07e010240fe9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2fd3dbdf-a937-55ff-a1d7-c9dd4e02f59c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2fd3dbdf-a937-55ff-a1d7-c9dd4e02f59c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b20cc092-197b-5b81-99b4-84fbd5c70353"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b20cc092-197b-5b81-99b4-84fbd5c70353",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' 
"7044642c-4a34-541a-96a0-cecadfd45c34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7044642c-4a34-541a-96a0-cecadfd45c34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "d04e9be6-dcad-595d-88cd-edd049854f77"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d04e9be6-dcad-595d-88cd-edd049854f77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "9d5b36b8-ce69-5974-bb89-e5274a530ba7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9d5b36b8-ce69-5974-bb89-e5274a530ba7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e0e5dd9a-ff68-5a3a-b5f3-e1348be685c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e0e5dd9a-ff68-5a3a-b5f3-e1348be685c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e264b714-16d4-5707-b59f-c7ec963fc613"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e264b714-16d4-5707-b59f-c7ec963fc613",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0d58cf2f-57a8-5f68-ac03-29d6a960cc6d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0d58cf2f-57a8-5f68-ac03-29d6a960cc6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "46d305f3-03ea-5a17-9d70-04932d1b7a11"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "46d305f3-03ea-5a17-9d70-04932d1b7a11",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "38d83a35-7eeb-5e4a-a379-3b7ce11354d1"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "38d83a35-7eeb-5e4a-a379-3b7ce11354d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5f84f301-6bb2-4171-82da-8344fd3f8edb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5f84f301-6bb2-4171-82da-8344fd3f8edb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5f84f301-6bb2-4171-82da-8344fd3f8edb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c93751ee-c454-47e7-99ab-55c8e770ab6e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "22578e49-5a60-4732-b1e4-25b1c5e3cc7f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "3830764e-5ed0-41b7-af79-26fed7cfb289"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3830764e-5ed0-41b7-af79-26fed7cfb289",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3830764e-5ed0-41b7-af79-26fed7cfb289",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "0c32c6fa-c81e-47cf-8bb4-daa13a9b8fe6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "064d1ce9-9b3e-4f14-8a66-8f4b3ee7ec81",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b85c66e7-7491-44ca-ba7d-1a26c749063f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b85c66e7-7491-44ca-ba7d-1a26c749063f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b85c66e7-7491-44ca-ba7d-1a26c749063f",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c8502de6-6ed0-4a41-8b75-ba16131f4788",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "f15ed462-f8da-46ba-8b87-3d71f7dd8b01",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "bb1c9cee-c65d-48f4-a88a-a7ff35549595"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "bb1c9cee-c65d-48f4-a88a-a7ff35549595",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' 
"copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:25.177 11:23:59 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:12:25.177 11:23:59 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:12:25.177 11:23:59 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:12:25.177 11:23:59 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 116479 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 116479 ']' 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 116479 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116479 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116479' 00:12:25.177 killing process with pid 116479 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@967 -- # kill 116479 00:12:25.177 11:23:59 blockdev_general -- common/autotest_common.sh@972 -- # wait 116479 00:12:28.485 11:24:02 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:28.485 11:24:02 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:28.485 11:24:02 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:12:28.485 11:24:02 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.485 11:24:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:28.485 ************************************ 00:12:28.485 START TEST bdev_hello_world 00:12:28.485 ************************************ 00:12:28.485 11:24:02 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:12:28.485 [2024-07-13 11:24:02.636684] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:28.485 [2024-07-13 11:24:02.637840] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116564 ] 00:12:28.485 [2024-07-13 11:24:02.818645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.485 [2024-07-13 11:24:02.999974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.743 [2024-07-13 11:24:03.360232] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:28.743 [2024-07-13 11:24:03.360660] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:28.743 [2024-07-13 11:24:03.368146] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:28.743 [2024-07-13 11:24:03.368332] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:28.743 [2024-07-13 11:24:03.376215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:28.743 [2024-07-13 11:24:03.376417] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:28.743 [2024-07-13 11:24:03.376592] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:29.002 [2024-07-13 11:24:03.571623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:29.002 [2024-07-13 11:24:03.572032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.002 [2024-07-13 11:24:03.572118] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:29.002 [2024-07-13 11:24:03.572409] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.002 [2024-07-13 11:24:03.574747] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.002 [2024-07-13 11:24:03.574944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:29.261 [2024-07-13 11:24:03.883622] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:29.261 [2024-07-13 11:24:03.883911] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:12:29.261 [2024-07-13 11:24:03.884042] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:29.261 [2024-07-13 11:24:03.884247] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:29.261 [2024-07-13 11:24:03.884527] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:29.261 [2024-07-13 11:24:03.884731] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:29.261 [2024-07-13 11:24:03.884968] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:12:29.261 00:12:29.261 [2024-07-13 11:24:03.885199] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:31.162 ************************************ 00:12:31.162 END TEST bdev_hello_world 00:12:31.162 ************************************ 00:12:31.162 00:12:31.162 real 0m3.079s 00:12:31.162 user 0m2.450s 00:12:31.162 sys 0m0.461s 00:12:31.162 11:24:05 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:31.162 11:24:05 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 11:24:05 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:31.162 11:24:05 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:12:31.162 11:24:05 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:31.162 11:24:05 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:31.162 11:24:05 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 ************************************ 00:12:31.162 START TEST bdev_bounds 00:12:31.162 ************************************ 00:12:31.162 Process bdevio pid: 116651 00:12:31.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=116651 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 116651' 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 116651 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 116651 ']' 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.162 11:24:05 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:31.162 [2024-07-13 11:24:05.770741] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:31.162 [2024-07-13 11:24:05.771264] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116651 ] 00:12:31.421 [2024-07-13 11:24:05.950445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:31.421 [2024-07-13 11:24:06.146015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.421 [2024-07-13 11:24:06.146145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.421 [2024-07-13 11:24:06.146202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.988 [2024-07-13 11:24:06.524683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:31.988 [2024-07-13 11:24:06.525097] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:31.988 [2024-07-13 11:24:06.532619] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:31.988 [2024-07-13 11:24:06.532828] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:31.988 [2024-07-13 11:24:06.540667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:31.988 [2024-07-13 11:24:06.540876] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:31.988 [2024-07-13 11:24:06.541036] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:32.248 [2024-07-13 11:24:06.734056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:32.248 [2024-07-13 11:24:06.734462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.248 [2024-07-13 11:24:06.734554] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:32.248 [2024-07-13 11:24:06.734897] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.248 [2024-07-13 11:24:06.737705] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.248 [2024-07-13 11:24:06.737889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:32.507 11:24:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.507 11:24:07 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:12:32.507 11:24:07 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:32.507 I/O targets: 00:12:32.507 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:12:32.507 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:12:32.507 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:12:32.507 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:12:32.507 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:12:32.507 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:12:32.507 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:12:32.507 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:12:32.507 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:12:32.507 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:12:32.507 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:12:32.507 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:12:32.507 raid0: 131072 blocks of 512 bytes (64 MiB) 00:12:32.507 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:12:32.507 raid1: 65536 blocks of 512 bytes (32 MiB) 00:12:32.507 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:12:32.507 00:12:32.507 00:12:32.507 CUnit - A unit testing framework for C - Version 2.1-3 00:12:32.507 http://cunit.sourceforge.net/ 00:12:32.507 00:12:32.507 00:12:32.507 Suite: bdevio tests on: AIO0 00:12:32.507 Test: blockdev write read block ...passed 00:12:32.507 Test: blockdev write zeroes read block ...passed 00:12:32.507 Test: blockdev write zeroes read no split ...passed 00:12:32.507 Test: blockdev write zeroes read split ...passed 00:12:32.507 Test: blockdev write zeroes read split partial ...passed 00:12:32.507 Test: blockdev reset ...passed 00:12:32.507 Test: blockdev write read 8 blocks ...passed 00:12:32.507 Test: blockdev write read size > 128k ...passed 00:12:32.507 Test: blockdev write read invalid size ...passed 00:12:32.507 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:32.507 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:32.507 Test: blockdev write read max offset ...passed 00:12:32.507 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:32.507 Test: blockdev writev readv 8 blocks ...passed 00:12:32.507 Test: blockdev writev readv 30 x 1block ...passed 00:12:32.507 Test: blockdev writev readv block ...passed 00:12:32.507 Test: blockdev writev readv size > 128k ...passed 00:12:32.507 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:32.507 Test: blockdev comparev and writev ...passed 00:12:32.507 Test: blockdev nvme passthru rw ...passed 00:12:32.508 Test: blockdev nvme passthru vendor specific ...passed 00:12:32.508 Test: blockdev nvme admin passthru ...passed 00:12:32.508 Test: blockdev copy ...passed 00:12:32.508 Suite: bdevio tests on: raid1 00:12:32.508 Test: blockdev write read block ...passed 00:12:32.508 Test: blockdev write zeroes read block ...passed 00:12:32.508 Test: blockdev write zeroes read no split ...passed 00:12:32.766 Test: blockdev write zeroes read split ...passed 00:12:32.766 Test: blockdev write zeroes read split partial ...passed 00:12:32.766 Test: blockdev reset ...passed 00:12:32.766 Test: blockdev write read 8 blocks ...passed 00:12:32.766 Test: blockdev write read size > 128k ...passed 00:12:32.766 Test: blockdev write read invalid size ...passed 00:12:32.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:32.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:32.766 Test: blockdev write read max offset ...passed 00:12:32.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:32.766 Test: blockdev writev readv 8 blocks ...passed 00:12:32.766 Test: blockdev writev readv 30 x 1block ...passed 00:12:32.766 Test: blockdev writev readv block ...passed 00:12:32.766 Test: blockdev writev readv size > 128k ...passed 00:12:32.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:32.766 Test: blockdev comparev and writev ...passed 00:12:32.766 Test: blockdev nvme passthru rw ...passed 00:12:32.766 Test: blockdev nvme passthru vendor specific ...passed 00:12:32.766 Test: blockdev nvme admin passthru ...passed 00:12:32.766 Test: blockdev copy ...passed 00:12:32.766 Suite: bdevio tests on: concat0 00:12:32.766 Test: blockdev write read block ...passed 00:12:32.766 Test: blockdev write zeroes read block ...passed 00:12:32.766 Test: blockdev write zeroes read no split ...passed 00:12:32.766 Test: blockdev write zeroes read split 
...passed 00:12:32.766 Test: blockdev write zeroes read split partial ...passed 00:12:32.766 Test: blockdev reset ...passed 00:12:32.766 Test: blockdev write read 8 blocks ...passed 00:12:32.766 Test: blockdev write read size > 128k ...passed 00:12:32.766 Test: blockdev write read invalid size ...passed 00:12:32.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:32.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:32.766 Test: blockdev write read max offset ...passed 00:12:32.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:32.766 Test: blockdev writev readv 8 blocks ...passed 00:12:32.766 Test: blockdev writev readv 30 x 1block ...passed 00:12:32.766 Test: blockdev writev readv block ...passed 00:12:32.766 Test: blockdev writev readv size > 128k ...passed 00:12:32.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:32.766 Test: blockdev comparev and writev ...passed 00:12:32.766 Test: blockdev nvme passthru rw ...passed 00:12:32.766 Test: blockdev nvme passthru vendor specific ...passed 00:12:32.766 Test: blockdev nvme admin passthru ...passed 00:12:32.766 Test: blockdev copy ...passed 00:12:32.766 Suite: bdevio tests on: raid0 00:12:32.766 Test: blockdev write read block ...passed 00:12:32.766 Test: blockdev write zeroes read block ...passed 00:12:32.766 Test: blockdev write zeroes read no split ...passed 00:12:32.766 Test: blockdev write zeroes read split ...passed 00:12:32.766 Test: blockdev write zeroes read split partial ...passed 00:12:32.766 Test: blockdev reset ...passed 00:12:32.766 Test: blockdev write read 8 blocks ...passed 00:12:32.766 Test: blockdev write read size > 128k ...passed 00:12:32.766 Test: blockdev write read invalid size ...passed 00:12:32.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:32.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:32.766 Test: blockdev write read max offset ...passed 00:12:32.766 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:32.766 Test: blockdev writev readv 8 blocks ...passed 00:12:32.766 Test: blockdev writev readv 30 x 1block ...passed 00:12:32.766 Test: blockdev writev readv block ...passed 00:12:32.766 Test: blockdev writev readv size > 128k ...passed 00:12:32.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:32.766 Test: blockdev comparev and writev ...passed 00:12:32.766 Test: blockdev nvme passthru rw ...passed 00:12:32.766 Test: blockdev nvme passthru vendor specific ...passed 00:12:32.766 Test: blockdev nvme admin passthru ...passed 00:12:32.766 Test: blockdev copy ...passed 00:12:32.766 Suite: bdevio tests on: TestPT 00:12:32.766 Test: blockdev write read block ...passed 00:12:32.766 Test: blockdev write zeroes read block ...passed 00:12:32.766 Test: blockdev write zeroes read no split ...passed 00:12:32.766 Test: blockdev write zeroes read split ...passed 00:12:32.766 Test: blockdev write zeroes read split partial ...passed 00:12:32.766 Test: blockdev reset ...passed 00:12:32.766 Test: blockdev write read 8 blocks ...passed 00:12:32.766 Test: blockdev write read size > 128k ...passed 00:12:32.766 Test: blockdev write read invalid size ...passed 00:12:32.766 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:32.766 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:32.766 Test: blockdev write read max offset ...passed 00:12:32.766 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:12:32.766 Test: blockdev writev readv 8 blocks ...passed 00:12:32.766 Test: blockdev writev readv 30 x 1block ...passed 00:12:32.766 Test: blockdev writev readv block ...passed 00:12:32.766 Test: blockdev writev readv size > 128k ...passed 00:12:32.766 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:32.766 Test: blockdev comparev and writev ...passed 00:12:32.766 Test: blockdev nvme passthru rw ...passed 00:12:32.766 Test: blockdev nvme passthru vendor specific ...passed 00:12:32.766 Test: blockdev nvme admin passthru ...passed 00:12:32.766 Test: blockdev copy ...passed 00:12:32.766 Suite: bdevio tests on: Malloc2p7 00:12:32.766 Test: blockdev write read block ...passed 00:12:32.766 Test: blockdev write zeroes read block ...passed 00:12:32.766 Test: blockdev write zeroes read no split ...passed 00:12:33.025 Test: blockdev write zeroes read split ...passed 00:12:33.025 Test: blockdev write zeroes read split partial ...passed 00:12:33.025 Test: blockdev reset ...passed 00:12:33.025 Test: blockdev write read 8 blocks ...passed 00:12:33.025 Test: blockdev write read size > 128k ...passed 00:12:33.025 Test: blockdev write read invalid size ...passed 00:12:33.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.025 Test: blockdev write read max offset ...passed 00:12:33.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.025 Test: blockdev writev readv 8 blocks ...passed 00:12:33.025 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.025 Test: blockdev writev readv block ...passed 00:12:33.025 Test: blockdev writev readv size > 128k ...passed 00:12:33.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.025 Test: blockdev comparev and writev ...passed 00:12:33.025 Test: blockdev nvme passthru rw ...passed 00:12:33.025 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.025 Test: blockdev nvme admin passthru ...passed 00:12:33.025 Test: blockdev copy ...passed 00:12:33.025 Suite: bdevio tests on: Malloc2p6 00:12:33.025 Test: blockdev write read block ...passed 00:12:33.025 Test: blockdev write zeroes read block ...passed 00:12:33.025 Test: blockdev write zeroes read no split ...passed 00:12:33.025 Test: blockdev write zeroes read split ...passed 00:12:33.025 Test: blockdev write zeroes read split partial ...passed 00:12:33.025 Test: blockdev reset ...passed 00:12:33.025 Test: blockdev write read 8 blocks ...passed 00:12:33.025 Test: blockdev write read size > 128k ...passed 00:12:33.025 Test: blockdev write read invalid size ...passed 00:12:33.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.025 Test: blockdev write read max offset ...passed 00:12:33.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.025 Test: blockdev writev readv 8 blocks ...passed 00:12:33.025 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.025 Test: blockdev writev readv block ...passed 00:12:33.025 Test: blockdev writev readv size > 128k ...passed 00:12:33.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.025 Test: blockdev comparev and writev ...passed 00:12:33.025 Test: blockdev nvme passthru rw ...passed 00:12:33.025 Test: blockdev nvme passthru vendor 
specific ...passed 00:12:33.025 Test: blockdev nvme admin passthru ...passed 00:12:33.025 Test: blockdev copy ...passed 00:12:33.025 Suite: bdevio tests on: Malloc2p5 00:12:33.025 Test: blockdev write read block ...passed 00:12:33.025 Test: blockdev write zeroes read block ...passed 00:12:33.025 Test: blockdev write zeroes read no split ...passed 00:12:33.025 Test: blockdev write zeroes read split ...passed 00:12:33.025 Test: blockdev write zeroes read split partial ...passed 00:12:33.025 Test: blockdev reset ...passed 00:12:33.025 Test: blockdev write read 8 blocks ...passed 00:12:33.025 Test: blockdev write read size > 128k ...passed 00:12:33.025 Test: blockdev write read invalid size ...passed 00:12:33.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.025 Test: blockdev write read max offset ...passed 00:12:33.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.025 Test: blockdev writev readv 8 blocks ...passed 00:12:33.025 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.025 Test: blockdev writev readv block ...passed 00:12:33.025 Test: blockdev writev readv size > 128k ...passed 00:12:33.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.025 Test: blockdev comparev and writev ...passed 00:12:33.025 Test: blockdev nvme passthru rw ...passed 00:12:33.025 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.025 Test: blockdev nvme admin passthru ...passed 00:12:33.025 Test: blockdev copy ...passed 00:12:33.025 Suite: bdevio tests on: Malloc2p4 00:12:33.025 Test: blockdev write read block ...passed 00:12:33.025 Test: blockdev write zeroes read block ...passed 00:12:33.025 Test: blockdev write zeroes read no split ...passed 00:12:33.025 Test: blockdev write zeroes read split ...passed 00:12:33.025 Test: blockdev write zeroes read split partial ...passed 00:12:33.025 Test: blockdev reset ...passed 00:12:33.025 Test: blockdev write read 8 blocks ...passed 00:12:33.025 Test: blockdev write read size > 128k ...passed 00:12:33.025 Test: blockdev write read invalid size ...passed 00:12:33.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.025 Test: blockdev write read max offset ...passed 00:12:33.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.025 Test: blockdev writev readv 8 blocks ...passed 00:12:33.025 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.025 Test: blockdev writev readv block ...passed 00:12:33.025 Test: blockdev writev readv size > 128k ...passed 00:12:33.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.025 Test: blockdev comparev and writev ...passed 00:12:33.025 Test: blockdev nvme passthru rw ...passed 00:12:33.025 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.025 Test: blockdev nvme admin passthru ...passed 00:12:33.025 Test: blockdev copy ...passed 00:12:33.025 Suite: bdevio tests on: Malloc2p3 00:12:33.025 Test: blockdev write read block ...passed 00:12:33.025 Test: blockdev write zeroes read block ...passed 00:12:33.025 Test: blockdev write zeroes read no split ...passed 00:12:33.025 Test: blockdev write zeroes read split ...passed 00:12:33.025 Test: blockdev write zeroes read split partial ...passed 00:12:33.025 Test: blockdev reset ...passed 00:12:33.025 Test: 
blockdev write read 8 blocks ...passed 00:12:33.025 Test: blockdev write read size > 128k ...passed 00:12:33.025 Test: blockdev write read invalid size ...passed 00:12:33.025 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.025 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.025 Test: blockdev write read max offset ...passed 00:12:33.025 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.025 Test: blockdev writev readv 8 blocks ...passed 00:12:33.025 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.025 Test: blockdev writev readv block ...passed 00:12:33.025 Test: blockdev writev readv size > 128k ...passed 00:12:33.025 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.025 Test: blockdev comparev and writev ...passed 00:12:33.025 Test: blockdev nvme passthru rw ...passed 00:12:33.025 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.025 Test: blockdev nvme admin passthru ...passed 00:12:33.025 Test: blockdev copy ...passed 00:12:33.025 Suite: bdevio tests on: Malloc2p2 00:12:33.025 Test: blockdev write read block ...passed 00:12:33.025 Test: blockdev write zeroes read block ...passed 00:12:33.025 Test: blockdev write zeroes read no split ...passed 00:12:33.025 Test: blockdev write zeroes read split ...passed 00:12:33.284 Test: blockdev write zeroes read split partial ...passed 00:12:33.284 Test: blockdev reset ...passed 00:12:33.284 Test: blockdev write read 8 blocks ...passed 00:12:33.284 Test: blockdev write read size > 128k ...passed 00:12:33.284 Test: blockdev write read invalid size ...passed 00:12:33.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.284 Test: blockdev write read max offset ...passed 00:12:33.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.284 Test: blockdev writev readv 8 blocks ...passed 00:12:33.284 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.284 Test: blockdev writev readv block ...passed 00:12:33.284 Test: blockdev writev readv size > 128k ...passed 00:12:33.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.284 Test: blockdev comparev and writev ...passed 00:12:33.284 Test: blockdev nvme passthru rw ...passed 00:12:33.284 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.284 Test: blockdev nvme admin passthru ...passed 00:12:33.284 Test: blockdev copy ...passed 00:12:33.284 Suite: bdevio tests on: Malloc2p1 00:12:33.284 Test: blockdev write read block ...passed 00:12:33.284 Test: blockdev write zeroes read block ...passed 00:12:33.284 Test: blockdev write zeroes read no split ...passed 00:12:33.284 Test: blockdev write zeroes read split ...passed 00:12:33.284 Test: blockdev write zeroes read split partial ...passed 00:12:33.284 Test: blockdev reset ...passed 00:12:33.284 Test: blockdev write read 8 blocks ...passed 00:12:33.284 Test: blockdev write read size > 128k ...passed 00:12:33.284 Test: blockdev write read invalid size ...passed 00:12:33.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.284 Test: blockdev write read max offset ...passed 00:12:33.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.284 Test: blockdev writev readv 8 blocks ...passed 00:12:33.284 
Test: blockdev writev readv 30 x 1block ...passed 00:12:33.284 Test: blockdev writev readv block ...passed 00:12:33.284 Test: blockdev writev readv size > 128k ...passed 00:12:33.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.284 Test: blockdev comparev and writev ...passed 00:12:33.284 Test: blockdev nvme passthru rw ...passed 00:12:33.284 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.284 Test: blockdev nvme admin passthru ...passed 00:12:33.284 Test: blockdev copy ...passed 00:12:33.284 Suite: bdevio tests on: Malloc2p0 00:12:33.284 Test: blockdev write read block ...passed 00:12:33.284 Test: blockdev write zeroes read block ...passed 00:12:33.284 Test: blockdev write zeroes read no split ...passed 00:12:33.284 Test: blockdev write zeroes read split ...passed 00:12:33.284 Test: blockdev write zeroes read split partial ...passed 00:12:33.284 Test: blockdev reset ...passed 00:12:33.284 Test: blockdev write read 8 blocks ...passed 00:12:33.284 Test: blockdev write read size > 128k ...passed 00:12:33.284 Test: blockdev write read invalid size ...passed 00:12:33.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.284 Test: blockdev write read max offset ...passed 00:12:33.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.284 Test: blockdev writev readv 8 blocks ...passed 00:12:33.284 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.284 Test: blockdev writev readv block ...passed 00:12:33.284 Test: blockdev writev readv size > 128k ...passed 00:12:33.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.284 Test: blockdev comparev and writev ...passed 00:12:33.284 Test: blockdev nvme passthru rw ...passed 00:12:33.285 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.285 Test: blockdev nvme admin passthru ...passed 00:12:33.285 Test: blockdev copy ...passed 00:12:33.285 Suite: bdevio tests on: Malloc1p1 00:12:33.285 Test: blockdev write read block ...passed 00:12:33.285 Test: blockdev write zeroes read block ...passed 00:12:33.285 Test: blockdev write zeroes read no split ...passed 00:12:33.285 Test: blockdev write zeroes read split ...passed 00:12:33.285 Test: blockdev write zeroes read split partial ...passed 00:12:33.285 Test: blockdev reset ...passed 00:12:33.285 Test: blockdev write read 8 blocks ...passed 00:12:33.285 Test: blockdev write read size > 128k ...passed 00:12:33.285 Test: blockdev write read invalid size ...passed 00:12:33.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.285 Test: blockdev write read max offset ...passed 00:12:33.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.285 Test: blockdev writev readv 8 blocks ...passed 00:12:33.285 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.285 Test: blockdev writev readv block ...passed 00:12:33.285 Test: blockdev writev readv size > 128k ...passed 00:12:33.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.285 Test: blockdev comparev and writev ...passed 00:12:33.285 Test: blockdev nvme passthru rw ...passed 00:12:33.285 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.285 Test: blockdev nvme admin passthru ...passed 00:12:33.285 Test: blockdev copy ...passed 00:12:33.285 Suite: 
bdevio tests on: Malloc1p0 00:12:33.285 Test: blockdev write read block ...passed 00:12:33.285 Test: blockdev write zeroes read block ...passed 00:12:33.285 Test: blockdev write zeroes read no split ...passed 00:12:33.285 Test: blockdev write zeroes read split ...passed 00:12:33.285 Test: blockdev write zeroes read split partial ...passed 00:12:33.285 Test: blockdev reset ...passed 00:12:33.285 Test: blockdev write read 8 blocks ...passed 00:12:33.285 Test: blockdev write read size > 128k ...passed 00:12:33.285 Test: blockdev write read invalid size ...passed 00:12:33.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.285 Test: blockdev write read max offset ...passed 00:12:33.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.285 Test: blockdev writev readv 8 blocks ...passed 00:12:33.285 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.285 Test: blockdev writev readv block ...passed 00:12:33.285 Test: blockdev writev readv size > 128k ...passed 00:12:33.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.285 Test: blockdev comparev and writev ...passed 00:12:33.285 Test: blockdev nvme passthru rw ...passed 00:12:33.285 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.285 Test: blockdev nvme admin passthru ...passed 00:12:33.285 Test: blockdev copy ...passed 00:12:33.285 Suite: bdevio tests on: Malloc0 00:12:33.285 Test: blockdev write read block ...passed 00:12:33.285 Test: blockdev write zeroes read block ...passed 00:12:33.285 Test: blockdev write zeroes read no split ...passed 00:12:33.285 Test: blockdev write zeroes read split ...passed 00:12:33.285 Test: blockdev write zeroes read split partial ...passed 00:12:33.285 Test: blockdev reset ...passed 00:12:33.285 Test: blockdev write read 8 blocks ...passed 00:12:33.285 Test: blockdev write read size > 128k ...passed 00:12:33.285 Test: blockdev write read invalid size ...passed 00:12:33.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:33.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:33.285 Test: blockdev write read max offset ...passed 00:12:33.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:33.285 Test: blockdev writev readv 8 blocks ...passed 00:12:33.285 Test: blockdev writev readv 30 x 1block ...passed 00:12:33.285 Test: blockdev writev readv block ...passed 00:12:33.285 Test: blockdev writev readv size > 128k ...passed 00:12:33.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:33.285 Test: blockdev comparev and writev ...passed 00:12:33.285 Test: blockdev nvme passthru rw ...passed 00:12:33.285 Test: blockdev nvme passthru vendor specific ...passed 00:12:33.285 Test: blockdev nvme admin passthru ...passed 00:12:33.285 Test: blockdev copy ...passed 00:12:33.285 00:12:33.285 Run Summary: Type Total Ran Passed Failed Inactive 00:12:33.285 suites 16 16 n/a 0 0 00:12:33.285 tests 368 368 368 0 0 00:12:33.285 asserts 2224 2224 2224 0 n/a 00:12:33.285 00:12:33.285 Elapsed time = 2.373 seconds 00:12:33.544 0 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 116651 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 116651 ']' 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 116651 
00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116651 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:33.544 killing process with pid 116651 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116651' 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 116651 00:12:33.544 11:24:08 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 116651 00:12:35.449 ************************************ 00:12:35.449 END TEST bdev_bounds 00:12:35.449 ************************************ 00:12:35.449 11:24:09 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:12:35.449 00:12:35.449 real 0m4.051s 00:12:35.449 user 0m10.041s 00:12:35.449 sys 0m0.605s 00:12:35.449 11:24:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.449 11:24:09 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:12:35.449 11:24:09 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:35.449 11:24:09 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:35.449 11:24:09 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:35.449 11:24:09 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.449 11:24:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:35.449 ************************************ 00:12:35.449 START TEST bdev_nbd 00:12:35.449 ************************************ 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local 
nbd_all 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=116738 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:35.449 11:24:09 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 116738 /var/tmp/spdk-nbd.sock 00:12:35.450 11:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 116738 ']' 00:12:35.450 11:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:35.450 11:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.450 11:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:35.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:35.450 11:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.450 11:24:09 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:35.450 [2024-07-13 11:24:09.879174] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:12:35.450 [2024-07-13 11:24:09.879675] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.450 [2024-07-13 11:24:10.050789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.708 [2024-07-13 11:24:10.241934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.967 [2024-07-13 11:24:10.608933] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:35.967 [2024-07-13 11:24:10.609340] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:35.967 [2024-07-13 11:24:10.616892] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:35.967 [2024-07-13 11:24:10.617099] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:35.967 [2024-07-13 11:24:10.624911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:35.967 [2024-07-13 11:24:10.625128] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:35.967 [2024-07-13 11:24:10.625255] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:36.226 [2024-07-13 11:24:10.816920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:36.226 [2024-07-13 11:24:10.817311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.226 [2024-07-13 11:24:10.817409] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:36.226 [2024-07-13 11:24:10.817690] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.226 [2024-07-13 11:24:10.820146] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.226 [2024-07-13 11:24:10.820359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@24 -- # local i 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:36.485 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.751 1+0 records in 00:12:36.751 1+0 records out 00:12:36.751 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622449 s, 6.6 MB/s 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:36.751 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:37.016 11:24:11 
blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.016 1+0 records in 00:12:37.016 1+0 records out 00:12:37.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263633 s, 15.5 MB/s 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:37.016 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.274 1+0 records in 00:12:37.274 1+0 records out 00:12:37.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366682 s, 11.2 MB/s 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:37.274 11:24:11 
blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:37.274 11:24:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.533 1+0 records in 00:12:37.533 1+0 records out 00:12:37.533 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281315 s, 14.6 MB/s 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:37.533 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w 
nbd4 /proc/partitions 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:37.792 1+0 records in 00:12:37.792 1+0 records out 00:12:37.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407166 s, 10.1 MB/s 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:37.792 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.050 1+0 records in 00:12:38.050 1+0 records out 00:12:38.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000830592 s, 4.9 MB/s 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 
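For readers following the trace: the block repeated above for each of the sixteen bdevs (Malloc0, Malloc1p0, ..., AIO0) is the same attach-and-verify cycle every time — rpc.py nbd_start_disk exposes the bdev on the next /dev/nbdN, the waitfornbd helper polls /proc/partitions until the kernel registers it, and a single 4 KiB O_DIRECT dd read proves the device actually serves I/O. Further down, nbd_stop_disk plus waitfornbd_exit tear each device down again and nbd_get_disks is expected to come back empty. The following is a condensed, illustrative bash sketch of that flow, not the verbatim SPDK helpers: the socket path, rpc.py path, and scratch-file path are copied from the trace, while wait_for_nbd and wait_for_nbd_exit are simplified stand-ins for the waitfornbd/waitfornbd_exit functions defined in common/autotest_common.sh and bdev/nbd_common.sh.

  #!/usr/bin/env bash
  # Sketch of the attach -> verify -> detach cycle exercised by the trace.
  SOCK=/var/tmp/spdk-nbd.sock                                  # RPC socket used throughout
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SCRATCH=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest       # scratch file for the dd probe

  wait_for_nbd() {                      # simplified stand-in for waitfornbd
      local name=$1 i
      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$name" /proc/partitions && break         # device visible to the kernel?
          sleep 0.1
      done
      # one 4 KiB direct read shows the device really serves I/O
      dd if=/dev/$name of=$SCRATCH bs=4096 count=1 iflag=direct || return 1
      local size; size=$(stat -c %s $SCRATCH)
      rm -f $SCRATCH
      [ "$size" != 0 ]
  }

  wait_for_nbd_exit() {                 # simplified stand-in for waitfornbd_exit
      local name=$1 i
      for (( i = 1; i <= 20; i++ )); do
          grep -q -w "$name" /proc/partitions || break         # gone from /proc/partitions?
          sleep 0.1
      done
  }

  # attach, exercise, and detach one bdev (the trace does this for all sixteen)
  $RPC -s $SOCK nbd_start_disk Malloc0 /dev/nbd0
  wait_for_nbd nbd0
  $RPC -s $SOCK nbd_stop_disk /dev/nbd0
  wait_for_nbd_exit nbd0

  # after every device is stopped, nbd_get_disks should list no /dev/nbd entries
  count=$($RPC -s $SOCK nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]

The real helpers retry the dd probe in a second 20-iteration loop rather than issuing it once, as the repeated "(( i <= 20 ))" lines in the trace show; the sketch collapses that into a single read for brevity.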
00:12:38.050 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.051 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.051 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.308 1+0 records in 00:12:38.308 1+0 records out 00:12:38.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419898 s, 9.8 MB/s 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:38.308 11:24:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:38.309 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.309 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.309 11:24:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@871 -- # break 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.567 1+0 records in 00:12:38.567 1+0 records out 00:12:38.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354413 s, 11.6 MB/s 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.567 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:38.825 1+0 records in 00:12:38.825 1+0 records out 00:12:38.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490906 s, 8.3 MB/s 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( 
i++ )) 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:38.825 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:39.083 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.083 1+0 records in 00:12:39.083 1+0 records out 00:12:39.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581363 s, 7.0 MB/s 00:12:39.084 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.084 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:39.084 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.084 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:39.084 11:24:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:39.084 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.084 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.084 11:24:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:39.342 11:24:14 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.342 1+0 records in 00:12:39.342 1+0 records out 00:12:39.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501891 s, 8.2 MB/s 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:39.342 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:39.343 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.343 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.343 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.602 1+0 records in 00:12:39.602 1+0 records out 00:12:39.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608988 s, 6.7 MB/s 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.602 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.860 1+0 records in 00:12:39.860 1+0 records out 00:12:39.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693561 s, 5.9 MB/s 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:39.860 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 
)) 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.129 1+0 records in 00:12:40.129 1+0 records out 00:12:40.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455791 s, 9.0 MB/s 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.129 11:24:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.392 1+0 records in 00:12:40.392 1+0 records out 00:12:40.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737852 s, 5.6 MB/s 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.392 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.392 11:24:15 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.958 1+0 records in 00:12:40.958 1+0 records out 00:12:40.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00215885 s, 1.9 MB/s 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd0", 00:12:40.958 "bdev_name": "Malloc0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd1", 00:12:40.958 "bdev_name": "Malloc1p0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd2", 00:12:40.958 "bdev_name": "Malloc1p1" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd3", 00:12:40.958 "bdev_name": "Malloc2p0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd4", 00:12:40.958 "bdev_name": "Malloc2p1" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd5", 00:12:40.958 "bdev_name": "Malloc2p2" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd6", 00:12:40.958 "bdev_name": "Malloc2p3" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd7", 00:12:40.958 "bdev_name": "Malloc2p4" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd8", 00:12:40.958 "bdev_name": "Malloc2p5" 00:12:40.958 
}, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd9", 00:12:40.958 "bdev_name": "Malloc2p6" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd10", 00:12:40.958 "bdev_name": "Malloc2p7" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd11", 00:12:40.958 "bdev_name": "TestPT" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd12", 00:12:40.958 "bdev_name": "raid0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd13", 00:12:40.958 "bdev_name": "concat0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd14", 00:12:40.958 "bdev_name": "raid1" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd15", 00:12:40.958 "bdev_name": "AIO0" 00:12:40.958 } 00:12:40.958 ]' 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd0", 00:12:40.958 "bdev_name": "Malloc0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd1", 00:12:40.958 "bdev_name": "Malloc1p0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd2", 00:12:40.958 "bdev_name": "Malloc1p1" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd3", 00:12:40.958 "bdev_name": "Malloc2p0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd4", 00:12:40.958 "bdev_name": "Malloc2p1" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd5", 00:12:40.958 "bdev_name": "Malloc2p2" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd6", 00:12:40.958 "bdev_name": "Malloc2p3" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd7", 00:12:40.958 "bdev_name": "Malloc2p4" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd8", 00:12:40.958 "bdev_name": "Malloc2p5" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd9", 00:12:40.958 "bdev_name": "Malloc2p6" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd10", 00:12:40.958 "bdev_name": "Malloc2p7" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd11", 00:12:40.958 "bdev_name": "TestPT" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd12", 00:12:40.958 "bdev_name": "raid0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd13", 00:12:40.958 "bdev_name": "concat0" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd14", 00:12:40.958 "bdev_name": "raid1" 00:12:40.958 }, 00:12:40.958 { 00:12:40.958 "nbd_device": "/dev/nbd15", 00:12:40.958 "bdev_name": "AIO0" 00:12:40.958 } 00:12:40.958 ]' 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:40.958 11:24:15 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:40.958 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:41.216 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:41.216 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:41.216 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:41.216 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.216 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.216 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:41.216 11:24:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:41.475 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:41.734 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:41.734 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.734 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:41.734 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.734 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.734 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.734 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 
)) 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.993 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.252 11:24:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.511 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:42.769 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:42.769 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:42.769 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:42.769 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.769 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.769 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:42.769 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:43.027 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:43.027 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.027 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:43.027 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:43.027 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.027 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.028 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.285 11:24:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.543 11:24:18 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.543 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:43.801 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:43.801 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:43.801 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:43.801 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.801 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.801 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:43.801 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:44.059 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:44.059 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.059 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:44.059 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.059 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.059 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.059 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.317 11:24:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:44.317 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:44.317 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd10 00:12:44.317 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:44.317 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.317 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.317 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:44.575 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:44.575 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:44.575 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.575 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:44.575 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.575 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.575 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.575 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.832 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:45.089 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:45.346 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:45.346 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:45.346 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.346 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.347 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:45.347 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.347 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.347 11:24:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.347 11:24:19 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:45.347 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:45.347 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:45.347 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:45.347 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.347 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.347 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:45.347 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:45.605 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:45.605 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.605 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:45.605 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.605 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.605 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.605 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.864 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.123 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:46.382 11:24:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:46.642 /dev/nbd0 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.642 1+0 records in 00:12:46.642 1+0 records out 00:12:46.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634245 s, 6.5 MB/s 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:46.642 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:46.900 /dev/nbd1 00:12:46.900 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:46.900 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:46.900 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:46.900 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.901 1+0 
records in 00:12:46.901 1+0 records out 00:12:46.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403923 s, 10.1 MB/s 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:46.901 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:47.158 /dev/nbd10 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:47.158 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.158 1+0 records in 00:12:47.158 1+0 records out 00:12:47.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479434 s, 8.5 MB/s 00:12:47.159 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.159 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:47.159 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.159 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:47.159 11:24:21 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:47.159 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.159 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.159 11:24:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:47.417 /dev/nbd11 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd11 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.417 1+0 records in 00:12:47.417 1+0 records out 00:12:47.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351855 s, 11.6 MB/s 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.417 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:47.676 /dev/nbd12 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.676 1+0 records in 00:12:47.676 1+0 records out 00:12:47.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084712 s, 4.8 MB/s 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
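The bring-up of each /dev/nbdN above runs through the waitfornbd helper traced from common/autotest_common.sh: poll /proc/partitions until the device name appears, then prove the device actually serves data with a single 4 KiB O_DIRECT read. The sketch below is reconstructed from the traced commands only, so the retry sleep in the second loop, the failure return, and the /tmp scratch path (this run uses spdk/test/bdev/nbdtest) are assumptions rather than the real helper's code.

waitfornbd() {
    local nbd_name=$1
    local i size

    # Wait for the kernel to list the device in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done

    # Confirm it serves I/O: one 4 KiB O_DIRECT read must produce a
    # non-empty scratch file (dd, stat -c %s, rm -f, size check in the trace).
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}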
00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.676 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:47.936 /dev/nbd13 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:47.936 1+0 records in 00:12:47.936 1+0 records out 00:12:47.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456941 s, 9.0 MB/s 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:47.936 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:48.195 /dev/nbd14 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.195 1+0 records in 00:12:48.195 1+0 records out 00:12:48.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608385 s, 6.7 MB/s 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.195 11:24:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:48.453 /dev/nbd15 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.453 1+0 records in 00:12:48.453 1+0 records out 00:12:48.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464844 s, 8.8 MB/s 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.453 11:24:23 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.453 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:48.711 /dev/nbd2 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.711 1+0 records in 00:12:48.711 1+0 records out 00:12:48.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452294 s, 9.1 MB/s 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.711 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:48.969 /dev/nbd3 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 
00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:48.969 1+0 records in 00:12:48.969 1+0 records out 00:12:48.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544545 s, 7.5 MB/s 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:48.969 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:49.228 /dev/nbd4 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.228 1+0 records in 00:12:49.228 1+0 records out 00:12:49.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656681 s, 6.2 MB/s 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
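Stepping back from the per-device noise, the attach phase traced here is a plain loop in bdev/nbd_common.sh: pair bdev i with nbd device i, issue one nbd_start_disk RPC over the dedicated /var/tmp/spdk-nbd.sock socket, and block in waitfornbd until the kernel device is usable. A sketch of that loop, inferred from the trace (array handling and error checks in the real script may differ; $rootdir stands in for /home/vagrant/spdk_repo/spdk):

nbd_start_disks() {
    local rpc_server=$1
    local bdev_list=($2)
    local nbd_list=($3)
    local i

    for ((i = 0; i < ${#nbd_list[@]}; i++)); do
        # One JSON-RPC call per pair attaches bdev i to nbd device i.
        "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_start_disk \
            "${bdev_list[$i]}" "${nbd_list[$i]}"
        # Do not move on until the block device is present and readable.
        waitfornbd "$(basename "${nbd_list[$i]}")"
    done
}

In this run it is invoked (nbd_common.sh@94) with the full 16-entry bdev and nbd lists, which is why the loop condition in the trace reads (( i < 16 )).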
00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.228 11:24:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:49.486 /dev/nbd5 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.486 1+0 records in 00:12:49.486 1+0 records out 00:12:49.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052063 s, 7.9 MB/s 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.486 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:49.745 /dev/nbd6 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i 
<= 20 )) 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.745 1+0 records in 00:12:49.745 1+0 records out 00:12:49.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494198 s, 8.3 MB/s 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:49.745 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:50.004 /dev/nbd7 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.004 1+0 records in 00:12:50.004 1+0 records out 00:12:50.004 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551031 s, 7.4 MB/s 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.004 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:50.263 /dev/nbd8 
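Each of those attach steps is just an rpc.py call against the nbd socket, and on success the RPC prints the nbd path it bound (the lone /dev/nbd8 line right after the call above). Run by hand, with the same paths as this job, the attach and its matching detach would look like this (nbd_stop_disk is the teardown RPC traced at the end of the test):

# Attach the bdev named 'raid1' to /dev/nbd8 through the SPDK app's RPC socket.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8
# Detach it again once the test is done with it.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8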
00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.263 1+0 records in 00:12:50.263 1+0 records out 00:12:50.263 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845277 s, 4.8 MB/s 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.263 11:24:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:50.522 /dev/nbd9 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.522 1+0 records in 00:12:50.522 1+0 records out 00:12:50.522 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000941825 s, 4.3 MB/s 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.522 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd0", 00:12:50.781 "bdev_name": "Malloc0" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd1", 00:12:50.781 "bdev_name": "Malloc1p0" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd10", 00:12:50.781 "bdev_name": "Malloc1p1" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd11", 00:12:50.781 "bdev_name": "Malloc2p0" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd12", 00:12:50.781 "bdev_name": "Malloc2p1" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd13", 00:12:50.781 "bdev_name": "Malloc2p2" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd14", 00:12:50.781 "bdev_name": "Malloc2p3" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd15", 00:12:50.781 "bdev_name": "Malloc2p4" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd2", 00:12:50.781 "bdev_name": "Malloc2p5" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd3", 00:12:50.781 "bdev_name": "Malloc2p6" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd4", 00:12:50.781 "bdev_name": "Malloc2p7" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd5", 00:12:50.781 "bdev_name": "TestPT" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd6", 00:12:50.781 "bdev_name": "raid0" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd7", 00:12:50.781 "bdev_name": "concat0" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd8", 00:12:50.781 "bdev_name": "raid1" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd9", 00:12:50.781 "bdev_name": "AIO0" 00:12:50.781 } 00:12:50.781 ]' 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd0", 00:12:50.781 "bdev_name": "Malloc0" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd1", 00:12:50.781 "bdev_name": "Malloc1p0" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd10", 00:12:50.781 "bdev_name": "Malloc1p1" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd11", 00:12:50.781 "bdev_name": "Malloc2p0" 00:12:50.781 }, 
00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd12", 00:12:50.781 "bdev_name": "Malloc2p1" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd13", 00:12:50.781 "bdev_name": "Malloc2p2" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd14", 00:12:50.781 "bdev_name": "Malloc2p3" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd15", 00:12:50.781 "bdev_name": "Malloc2p4" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd2", 00:12:50.781 "bdev_name": "Malloc2p5" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd3", 00:12:50.781 "bdev_name": "Malloc2p6" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd4", 00:12:50.781 "bdev_name": "Malloc2p7" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd5", 00:12:50.781 "bdev_name": "TestPT" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd6", 00:12:50.781 "bdev_name": "raid0" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd7", 00:12:50.781 "bdev_name": "concat0" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd8", 00:12:50.781 "bdev_name": "raid1" 00:12:50.781 }, 00:12:50.781 { 00:12:50.781 "nbd_device": "/dev/nbd9", 00:12:50.781 "bdev_name": "AIO0" 00:12:50.781 } 00:12:50.781 ]' 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:50.781 /dev/nbd1 00:12:50.781 /dev/nbd10 00:12:50.781 /dev/nbd11 00:12:50.781 /dev/nbd12 00:12:50.781 /dev/nbd13 00:12:50.781 /dev/nbd14 00:12:50.781 /dev/nbd15 00:12:50.781 /dev/nbd2 00:12:50.781 /dev/nbd3 00:12:50.781 /dev/nbd4 00:12:50.781 /dev/nbd5 00:12:50.781 /dev/nbd6 00:12:50.781 /dev/nbd7 00:12:50.781 /dev/nbd8 00:12:50.781 /dev/nbd9' 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:50.781 /dev/nbd1 00:12:50.781 /dev/nbd10 00:12:50.781 /dev/nbd11 00:12:50.781 /dev/nbd12 00:12:50.781 /dev/nbd13 00:12:50.781 /dev/nbd14 00:12:50.781 /dev/nbd15 00:12:50.781 /dev/nbd2 00:12:50.781 /dev/nbd3 00:12:50.781 /dev/nbd4 00:12:50.781 /dev/nbd5 00:12:50.781 /dev/nbd6 00:12:50.781 /dev/nbd7 00:12:50.781 /dev/nbd8 00:12:50.781 /dev/nbd9' 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write 
= write ']' 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:50.781 256+0 records in 00:12:50.781 256+0 records out 00:12:50.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496058 s, 211 MB/s 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:50.781 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:51.040 256+0 records in 00:12:51.040 256+0 records out 00:12:51.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116962 s, 9.0 MB/s 00:12:51.040 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.040 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:51.040 256+0 records in 00:12:51.040 256+0 records out 00:12:51.040 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138251 s, 7.6 MB/s 00:12:51.040 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.040 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:51.308 256+0 records in 00:12:51.308 256+0 records out 00:12:51.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126081 s, 8.3 MB/s 00:12:51.308 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.308 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:51.308 256+0 records in 00:12:51.308 256+0 records out 00:12:51.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132404 s, 7.9 MB/s 00:12:51.308 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.308 11:24:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:51.581 256+0 records in 00:12:51.581 256+0 records out 00:12:51.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134506 s, 7.8 MB/s 00:12:51.581 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.581 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:51.581 256+0 records in 00:12:51.581 256+0 records out 00:12:51.581 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128713 s, 8.1 MB/s 00:12:51.581 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.581 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:51.840 256+0 records in 00:12:51.840 256+0 records out 00:12:51.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143223 s, 7.3 MB/s 00:12:51.840 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.840 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:51.840 256+0 
records in 00:12:51.840 256+0 records out 00:12:51.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136624 s, 7.7 MB/s 00:12:51.840 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:51.840 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:52.098 256+0 records in 00:12:52.098 256+0 records out 00:12:52.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124664 s, 8.4 MB/s 00:12:52.098 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.098 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:52.098 256+0 records in 00:12:52.098 256+0 records out 00:12:52.098 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133685 s, 7.8 MB/s 00:12:52.098 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.098 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:52.357 256+0 records in 00:12:52.357 256+0 records out 00:12:52.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12222 s, 8.6 MB/s 00:12:52.357 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.357 11:24:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:52.357 256+0 records in 00:12:52.357 256+0 records out 00:12:52.357 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123607 s, 8.5 MB/s 00:12:52.357 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.357 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:52.616 256+0 records in 00:12:52.616 256+0 records out 00:12:52.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137904 s, 7.6 MB/s 00:12:52.616 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.616 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:52.616 256+0 records in 00:12:52.616 256+0 records out 00:12:52.616 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129043 s, 8.1 MB/s 00:12:52.616 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.616 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:52.875 256+0 records in 00:12:52.875 256+0 records out 00:12:52.875 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145362 s, 7.2 MB/s 00:12:52.875 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:52.875 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:53.134 256+0 records in 00:12:53.134 256+0 records out 00:12:53.134 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196827 s, 5.3 MB/s 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # 
nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.134 11:24:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:53.391 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:53.392 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:53.392 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:53.392 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.392 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.392 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:53.392 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.392 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.392 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.392 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:53.650 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:53.908 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:53.908 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:53.908 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:53.908 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:53.908 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:53.908 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:53.908 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:54.167 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:54.167 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.167 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:54.167 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.167 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.167 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.167 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.426 11:24:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:54.685 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:54.685 
11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:54.685 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:54.685 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.685 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.685 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:54.685 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.685 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.685 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.685 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:54.943 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.200 11:24:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:55.457 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:55.742 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:55.742 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:55.742 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:55.742 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:55.742 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:55.742 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:55.742 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:56.000 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:56.000 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.000 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:56.000 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.000 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.001 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.001 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.259 11:24:30 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:56.259 11:24:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:56.517 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:56.517 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.517 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:56.517 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.517 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.517 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.517 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:56.775 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:56.775 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:56.775 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:56.776 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:57.034 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:57.034 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd6 00:12:57.034 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:57.034 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.034 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.034 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:57.034 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:57.292 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:57.292 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.292 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:57.292 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.292 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.292 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.292 11:24:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.551 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.810 11:24:32 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:58.068 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:58.068 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:58.068 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:58.068 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.068 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.068 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:58.068 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:58.326 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:58.326 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.326 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:58.326 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:58.326 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.326 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:58.326 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.326 11:24:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:58.326 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:58.326 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:58.326 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:58.583 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:58.841 malloc_lvol_verify 00:12:58.841 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:58.841 c45192d8-e919-441c-b628-7249501fc015 00:12:58.841 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:59.099 b793979e-ce7c-4dbb-a2bd-9806d32212c4 00:12:59.099 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:59.357 /dev/nbd0 00:12:59.357 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:59.357 mke2fs 1.45.5 (07-Jan-2020) 00:12:59.357 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:59.357 00:12:59.357 Allocating group tables: 0/1 done 00:12:59.357 Writing inode tables: 0/1 done 00:12:59.357 00:12:59.357 Filesystem too small for a journal 00:12:59.357 Writing superblocks and filesystem accounting information: 0/1 done 00:12:59.357 00:12:59.357 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:59.357 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:59.357 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:59.357 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:12:59.357 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.357 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:59.357 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.357 11:24:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 116738 
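The grep/sleep sequence repeated throughout the trace above is nbd_common.sh's detach wait: after nbd_stop_disk is sent over the RPC socket, the script polls /proc/partitions until the nbd device name disappears, retrying up to 20 times with a 0.1 s sleep between attempts. Below is a minimal standalone sketch of that loop, assuming bash; the iteration limit, the grep -q -w test and the sleep interval mirror the trace, while the timeout message and the usage comment are illustrative additions.

#!/usr/bin/env bash
# Sketch of the waitfornbd_exit polling pattern seen in the trace above.
# Returns 0 once <nbd_name> no longer shows up in /proc/partitions,
# i.e. the kernel has finished detaching the nbd device.
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the whole word, so "nbd1" does not also match "nbd10"
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            return 0        # device gone, detach complete
        fi
        sleep 0.1           # still listed, retry shortly
    done
    echo "timed out waiting for $nbd_name to detach" >&2
    return 1
}

# Usage, matching the stop sequence in the trace (rpc.py path assumed):
#   scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
#   waitfornbd_exit "$(basename /dev/nbd0)"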
00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 116738 ']' 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 116738 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 116738 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 116738' 00:12:59.614 killing process with pid 116738 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 116738 00:12:59.614 11:24:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 116738 00:13:01.516 11:24:36 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:13:01.516 00:13:01.516 real 0m26.348s 00:13:01.516 user 0m34.532s 00:13:01.516 sys 0m9.192s 00:13:01.516 11:24:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.516 11:24:36 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:01.516 ************************************ 00:13:01.516 END TEST bdev_nbd 00:13:01.516 ************************************ 00:13:01.516 11:24:36 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:01.516 11:24:36 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:13:01.516 11:24:36 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:13:01.516 11:24:36 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:13:01.516 11:24:36 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:13:01.516 11:24:36 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:01.516 11:24:36 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.516 11:24:36 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:01.516 ************************************ 00:13:01.516 START TEST bdev_fio 00:13:01.516 ************************************ 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:01.516 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:01.516 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:13:01.775 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo 
filename=Malloc2p2 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.776 11:24:36 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:01.776 ************************************ 00:13:01.776 START TEST bdev_fio_rw_verify 00:13:01.776 ************************************ 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:01.776 11:24:36 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:01.776 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:01.776 fio-3.35 00:13:01.776 Starting 16 threads 00:13:13.984 00:13:13.984 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=118013: Sat Jul 13 11:24:48 2024 00:13:13.984 read: IOPS=62.3k, BW=243MiB/s (255MB/s)(2434MiB/10001msec) 00:13:13.984 slat (usec): min=2, max=40006, avg=49.14, stdev=491.39 00:13:13.984 clat (usec): min=10, max=32486, avg=375.36, stdev=1359.97 00:13:13.984 lat (usec): min=27, max=40308, avg=424.50, stdev=1444.92 00:13:13.984 clat percentiles (usec): 00:13:13.984 | 50.000th=[ 239], 99.000th=[ 1188], 99.900th=[16450], 99.990th=[24249], 00:13:13.984 | 99.999th=[32375] 00:13:13.984 write: IOPS=97.7k, BW=382MiB/s (400MB/s)(3777MiB/9893msec); 0 zone resets 00:13:13.984 slat (usec): min=8, max=50146, avg=80.28, stdev=696.52 00:13:13.984 clat (usec): min=12, max=44373, avg=486.65, stdev=1682.95 00:13:13.984 lat (usec): min=44, max=50539, avg=566.93, stdev=1819.91 00:13:13.984 clat percentiles (usec): 00:13:13.984 | 50.000th=[ 289], 99.000th=[10421], 99.900th=[20579], 99.990th=[31851], 00:13:13.984 | 99.999th=[44303] 00:13:13.984 bw ( KiB/s): min=253292, max=603064, per=98.63%, avg=385627.89, stdev=6175.64, samples=304 00:13:13.984 iops : min=63323, max=150766, avg=96406.84, stdev=1543.91, samples=304 
00:13:13.984 lat (usec) : 20=0.01%, 50=0.23%, 100=4.27%, 250=40.58%, 500=48.99% 00:13:13.984 lat (usec) : 750=3.77%, 1000=0.52% 00:13:13.984 lat (msec) : 2=0.39%, 4=0.08%, 10=0.22%, 20=0.85%, 50=0.10% 00:13:13.984 cpu : usr=57.61%, sys=2.37%, ctx=202819, majf=0, minf=67086 00:13:13.984 IO depths : 1=11.4%, 2=24.0%, 4=51.6%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:13.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.984 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:13.984 issued rwts: total=623131,966992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:13.984 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:13.984 00:13:13.984 Run status group 0 (all jobs): 00:13:13.984 READ: bw=243MiB/s (255MB/s), 243MiB/s-243MiB/s (255MB/s-255MB/s), io=2434MiB (2552MB), run=10001-10001msec 00:13:13.984 WRITE: bw=382MiB/s (400MB/s), 382MiB/s-382MiB/s (400MB/s-400MB/s), io=3777MiB (3961MB), run=9893-9893msec 00:13:15.891 ----------------------------------------------------- 00:13:15.891 Suppressions used: 00:13:15.891 count bytes template 00:13:15.891 16 140 /usr/src/fio/parse.c 00:13:15.891 10958 1051968 /usr/src/fio/iolog.c 00:13:15.891 2 596 libcrypto.so 00:13:15.891 ----------------------------------------------------- 00:13:15.891 00:13:15.891 00:13:15.891 real 0m14.034s 00:13:15.891 user 1m37.247s 00:13:15.891 sys 0m4.856s 00:13:15.891 11:24:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.891 11:24:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:15.891 ************************************ 00:13:15.891 END TEST bdev_fio_rw_verify 00:13:15.891 ************************************ 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:15.891 11:24:50 
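The section above shows how the fio test suite is assembled: fio_config_gen writes a verify workload into test/bdev/bdev.fio, one [job_<bdev>] section with a filename= line is appended per bdev, and fio is then launched with the spdk_bdev ioengine while LD_PRELOAD pulls in libasan (located via ldd) ahead of the SPDK fio plugin. The sketch below reconstructs those steps under the assumption that the echoed job sections are redirected into the job file; the flag values, plugin path and ASan library path are taken verbatim from the trace, while the shortened bdev list and the redirection are illustrative.

#!/usr/bin/env bash
# Sketch of the per-bdev fio job file and invocation traced above.
fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

# Append one job section per bdev under test (subset of the names in the trace).
for b in Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 TestPT raid0 concat0 raid1 AIO0; do
    {
        echo "[job_$b]"
        echo "filename=$b"
    } >> "$fio_cfg"
done

# Run fio through the SPDK bdev engine. LD_PRELOAD loads ASan first, then the
# spdk_bdev plugin, as assembled in the trace above.
LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    "$fio_cfg" --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output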
blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:15.891 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:15.892 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "02ba153f-f20a-4260-a28b-759c401c3441"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "02ba153f-f20a-4260-a28b-759c401c3441",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "08bcc0cb-ad5b-5407-99bd-07e010240fe9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "08bcc0cb-ad5b-5407-99bd-07e010240fe9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2fd3dbdf-a937-55ff-a1d7-c9dd4e02f59c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2fd3dbdf-a937-55ff-a1d7-c9dd4e02f59c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b20cc092-197b-5b81-99b4-84fbd5c70353"' ' ],' ' "product_name": 
"Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b20cc092-197b-5b81-99b4-84fbd5c70353",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7044642c-4a34-541a-96a0-cecadfd45c34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7044642c-4a34-541a-96a0-cecadfd45c34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "d04e9be6-dcad-595d-88cd-edd049854f77"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d04e9be6-dcad-595d-88cd-edd049854f77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "9d5b36b8-ce69-5974-bb89-e5274a530ba7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9d5b36b8-ce69-5974-bb89-e5274a530ba7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e0e5dd9a-ff68-5a3a-b5f3-e1348be685c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e0e5dd9a-ff68-5a3a-b5f3-e1348be685c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e264b714-16d4-5707-b59f-c7ec963fc613"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e264b714-16d4-5707-b59f-c7ec963fc613",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0d58cf2f-57a8-5f68-ac03-29d6a960cc6d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0d58cf2f-57a8-5f68-ac03-29d6a960cc6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "46d305f3-03ea-5a17-9d70-04932d1b7a11"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "46d305f3-03ea-5a17-9d70-04932d1b7a11",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "38d83a35-7eeb-5e4a-a379-3b7ce11354d1"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "38d83a35-7eeb-5e4a-a379-3b7ce11354d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5f84f301-6bb2-4171-82da-8344fd3f8edb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5f84f301-6bb2-4171-82da-8344fd3f8edb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5f84f301-6bb2-4171-82da-8344fd3f8edb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c93751ee-c454-47e7-99ab-55c8e770ab6e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "22578e49-5a60-4732-b1e4-25b1c5e3cc7f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' 
'}' '{' ' "name": "concat0",' ' "aliases": [' ' "3830764e-5ed0-41b7-af79-26fed7cfb289"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3830764e-5ed0-41b7-af79-26fed7cfb289",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3830764e-5ed0-41b7-af79-26fed7cfb289",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "0c32c6fa-c81e-47cf-8bb4-daa13a9b8fe6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "064d1ce9-9b3e-4f14-8a66-8f4b3ee7ec81",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b85c66e7-7491-44ca-ba7d-1a26c749063f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b85c66e7-7491-44ca-ba7d-1a26c749063f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b85c66e7-7491-44ca-ba7d-1a26c749063f",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c8502de6-6ed0-4a41-8b75-ba16131f4788",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "f15ed462-f8da-46ba-8b87-3d71f7dd8b01",' ' 
"is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "bb1c9cee-c65d-48f4-a88a-a7ff35549595"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "bb1c9cee-c65d-48f4-a88a-a7ff35549595",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:15.892 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:13:15.892 Malloc1p0 00:13:15.892 Malloc1p1 00:13:15.892 Malloc2p0 00:13:15.892 Malloc2p1 00:13:15.892 Malloc2p2 00:13:15.892 Malloc2p3 00:13:15.892 Malloc2p4 00:13:15.892 Malloc2p5 00:13:15.892 Malloc2p6 00:13:15.892 Malloc2p7 00:13:15.892 TestPT 00:13:15.892 raid0 00:13:15.892 concat0 ]] 00:13:15.892 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "02ba153f-f20a-4260-a28b-759c401c3441"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "02ba153f-f20a-4260-a28b-759c401c3441",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "08bcc0cb-ad5b-5407-99bd-07e010240fe9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "08bcc0cb-ad5b-5407-99bd-07e010240fe9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2fd3dbdf-a937-55ff-a1d7-c9dd4e02f59c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2fd3dbdf-a937-55ff-a1d7-c9dd4e02f59c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "b20cc092-197b-5b81-99b4-84fbd5c70353"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b20cc092-197b-5b81-99b4-84fbd5c70353",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "7044642c-4a34-541a-96a0-cecadfd45c34"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7044642c-4a34-541a-96a0-cecadfd45c34",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "d04e9be6-dcad-595d-88cd-edd049854f77"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d04e9be6-dcad-595d-88cd-edd049854f77",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "9d5b36b8-ce69-5974-bb89-e5274a530ba7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9d5b36b8-ce69-5974-bb89-e5274a530ba7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "e0e5dd9a-ff68-5a3a-b5f3-e1348be685c9"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e0e5dd9a-ff68-5a3a-b5f3-e1348be685c9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "e264b714-16d4-5707-b59f-c7ec963fc613"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e264b714-16d4-5707-b59f-c7ec963fc613",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": 
"Malloc2p6",' ' "aliases": [' ' "0d58cf2f-57a8-5f68-ac03-29d6a960cc6d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0d58cf2f-57a8-5f68-ac03-29d6a960cc6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "46d305f3-03ea-5a17-9d70-04932d1b7a11"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "46d305f3-03ea-5a17-9d70-04932d1b7a11",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "38d83a35-7eeb-5e4a-a379-3b7ce11354d1"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "38d83a35-7eeb-5e4a-a379-3b7ce11354d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "5f84f301-6bb2-4171-82da-8344fd3f8edb"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "5f84f301-6bb2-4171-82da-8344fd3f8edb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "5f84f301-6bb2-4171-82da-8344fd3f8edb",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c93751ee-c454-47e7-99ab-55c8e770ab6e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "22578e49-5a60-4732-b1e4-25b1c5e3cc7f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "3830764e-5ed0-41b7-af79-26fed7cfb289"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3830764e-5ed0-41b7-af79-26fed7cfb289",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "3830764e-5ed0-41b7-af79-26fed7cfb289",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "0c32c6fa-c81e-47cf-8bb4-daa13a9b8fe6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "064d1ce9-9b3e-4f14-8a66-8f4b3ee7ec81",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b85c66e7-7491-44ca-ba7d-1a26c749063f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b85c66e7-7491-44ca-ba7d-1a26c749063f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": 
false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b85c66e7-7491-44ca-ba7d-1a26c749063f",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c8502de6-6ed0-4a41-8b75-ba16131f4788",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "f15ed462-f8da-46ba-8b87-3d71f7dd8b01",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "bb1c9cee-c65d-48f4-a88a-a7ff35549595"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "bb1c9cee-c65d-48f4-a88a-a7ff35549595",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:13:15.893 11:24:50 
blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:13:15.893 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # 
for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.894 11:24:50 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:15.894 ************************************ 00:13:15.894 START TEST bdev_fio_trim 00:13:15.894 ************************************ 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:15.894 11:24:50 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:16.153 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:16.153 fio-3.35 00:13:16.153 Starting 14 threads 00:13:28.471 00:13:28.471 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=118246: Sat Jul 13 11:25:02 2024 00:13:28.471 write: IOPS=117k, BW=456MiB/s (478MB/s)(4562MiB/10003msec); 0 zone resets 00:13:28.471 slat (usec): min=2, max=32044, avg=44.12, stdev=427.40 00:13:28.471 clat (usec): min=17, max=34148, avg=294.96, stdev=1123.85 00:13:28.471 lat (usec): min=23, max=34168, avg=339.08, stdev=1201.99 00:13:28.471 clat percentiles (usec): 00:13:28.471 | 50.000th=[ 202], 99.000th=[ 424], 99.900th=[16319], 99.990th=[20579], 00:13:28.471 | 99.999th=[28443] 00:13:28.471 bw ( KiB/s): min=324824, max=675770, per=100.00%, avg=468013.58, stdev=8902.75, 
samples=266 00:13:28.471 iops : min=81206, max=168942, avg=117003.21, stdev=2225.68, samples=266 00:13:28.471 trim: IOPS=117k, BW=456MiB/s (478MB/s)(4562MiB/10003msec); 0 zone resets 00:13:28.471 slat (usec): min=4, max=28028, avg=29.50, stdev=348.93 00:13:28.471 clat (usec): min=3, max=34133, avg=331.98, stdev=1187.75 00:13:28.471 lat (usec): min=10, max=34147, avg=361.49, stdev=1237.75 00:13:28.471 clat percentiles (usec): 00:13:28.471 | 50.000th=[ 231], 99.000th=[ 469], 99.900th=[16319], 99.990th=[21890], 00:13:28.471 | 99.999th=[30278] 00:13:28.471 bw ( KiB/s): min=324832, max=675770, per=100.00%, avg=468013.58, stdev=8902.80, samples=266 00:13:28.471 iops : min=81208, max=168942, avg=117003.21, stdev=2225.69, samples=266 00:13:28.471 lat (usec) : 4=0.01%, 10=0.01%, 20=0.03%, 50=0.52%, 100=4.83% 00:13:28.471 lat (usec) : 250=58.43%, 500=35.38%, 750=0.07%, 1000=0.03% 00:13:28.471 lat (msec) : 2=0.02%, 4=0.01%, 10=0.05%, 20=0.60%, 50=0.03% 00:13:28.471 cpu : usr=68.94%, sys=0.43%, ctx=170547, majf=0, minf=898 00:13:28.471 IO depths : 1=12.4%, 2=24.8%, 4=50.2%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:28.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.471 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:28.471 issued rwts: total=0,1167790,1167794,0 short=0,0,0,0 dropped=0,0,0,0 00:13:28.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:28.471 00:13:28.471 Run status group 0 (all jobs): 00:13:28.471 WRITE: bw=456MiB/s (478MB/s), 456MiB/s-456MiB/s (478MB/s-478MB/s), io=4562MiB (4783MB), run=10003-10003msec 00:13:28.471 TRIM: bw=456MiB/s (478MB/s), 456MiB/s-456MiB/s (478MB/s-478MB/s), io=4562MiB (4783MB), run=10003-10003msec 00:13:29.846 ----------------------------------------------------- 00:13:29.846 Suppressions used: 00:13:29.846 count bytes template 00:13:29.846 14 129 /usr/src/fio/parse.c 00:13:29.846 2 596 libcrypto.so 00:13:29.846 ----------------------------------------------------- 00:13:29.846 00:13:29.846 00:13:29.846 real 0m13.701s 00:13:29.846 user 1m41.478s 00:13:29.846 sys 0m1.496s 00:13:29.846 11:25:04 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.846 11:25:04 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:13:29.846 ************************************ 00:13:29.846 END TEST bdev_fio_trim 00:13:29.846 ************************************ 00:13:29.846 11:25:04 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:13:29.846 11:25:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:13:29.846 11:25:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:29.846 /home/vagrant/spdk_repo/spdk 00:13:29.846 11:25:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:13:29.846 11:25:04 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:13:29.846 00:13:29.846 real 0m28.083s 00:13:29.846 user 3m18.935s 00:13:29.846 sys 0m6.477s 00:13:29.846 11:25:04 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.846 11:25:04 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:29.846 ************************************ 00:13:29.846 END TEST bdev_fio 00:13:29.846 ************************************ 00:13:29.846 11:25:04 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:29.846 11:25:04 
blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:29.846 11:25:04 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:29.846 11:25:04 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:13:29.846 11:25:04 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.846 11:25:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:29.846 ************************************ 00:13:29.846 START TEST bdev_verify 00:13:29.847 ************************************ 00:13:29.847 11:25:04 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:29.847 [2024-07-13 11:25:04.419886] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:29.847 [2024-07-13 11:25:04.420102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118446 ] 00:13:30.106 [2024-07-13 11:25:04.596194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:30.106 [2024-07-13 11:25:04.782408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.106 [2024-07-13 11:25:04.782406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.674 [2024-07-13 11:25:05.160288] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:30.674 [2024-07-13 11:25:05.160398] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:30.674 [2024-07-13 11:25:05.168228] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:30.674 [2024-07-13 11:25:05.168296] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:30.674 [2024-07-13 11:25:05.176256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:30.674 [2024-07-13 11:25:05.176388] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:30.674 [2024-07-13 11:25:05.176422] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:30.674 [2024-07-13 11:25:05.363535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:30.674 [2024-07-13 11:25:05.363656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.674 [2024-07-13 11:25:05.363702] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:30.674 [2024-07-13 11:25:05.363728] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.674 [2024-07-13 11:25:05.366381] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.674 [2024-07-13 11:25:05.366427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:31.240 Running I/O for 5 seconds... 
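For reference, the job file consumed by the TEST bdev_fio_trim run above is generated on the fly by blockdev.sh: only bdevs whose supported_io_types.unmap is true get a [job_...] section, and fio is then launched through the spdk_bdev ioengine with the ASan runtime preloaded (the ldd/grep/awk step visible in the log). A minimal bash sketch of those two steps follows; the bdevs variable, the append target and the fixed paths are illustrative assumptions, not lines taken verbatim from the script.

#!/usr/bin/env bash
# Sketch only -- variable names and paths below are illustrative assumptions.
spdk_dir=/home/vagrant/spdk_repo/spdk           # SPDK checkout used by this CI run
fio_cfg=$spdk_dir/test/bdev/bdev.fio            # fio job file being extended
plugin=$spdk_dir/build/fio/spdk_bdev            # fio ioengine plugin built by SPDK

# 1) One [job_<bdev>] section per unmap-capable bdev, using the same jq filter
#    that appears in the log above. $bdevs is assumed to hold the JSON objects
#    printed by bdev_get_bdevs (one object per array element).
for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
  {
    echo "[job_$b]"
    echo "filename=$b"
  } >> "$fio_cfg"
done

# 2) Resolve the ASan shared library the plugin links against and preload it
#    alongside the plugin, then run fio with the trim options shown in the log
#    (the verify_state_save option is omitted here for brevity).
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 "$fio_cfg" \
  --spdk_json_conf="$spdk_dir/test/bdev/bdev.json" \
  --aux-path="$spdk_dir/../output"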
00:13:36.507 00:13:36.507 Latency(us) 00:13:36.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.507 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x0 length 0x1000 00:13:36.507 Malloc0 : 5.16 1512.01 5.91 0.00 0.00 84558.52 577.16 158239.65 00:13:36.507 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x1000 length 0x1000 00:13:36.507 Malloc0 : 5.14 1468.15 5.73 0.00 0.00 86897.38 517.59 306946.79 00:13:36.507 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x0 length 0x800 00:13:36.507 Malloc1p0 : 5.17 767.89 3.00 0.00 0.00 166265.94 1899.05 155379.90 00:13:36.507 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x800 length 0x800 00:13:36.507 Malloc1p0 : 5.15 771.13 3.01 0.00 0.00 165220.95 2561.86 144894.14 00:13:36.507 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x0 length 0x800 00:13:36.507 Malloc1p1 : 5.17 767.43 3.00 0.00 0.00 166063.38 2591.65 155379.90 00:13:36.507 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x800 length 0x800 00:13:36.507 Malloc1p1 : 5.15 770.83 3.01 0.00 0.00 164977.66 2383.13 145847.39 00:13:36.507 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x0 length 0x200 00:13:36.507 Malloc2p0 : 5.17 767.18 3.00 0.00 0.00 165803.38 2427.81 152520.15 00:13:36.507 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x200 length 0x200 00:13:36.507 Malloc2p0 : 5.15 770.57 3.01 0.00 0.00 164695.91 2442.71 141081.13 00:13:36.507 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x0 length 0x200 00:13:36.507 Malloc2p1 : 5.17 766.92 3.00 0.00 0.00 165530.48 2457.60 148707.14 00:13:36.507 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x200 length 0x200 00:13:36.507 Malloc2p1 : 5.15 770.31 3.01 0.00 0.00 164435.84 2457.60 140127.88 00:13:36.507 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x0 length 0x200 00:13:36.507 Malloc2p2 : 5.18 766.66 2.99 0.00 0.00 165270.78 2457.60 146800.64 00:13:36.507 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x200 length 0x200 00:13:36.507 Malloc2p2 : 5.15 770.05 3.01 0.00 0.00 164167.21 2398.02 136314.88 00:13:36.507 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.507 Verification LBA range: start 0x0 length 0x200 00:13:36.508 Malloc2p3 : 5.18 766.41 2.99 0.00 0.00 165021.60 2412.92 142987.64 00:13:36.508 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x200 length 0x200 00:13:36.508 Malloc2p3 : 5.15 769.79 3.01 0.00 0.00 163887.36 2502.28 133455.13 00:13:36.508 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x0 length 0x200 00:13:36.508 Malloc2p4 : 5.18 766.16 2.99 0.00 0.00 164736.43 2681.02 
141081.13 00:13:36.508 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x200 length 0x200 00:13:36.508 Malloc2p4 : 5.16 769.51 3.01 0.00 0.00 163616.91 2561.86 131548.63 00:13:36.508 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x0 length 0x200 00:13:36.508 Malloc2p5 : 5.18 765.86 2.99 0.00 0.00 164473.44 2442.71 139174.63 00:13:36.508 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x200 length 0x200 00:13:36.508 Malloc2p5 : 5.16 769.25 3.00 0.00 0.00 163374.76 2502.28 130595.37 00:13:36.508 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x0 length 0x200 00:13:36.508 Malloc2p6 : 5.18 765.58 2.99 0.00 0.00 164226.29 2532.07 136314.88 00:13:36.508 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x200 length 0x200 00:13:36.508 Malloc2p6 : 5.16 768.98 3.00 0.00 0.00 163103.79 2472.49 125829.12 00:13:36.508 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x0 length 0x200 00:13:36.508 Malloc2p7 : 5.18 765.31 2.99 0.00 0.00 163964.02 2472.49 131548.63 00:13:36.508 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x200 length 0x200 00:13:36.508 Malloc2p7 : 5.16 768.73 3.00 0.00 0.00 162833.19 1936.29 121062.87 00:13:36.508 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x0 length 0x1000 00:13:36.508 TestPT : 5.19 763.85 2.98 0.00 0.00 163849.29 7804.74 129642.12 00:13:36.508 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x1000 length 0x1000 00:13:36.508 TestPT : 5.16 743.63 2.90 0.00 0.00 167995.64 31695.59 179211.17 00:13:36.508 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x0 length 0x2000 00:13:36.508 raid0 : 5.19 764.74 2.99 0.00 0.00 163392.88 1861.82 126782.37 00:13:36.508 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x2000 length 0x2000 00:13:36.508 raid0 : 5.17 767.90 3.00 0.00 0.00 162397.79 3008.70 113436.86 00:13:36.508 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x0 length 0x2000 00:13:36.508 concat0 : 5.19 764.33 2.99 0.00 0.00 163187.70 2993.80 123922.62 00:13:36.508 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x2000 length 0x2000 00:13:36.508 concat0 : 5.19 789.77 3.09 0.00 0.00 157594.33 2591.65 118203.11 00:13:36.508 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x0 length 0x1000 00:13:36.508 raid1 : 5.19 764.11 2.98 0.00 0.00 162883.80 3053.38 119156.36 00:13:36.508 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x1000 length 0x1000 00:13:36.508 raid1 : 5.19 789.43 3.08 0.00 0.00 157350.00 2398.02 123922.62 00:13:36.508 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x0 
length 0x4e2 00:13:36.508 AIO0 : 5.20 763.73 2.98 0.00 0.00 162556.59 1414.98 121539.49 00:13:36.508 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:36.508 Verification LBA range: start 0x4e2 length 0x4e2 00:13:36.508 AIO0 : 5.18 766.09 2.99 0.00 0.00 165689.11 1675.64 156333.15 00:13:36.508 =================================================================================================================== 00:13:36.508 Total : 26022.28 101.65 0.00 0.00 155002.40 517.59 306946.79 00:13:38.410 00:13:38.410 real 0m8.577s 00:13:38.410 user 0m15.417s 00:13:38.410 sys 0m0.649s 00:13:38.410 11:25:12 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:38.410 11:25:12 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:38.410 ************************************ 00:13:38.410 END TEST bdev_verify 00:13:38.410 ************************************ 00:13:38.410 11:25:12 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:38.410 11:25:12 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:38.410 11:25:12 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:13:38.410 11:25:12 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.410 11:25:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:38.410 ************************************ 00:13:38.410 START TEST bdev_verify_big_io 00:13:38.410 ************************************ 00:13:38.410 11:25:12 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:38.410 [2024-07-13 11:25:13.037164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
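The verify pass that just finished (TEST bdev_verify) is a single bdevperf invocation against the same JSON bdev configuration; rerunning it by hand amounts to the command below. The option values are copied from the command line recorded in the log, and the inline comments are informal glosses rather than authoritative bdevperf documentation (the bare -C flag is reproduced as-is, without interpretation).

# Sketch of the TEST bdev_verify invocation recorded above; paths match this workspace.
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

args=(
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json   # bdev config generated earlier in the run
  -q 128       # requested queue depth per job
  -o 4096      # 4 KiB I/O size
  -w verify    # verification workload (writes are read back and checked)
  -t 5         # run for 5 seconds, matching the "Running I/O for 5 seconds" line
  -C           # copied verbatim from the log
  -m 0x3       # core mask 0x3, i.e. cores 0 and 1 ("Total cores available: 2")
)
"$bdevperf" "${args[@]}"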
00:13:38.410 [2024-07-13 11:25:13.037400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118592 ] 00:13:38.668 [2024-07-13 11:25:13.213854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:38.668 [2024-07-13 11:25:13.403858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.668 [2024-07-13 11:25:13.403864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.235 [2024-07-13 11:25:13.776144] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:39.235 [2024-07-13 11:25:13.776232] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:39.235 [2024-07-13 11:25:13.784090] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:39.235 [2024-07-13 11:25:13.784138] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:39.235 [2024-07-13 11:25:13.792120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:39.235 [2024-07-13 11:25:13.792234] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:39.235 [2024-07-13 11:25:13.792261] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:39.494 [2024-07-13 11:25:13.986292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:39.494 [2024-07-13 11:25:13.986469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.494 [2024-07-13 11:25:13.986520] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:39.494 [2024-07-13 11:25:13.986580] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.494 [2024-07-13 11:25:13.989527] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.494 [2024-07-13 11:25:13.989573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:39.753 [2024-07-13 11:25:14.354404] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.357451] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.361350] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.365004] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.368055] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.371977] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.375005] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.378711] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.381739] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.385592] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.388543] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.392333] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.395276] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.399147] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.402875] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.405847] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:13:39.753 [2024-07-13 11:25:14.483123] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:39.753 [2024-07-13 11:25:14.489291] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:13:40.012 Running I/O for 5 seconds... 00:13:46.574 00:13:46.574 Latency(us) 00:13:46.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.574 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x0 length 0x100 00:13:46.574 Malloc0 : 5.38 380.64 23.79 0.00 0.00 332574.01 726.11 1265917.21 00:13:46.574 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x100 length 0x100 00:13:46.574 Malloc0 : 6.11 397.76 24.86 0.00 0.00 264194.67 662.81 364141.85 00:13:46.574 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x0 length 0x80 00:13:46.574 Malloc1p0 : 5.80 68.93 4.31 0.00 0.00 1763359.47 1243.69 2577590.46 00:13:46.574 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x80 length 0x80 00:13:46.574 Malloc1p0 : 5.92 153.38 9.59 0.00 0.00 794507.67 2249.08 1700599.62 00:13:46.574 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x0 length 0x80 00:13:46.574 Malloc1p1 : 5.80 68.92 4.31 0.00 0.00 1733102.58 1228.80 2486078.37 00:13:46.574 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x80 length 0x80 00:13:46.574 Malloc1p1 : 6.12 54.92 3.43 0.00 0.00 2196269.91 1303.27 3370695.21 00:13:46.574 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x0 length 0x20 00:13:46.574 Malloc2p0 : 5.61 57.09 3.57 0.00 0.00 526489.74 1064.96 838860.80 00:13:46.574 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x20 length 0x20 00:13:46.574 Malloc2p0 : 5.79 41.48 2.59 0.00 0.00 723286.33 577.16 1273543.21 00:13:46.574 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x0 length 0x20 00:13:46.574 Malloc2p1 : 5.61 57.07 3.57 0.00 0.00 524242.55 770.79 827421.79 00:13:46.574 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x20 length 0x20 00:13:46.574 Malloc2p1 : 5.79 41.47 2.59 0.00 0.00 719545.98 1035.17 1258291.20 00:13:46.574 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x0 length 0x20 00:13:46.574 Malloc2p2 : 5.61 57.05 3.57 0.00 0.00 522029.20 621.85 812169.77 00:13:46.574 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x20 length 0x20 00:13:46.574 Malloc2p2 : 5.79 41.46 2.59 0.00 0.00 715749.15 588.33 1243039.19 00:13:46.574 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
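The string of warnings above documents how bdevperf handles the larger 64 KiB I/O size of this pass: the requested queue depth of 128 is cut down to the number of I/O requests that can be submitted to each small bdev simultaneously, 32 for each of the 4 MiB Malloc2pX splits and 78 for AIO0. A quick, hedged way to see which bdevs are small enough to trigger this clamping is to list each bdev's capacity in 64 KiB units from bdev_get_bdevs output; the helper below only reports raw capacity (it does not reproduce bdevperf's exact limit calculation) and assumes a running SPDK application reachable over the default RPC socket.

# Hedged helper: capacity of every bdev expressed in 64 KiB I/O units.
# Assumes rpc.py can reach a running SPDK app on the default RPC socket.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '
  .[] | "\(.name)\t\(.num_blocks * .block_size / 65536 | floor) x 64KiB"'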
depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x0 length 0x20 00:13:46.574 Malloc2p3 : 5.61 57.04 3.56 0.00 0.00 519774.38 629.29 800730.76 00:13:46.574 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x20 length 0x20 00:13:46.574 Malloc2p3 : 5.79 41.45 2.59 0.00 0.00 711826.32 577.16 1227787.17 00:13:46.574 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x0 length 0x20 00:13:46.574 Malloc2p4 : 5.61 57.03 3.56 0.00 0.00 517571.47 621.85 789291.75 00:13:46.574 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x20 length 0x20 00:13:46.574 Malloc2p4 : 5.79 41.44 2.59 0.00 0.00 707803.80 577.16 1212535.16 00:13:46.574 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x0 length 0x20 00:13:46.574 Malloc2p5 : 5.61 57.01 3.56 0.00 0.00 515122.81 752.17 777852.74 00:13:46.574 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.574 Verification LBA range: start 0x20 length 0x20 00:13:46.574 Malloc2p5 : 5.79 41.43 2.59 0.00 0.00 704138.54 692.60 1197283.14 00:13:46.575 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x0 length 0x20 00:13:46.575 Malloc2p6 : 5.61 57.00 3.56 0.00 0.00 512645.70 618.12 766413.73 00:13:46.575 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x20 length 0x20 00:13:46.575 Malloc2p6 : 5.92 43.23 2.70 0.00 0.00 676045.39 584.61 1182031.13 00:13:46.575 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x0 length 0x20 00:13:46.575 Malloc2p7 : 5.62 56.99 3.56 0.00 0.00 510592.15 636.74 754974.72 00:13:46.575 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x20 length 0x20 00:13:46.575 Malloc2p7 : 5.92 43.22 2.70 0.00 0.00 672265.90 569.72 1166779.11 00:13:46.575 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x0 length 0x100 00:13:46.575 TestPT : 5.88 71.47 4.47 0.00 0.00 1592885.11 58863.24 2089525.99 00:13:46.575 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x100 length 0x100 00:13:46.575 TestPT : 6.11 55.01 3.44 0.00 0.00 2069357.36 127735.62 2791118.66 00:13:46.575 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x0 length 0x200 00:13:46.575 raid0 : 5.91 75.86 4.74 0.00 0.00 1475073.82 1325.61 2226794.12 00:13:46.575 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x200 length 0x200 00:13:46.575 raid0 : 6.11 57.62 3.60 0.00 0.00 1932117.04 1258.59 3019898.88 00:13:46.575 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x0 length 0x200 00:13:46.575 concat0 : 5.91 81.17 5.07 0.00 0.00 1366041.93 1310.72 2135282.04 00:13:46.575 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x200 length 0x200 00:13:46.575 concat0 : 6.06 60.69 3.79 0.00 0.00 1813615.13 
1288.38 2913134.78 00:13:46.575 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x0 length 0x100 00:13:46.575 raid1 : 5.88 87.07 5.44 0.00 0.00 1255933.68 1675.64 2059021.96 00:13:46.575 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x100 length 0x100 00:13:46.575 raid1 : 6.07 69.24 4.33 0.00 0.00 1561449.13 1765.00 2821622.69 00:13:46.575 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x0 length 0x4e 00:13:46.575 AIO0 : 5.91 96.76 6.05 0.00 0.00 683001.30 1422.43 1166779.11 00:13:46.575 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:13:46.575 Verification LBA range: start 0x4e length 0x4e 00:13:46.575 AIO0 : 6.11 64.80 4.05 0.00 0.00 1002405.81 1712.87 1708225.63 00:13:46.575 =================================================================================================================== 00:13:46.575 Total : 2635.70 164.73 0.00 0.00 847252.91 569.72 3370695.21 00:13:48.480 00:13:48.480 real 0m9.853s 00:13:48.480 user 0m17.978s 00:13:48.480 sys 0m0.637s 00:13:48.480 11:25:22 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.480 11:25:22 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.480 ************************************ 00:13:48.480 END TEST bdev_verify_big_io 00:13:48.480 ************************************ 00:13:48.480 11:25:22 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:48.480 11:25:22 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:48.480 11:25:22 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:48.480 11:25:22 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.480 11:25:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:48.480 ************************************ 00:13:48.480 START TEST bdev_write_zeroes 00:13:48.480 ************************************ 00:13:48.480 11:25:22 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:48.480 [2024-07-13 11:25:22.940306] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
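A note on the columns in these bdevperf result tables: MiB/s is just IOPS multiplied by the job's IO size, so the two columns can be cross-checked against each other. A hypothetical helper (not part of the test scripts) reproducing figures from the 65536-byte verify table above and from the 4096-byte write_zeroes table further down:

    # Reproduce the MiB/s column of a bdevperf table from IOPS and IO size.
    iops_to_mibs() {
        local iops=$1 io_size=$2
        awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f\n", i * s / 1048576 }'
    }
    iops_to_mibs 380.64 65536    # -> 23.79, the Malloc0 row of the verify table above
    iops_to_mibs 5574.03 4096    # -> 21.77, the Malloc0 row of the write_zeroes table below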
00:13:48.480 [2024-07-13 11:25:22.941130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118752 ] 00:13:48.480 [2024-07-13 11:25:23.107935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.738 [2024-07-13 11:25:23.309614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.996 [2024-07-13 11:25:23.679678] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.996 [2024-07-13 11:25:23.679768] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:13:48.996 [2024-07-13 11:25:23.687622] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.996 [2024-07-13 11:25:23.687671] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:13:48.996 [2024-07-13 11:25:23.695640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:48.996 [2024-07-13 11:25:23.695705] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:13:48.996 [2024-07-13 11:25:23.695758] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:13:49.255 [2024-07-13 11:25:23.893338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:13:49.255 [2024-07-13 11:25:23.893453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.255 [2024-07-13 11:25:23.893487] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:49.255 [2024-07-13 11:25:23.893550] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.255 [2024-07-13 11:25:23.896014] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.255 [2024-07-13 11:25:23.896062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:13:49.821 Running I/O for 1 seconds... 
00:13:50.759 00:13:50.759 Latency(us) 00:13:50.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.759 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc0 : 1.03 5574.03 21.77 0.00 0.00 22953.39 752.17 38606.66 00:13:50.759 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc1p0 : 1.03 5567.68 21.75 0.00 0.00 22931.48 983.04 37415.10 00:13:50.759 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc1p1 : 1.04 5561.76 21.73 0.00 0.00 22912.84 886.23 36700.16 00:13:50.759 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc2p0 : 1.04 5555.98 21.70 0.00 0.00 22897.76 960.70 35746.91 00:13:50.759 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc2p1 : 1.04 5550.37 21.68 0.00 0.00 22869.72 904.84 35746.91 00:13:50.759 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc2p2 : 1.04 5544.29 21.66 0.00 0.00 22849.02 938.36 36461.85 00:13:50.759 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc2p3 : 1.04 5538.60 21.64 0.00 0.00 22821.74 901.12 36938.47 00:13:50.759 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc2p4 : 1.04 5533.00 21.61 0.00 0.00 22805.50 960.70 37653.41 00:13:50.759 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc2p5 : 1.04 5527.41 21.59 0.00 0.00 22785.31 904.84 37653.41 00:13:50.759 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc2p6 : 1.04 5521.77 21.57 0.00 0.00 22764.21 953.25 36461.85 00:13:50.759 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 Malloc2p7 : 1.04 5516.20 21.55 0.00 0.00 22744.20 919.74 35270.28 00:13:50.759 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 TestPT : 1.05 5510.62 21.53 0.00 0.00 22723.45 953.25 33840.41 00:13:50.759 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 raid0 : 1.06 5570.09 21.76 0.00 0.00 22429.29 1534.14 33840.41 00:13:50.759 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 concat0 : 1.06 5563.57 21.73 0.00 0.00 22382.17 1482.01 33602.09 00:13:50.759 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 raid1 : 1.06 5555.36 21.70 0.00 0.00 22335.42 2457.60 33363.78 00:13:50.759 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:13:50.759 AIO0 : 1.06 5545.76 21.66 0.00 0.00 22267.33 1414.98 33363.78 00:13:50.759 =================================================================================================================== 00:13:50.759 Total : 88736.49 346.63 0.00 0.00 22715.07 752.17 38606.66 00:13:52.659 00:13:52.659 real 0m4.137s 00:13:52.659 user 0m3.411s 00:13:52.659 sys 0m0.530s 00:13:52.659 11:25:27 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:52.659 11:25:27 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:52.659 ************************************ 00:13:52.659 END TEST bdev_write_zeroes 00:13:52.659 ************************************ 00:13:52.659 11:25:27 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:13:52.659 11:25:27 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.659 11:25:27 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:52.659 11:25:27 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.659 11:25:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:52.659 ************************************ 00:13:52.659 START TEST bdev_json_nonenclosed 00:13:52.659 ************************************ 00:13:52.659 11:25:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:52.659 [2024-07-13 11:25:27.141227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:52.659 [2024-07-13 11:25:27.141692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118838 ] 00:13:52.659 [2024-07-13 11:25:27.311732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.918 [2024-07-13 11:25:27.478146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.918 [2024-07-13 11:25:27.478268] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:52.918 [2024-07-13 11:25:27.478325] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:52.918 [2024-07-13 11:25:27.478354] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:53.176 00:13:53.176 real 0m0.719s 00:13:53.176 user 0m0.478s 00:13:53.176 sys 0m0.136s 00:13:53.176 11:25:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:13:53.176 11:25:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.176 ************************************ 00:13:53.176 END TEST bdev_json_nonenclosed 00:13:53.176 ************************************ 00:13:53.176 11:25:27 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:53.176 11:25:27 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:13:53.176 11:25:27 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:13:53.176 11:25:27 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.176 11:25:27 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:13:53.176 11:25:27 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.176 11:25:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:53.176 ************************************ 00:13:53.176 START TEST bdev_json_nonarray 00:13:53.176 ************************************ 00:13:53.176 11:25:27 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:53.176 [2024-07-13 11:25:27.918735] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:53.176 [2024-07-13 11:25:27.918971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118869 ] 00:13:53.433 [2024-07-13 11:25:28.089155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.692 [2024-07-13 11:25:28.261821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.692 [2024-07-13 11:25:28.261959] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:53.692 [2024-07-13 11:25:28.262020] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:53.692 [2024-07-13 11:25:28.262049] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:53.951 00:13:53.951 real 0m0.731s 00:13:53.951 user 0m0.458s 00:13:53.951 sys 0m0.173s 00:13:53.951 11:25:28 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:13:53.951 ************************************ 00:13:53.951 END TEST bdev_json_nonarray 00:13:53.951 ************************************ 00:13:53.951 11:25:28 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.951 11:25:28 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:53.951 11:25:28 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:13:53.951 11:25:28 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:13:53.951 11:25:28 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:13:53.951 11:25:28 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:13:53.951 11:25:28 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:53.951 11:25:28 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.951 11:25:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:53.951 ************************************ 00:13:53.951 START TEST bdev_qos 00:13:53.951 ************************************ 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=118907 00:13:53.951 Process qos testing pid: 118907 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 118907' 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 118907 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 118907 ']' 00:13:53.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
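The two JSON negative tests above (bdev_json_nonenclosed and bdev_json_nonarray) share one pattern: bdevperf is pointed at a deliberately broken config, json_config rejects it ("not enclosed in {}" / "'subsystems' should be an array"), and the harness treats the resulting non-zero exit status (es=234 in both traces) as the pass condition. A simplified sketch of that pattern, with the run_test/xtrace plumbing omitted:

    # Expected-failure check, condensed from the traces above.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    if "$BDEVPERF" --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "ERROR: bdevperf accepted an invalid JSON config" >&2
        exit 1
    else
        es=$?    # 234 in the runs above
        echo "bdevperf rejected the config with status $es, as expected"
    fi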
00:13:53.951 11:25:28 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.951 11:25:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:54.210 [2024-07-13 11:25:28.712291] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:54.210 [2024-07-13 11:25:28.712505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118907 ] 00:13:54.210 [2024-07-13 11:25:28.885225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.468 [2024-07-13 11:25:29.091013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.036 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.036 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:13:55.036 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:55.036 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.036 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:55.294 Malloc_0 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.294 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:55.294 [ 00:13:55.294 { 00:13:55.294 "name": "Malloc_0", 00:13:55.294 "aliases": [ 00:13:55.294 "cc2ac11c-15f7-461b-80b4-37cca28adcad" 00:13:55.294 ], 00:13:55.294 "product_name": "Malloc disk", 00:13:55.294 "block_size": 512, 00:13:55.294 "num_blocks": 262144, 00:13:55.294 "uuid": "cc2ac11c-15f7-461b-80b4-37cca28adcad", 
00:13:55.294 "assigned_rate_limits": { 00:13:55.294 "rw_ios_per_sec": 0, 00:13:55.294 "rw_mbytes_per_sec": 0, 00:13:55.294 "r_mbytes_per_sec": 0, 00:13:55.294 "w_mbytes_per_sec": 0 00:13:55.294 }, 00:13:55.294 "claimed": false, 00:13:55.294 "zoned": false, 00:13:55.294 "supported_io_types": { 00:13:55.294 "read": true, 00:13:55.294 "write": true, 00:13:55.294 "unmap": true, 00:13:55.294 "flush": true, 00:13:55.294 "reset": true, 00:13:55.294 "nvme_admin": false, 00:13:55.294 "nvme_io": false, 00:13:55.295 "nvme_io_md": false, 00:13:55.295 "write_zeroes": true, 00:13:55.295 "zcopy": true, 00:13:55.295 "get_zone_info": false, 00:13:55.295 "zone_management": false, 00:13:55.295 "zone_append": false, 00:13:55.295 "compare": false, 00:13:55.295 "compare_and_write": false, 00:13:55.295 "abort": true, 00:13:55.295 "seek_hole": false, 00:13:55.295 "seek_data": false, 00:13:55.295 "copy": true, 00:13:55.295 "nvme_iov_md": false 00:13:55.295 }, 00:13:55.295 "memory_domains": [ 00:13:55.295 { 00:13:55.295 "dma_device_id": "system", 00:13:55.295 "dma_device_type": 1 00:13:55.295 }, 00:13:55.295 { 00:13:55.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.295 "dma_device_type": 2 00:13:55.295 } 00:13:55.295 ], 00:13:55.295 "driver_specific": {} 00:13:55.295 } 00:13:55.295 ] 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:55.295 Null_1 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:55.295 [ 00:13:55.295 { 00:13:55.295 "name": "Null_1", 00:13:55.295 "aliases": [ 00:13:55.295 "a4ba70ca-98fb-4aae-a89c-84d7fac9da68" 00:13:55.295 ], 00:13:55.295 "product_name": "Null disk", 00:13:55.295 "block_size": 512, 00:13:55.295 "num_blocks": 262144, 00:13:55.295 "uuid": 
"a4ba70ca-98fb-4aae-a89c-84d7fac9da68", 00:13:55.295 "assigned_rate_limits": { 00:13:55.295 "rw_ios_per_sec": 0, 00:13:55.295 "rw_mbytes_per_sec": 0, 00:13:55.295 "r_mbytes_per_sec": 0, 00:13:55.295 "w_mbytes_per_sec": 0 00:13:55.295 }, 00:13:55.295 "claimed": false, 00:13:55.295 "zoned": false, 00:13:55.295 "supported_io_types": { 00:13:55.295 "read": true, 00:13:55.295 "write": true, 00:13:55.295 "unmap": false, 00:13:55.295 "flush": false, 00:13:55.295 "reset": true, 00:13:55.295 "nvme_admin": false, 00:13:55.295 "nvme_io": false, 00:13:55.295 "nvme_io_md": false, 00:13:55.295 "write_zeroes": true, 00:13:55.295 "zcopy": false, 00:13:55.295 "get_zone_info": false, 00:13:55.295 "zone_management": false, 00:13:55.295 "zone_append": false, 00:13:55.295 "compare": false, 00:13:55.295 "compare_and_write": false, 00:13:55.295 "abort": true, 00:13:55.295 "seek_hole": false, 00:13:55.295 "seek_data": false, 00:13:55.295 "copy": false, 00:13:55.295 "nvme_iov_md": false 00:13:55.295 }, 00:13:55.295 "driver_specific": {} 00:13:55.295 } 00:13:55.295 ] 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:13:55.295 11:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:13:55.295 Running I/O for 60 seconds... 
00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 82236.36 328945.44 0.00 0.00 333824.00 0.00 0.00 ' 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=82236.36 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 82236 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=82236 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=20000 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 20000 -gt 1000 ']' 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 20000 IOPS Malloc_0 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.561 11:25:35 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:00.561 ************************************ 00:14:00.561 START TEST bdev_qos_iops 00:14:00.561 ************************************ 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 20000 IOPS Malloc_0 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=20000 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:00.561 11:25:35 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 19996.99 79987.95 0.00 0.00 81440.00 0.00 0.00 ' 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=19996.99 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@385 -- # echo 19996 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=19996 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=18000 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=22000 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 19996 -lt 18000 ']' 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 19996 -gt 22000 ']' 00:14:05.828 00:14:05.828 real 0m5.201s 00:14:05.828 user 0m0.106s 00:14:05.828 sys 0m0.028s 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:05.828 ************************************ 00:14:05.828 END TEST bdev_qos_iops 00:14:05.828 ************************************ 00:14:05.828 11:25:40 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:14:05.828 11:25:40 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:14:05.828 11:25:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:14:05.828 11:25:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:05.828 11:25:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:14:05.828 11:25:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:05.828 11:25:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:05.828 11:25:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:14:05.828 11:25:40 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 30602.60 122410.40 0.00 0.00 124928.00 0.00 0.00 ' 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=124928.00 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 124928 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=124928 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=12 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 12 -lt 2 ']' 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 
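Putting the bdev_qos traces together: bdevperf was started with -z, so the whole suite is driven over RPC. The two bdevs are created, perform_tests launches the 60-second randread job, the unthrottled rate is read back through scripts/iostat.py (82236 IOPS on Malloc_0 above; roughly 125 MB/s on Null_1 for the bandwidth pass), a much lower cap is then applied with bdev_set_qos_limit, and each subtest accepts the result only if the measured rate lands within about ±10% of that cap. A condensed sketch; the real run_qos_test and get_io_result helpers carry more plumbing than shown here:

    # RPC sequence and acceptance window, condensed from the xtrace above.
    rpc_cmd bdev_malloc_create -b Malloc_0 128 512    # 262144 blocks of 512 B, as in the JSON dump
    rpc_cmd bdev_null_create Null_1 128 512
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
    rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0    # cap derived from the 82236 IOPS baseline

    # The bounds match the lower_limit/upper_limit values in the trace
    # (18000..22000 for the 20000 IOPS cap, integer arithmetic).
    check_qos_result() {
        local qos_limit=$1 qos_result=$2
        local lower_limit=$((qos_limit * 9 / 10))
        local upper_limit=$((qos_limit * 11 / 10))
        [ "$qos_result" -ge "$lower_limit" ] && [ "$qos_result" -le "$upper_limit" ]
    }
    check_qos_result 20000 19996 && echo "IOPS within tolerance"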
00:14:11.092 11:25:45 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.092 11:25:45 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:11.092 ************************************ 00:14:11.092 START TEST bdev_qos_bw 00:14:11.092 ************************************ 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 12 BANDWIDTH Null_1 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=12 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:14:11.092 11:25:45 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 3072.01 12288.02 0.00 0.00 12472.00 0.00 0.00 ' 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=12472.00 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 12472 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=12472 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=12288 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=11059 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=13516 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 12472 -lt 11059 ']' 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 12472 -gt 13516 ']' 00:14:16.359 00:14:16.359 real 0m5.231s 00:14:16.359 user 0m0.112s 00:14:16.359 sys 0m0.022s 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:14:16.359 ************************************ 00:14:16.359 END TEST bdev_qos_bw 00:14:16.359 ************************************ 00:14:16.359 11:25:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:14:16.359 11:25:50 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:14:16.359 11:25:50 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.359 11:25:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:16.359 11:25:50 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.359 11:25:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:14:16.359 11:25:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:16.359 11:25:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.359 11:25:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:16.359 ************************************ 00:14:16.359 START TEST bdev_qos_ro_bw 00:14:16.359 ************************************ 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:14:16.359 11:25:50 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:14:21.623 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.91 2047.65 0.00 0.00 2068.00 0.00 0.00 ' 00:14:21.623 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2068.00 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2068 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2068 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -lt 1843 ']' 00:14:21.624 11:25:56 
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -gt 2252 ']' 00:14:21.624 00:14:21.624 real 0m5.163s 00:14:21.624 user 0m0.110s 00:14:21.624 sys 0m0.023s 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:21.624 11:25:56 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:14:21.624 ************************************ 00:14:21.624 END TEST bdev_qos_ro_bw 00:14:21.624 ************************************ 00:14:21.624 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:14:21.624 11:25:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:14:21.624 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.624 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:22.189 00:14:22.189 Latency(us) 00:14:22.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.189 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:22.189 Malloc_0 : 26.67 27667.40 108.08 0.00 0.00 9168.27 1846.92 503316.48 00:14:22.189 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:22.189 Null_1 : 26.85 28734.49 112.24 0.00 0.00 8891.56 718.66 176351.42 00:14:22.189 =================================================================================================================== 00:14:22.189 Total : 56401.89 220.32 0.00 0.00 9026.84 718.66 503316.48 00:14:22.189 0 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 118907 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 118907 ']' 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 118907 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 118907 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 118907' 00:14:22.189 killing process with pid 118907 00:14:22.189 Received shutdown signal, test time was about 26.883161 seconds 00:14:22.189 00:14:22.189 Latency(us) 00:14:22.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:22.189 =================================================================================================================== 00:14:22.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:22.189 11:25:56 
blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 118907 00:14:22.189 11:25:56 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 118907 00:14:23.564 11:25:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:14:23.564 00:14:23.564 real 0m29.268s 00:14:23.564 user 0m29.973s 00:14:23.564 sys 0m0.601s 00:14:23.564 11:25:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.564 11:25:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:14:23.564 ************************************ 00:14:23.564 END TEST bdev_qos 00:14:23.564 ************************************ 00:14:23.564 11:25:57 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:23.564 11:25:57 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:14:23.564 11:25:57 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:23.564 11:25:57 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.564 11:25:57 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:23.564 ************************************ 00:14:23.564 START TEST bdev_qd_sampling 00:14:23.564 ************************************ 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=119428 00:14:23.564 Process bdev QD sampling period testing pid: 119428 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 119428' 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 119428 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 119428 ']' 00:14:23.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:23.564 11:25:57 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:14:23.564 [2024-07-13 11:25:58.013787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:23.564 [2024-07-13 11:25:58.014189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119428 ] 00:14:23.565 [2024-07-13 11:25:58.176253] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.823 [2024-07-13 11:25:58.399450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.823 [2024-07-13 11:25:58.399449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.389 11:25:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:24.389 11:25:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:14:24.389 11:25:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:14:24.389 11:25:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.389 11:25:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:24.389 Malloc_QD 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:24.389 [ 00:14:24.389 { 00:14:24.389 "name": "Malloc_QD", 00:14:24.389 "aliases": [ 00:14:24.389 "6a9e6d02-3e70-4552-bcbf-6486dc235103" 00:14:24.389 ], 00:14:24.389 "product_name": "Malloc disk", 00:14:24.389 "block_size": 512, 00:14:24.389 "num_blocks": 262144, 00:14:24.389 "uuid": "6a9e6d02-3e70-4552-bcbf-6486dc235103", 00:14:24.389 "assigned_rate_limits": { 00:14:24.389 "rw_ios_per_sec": 0, 00:14:24.389 "rw_mbytes_per_sec": 0, 00:14:24.389 "r_mbytes_per_sec": 0, 00:14:24.389 "w_mbytes_per_sec": 0 00:14:24.389 }, 00:14:24.389 "claimed": false, 00:14:24.389 "zoned": false, 00:14:24.389 "supported_io_types": { 00:14:24.389 "read": true, 00:14:24.389 "write": true, 00:14:24.389 "unmap": true, 00:14:24.389 "flush": true, 00:14:24.389 "reset": true, 00:14:24.389 "nvme_admin": 
false, 00:14:24.389 "nvme_io": false, 00:14:24.389 "nvme_io_md": false, 00:14:24.389 "write_zeroes": true, 00:14:24.389 "zcopy": true, 00:14:24.389 "get_zone_info": false, 00:14:24.389 "zone_management": false, 00:14:24.389 "zone_append": false, 00:14:24.389 "compare": false, 00:14:24.389 "compare_and_write": false, 00:14:24.389 "abort": true, 00:14:24.389 "seek_hole": false, 00:14:24.389 "seek_data": false, 00:14:24.389 "copy": true, 00:14:24.389 "nvme_iov_md": false 00:14:24.389 }, 00:14:24.389 "memory_domains": [ 00:14:24.389 { 00:14:24.389 "dma_device_id": "system", 00:14:24.389 "dma_device_type": 1 00:14:24.389 }, 00:14:24.389 { 00:14:24.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.389 "dma_device_type": 2 00:14:24.389 } 00:14:24.389 ], 00:14:24.389 "driver_specific": {} 00:14:24.389 } 00:14:24.389 ] 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:14:24.389 11:25:59 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:24.646 Running I/O for 5 seconds... 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:14:26.548 "tick_rate": 2200000000, 00:14:26.548 "ticks": 1783304841456, 00:14:26.548 "bdevs": [ 00:14:26.548 { 00:14:26.548 "name": "Malloc_QD", 00:14:26.548 "bytes_read": 654348800, 00:14:26.548 "num_read_ops": 159747, 00:14:26.548 "bytes_written": 0, 00:14:26.548 "num_write_ops": 0, 00:14:26.548 "bytes_unmapped": 0, 00:14:26.548 "num_unmap_ops": 0, 00:14:26.548 "bytes_copied": 0, 00:14:26.548 "num_copy_ops": 0, 00:14:26.548 "read_latency_ticks": 2189334610457, 00:14:26.548 "max_read_latency_ticks": 29856160, 00:14:26.548 "min_read_latency_ticks": 278812, 00:14:26.548 "write_latency_ticks": 0, 00:14:26.548 "max_write_latency_ticks": 0, 00:14:26.548 "min_write_latency_ticks": 0, 00:14:26.548 "unmap_latency_ticks": 0, 00:14:26.548 "max_unmap_latency_ticks": 0, 00:14:26.548 
"min_unmap_latency_ticks": 0, 00:14:26.548 "copy_latency_ticks": 0, 00:14:26.548 "max_copy_latency_ticks": 0, 00:14:26.548 "min_copy_latency_ticks": 0, 00:14:26.548 "io_error": {}, 00:14:26.548 "queue_depth_polling_period": 10, 00:14:26.548 "queue_depth": 512, 00:14:26.548 "io_time": 20, 00:14:26.548 "weighted_io_time": 10240 00:14:26.548 } 00:14:26.548 ] 00:14:26.548 }' 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.548 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:26.548 00:14:26.548 Latency(us) 00:14:26.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.548 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:26.548 Malloc_QD : 2.04 40102.36 156.65 0.00 0.00 6368.08 1489.45 13583.83 00:14:26.548 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:26.548 Malloc_QD : 2.04 41299.17 161.32 0.00 0.00 6183.52 1139.43 9115.46 00:14:26.548 =================================================================================================================== 00:14:26.548 Total : 81401.53 317.97 0.00 0.00 6274.37 1139.43 13583.83 00:14:26.807 0 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 119428 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 119428 ']' 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 119428 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119428 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:26.807 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119428' 00:14:26.807 killing process with pid 119428 00:14:26.807 Received shutdown signal, test time was about 2.172237 seconds 00:14:26.807 00:14:26.807 Latency(us) 00:14:26.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.808 =================================================================================================================== 00:14:26.808 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.808 11:26:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 119428 00:14:26.808 11:26:01 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 119428 00:14:28.182 11:26:02 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:14:28.182 ************************************ 00:14:28.182 END TEST bdev_qd_sampling 00:14:28.182 ************************************ 00:14:28.182 00:14:28.182 real 0m4.600s 00:14:28.182 user 0m8.587s 00:14:28.182 sys 0m0.372s 00:14:28.182 11:26:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:28.182 11:26:02 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:14:28.182 11:26:02 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:28.182 11:26:02 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:14:28.182 11:26:02 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:28.182 11:26:02 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:28.182 11:26:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:28.182 ************************************ 00:14:28.182 START TEST bdev_error 00:14:28.182 ************************************ 00:14:28.182 11:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:14:28.182 11:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:14:28.182 11:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:14:28.182 11:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:14:28.182 11:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=119522 00:14:28.182 11:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 119522' 00:14:28.182 Process error testing pid: 119522 00:14:28.182 11:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 119522 00:14:28.182 11:26:02 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:14:28.182 11:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 119522 ']' 00:14:28.182 11:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.182 11:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:28.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.182 11:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.182 11:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:28.182 11:26:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:28.182 [2024-07-13 11:26:02.685484] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
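The bdevperf process above was started with -z, so it idles until it is configured and kicked off over JSON-RPC; the trace that follows does exactly that. By hand, the same error-injection stack could be set up roughly as follows (a sketch only; it assumes the default /var/tmp/spdk.sock socket and repo-relative paths, and mirrors the rpc_cmd calls visible in the trace below):

scripts/rpc.py bdev_malloc_create -b Dev_1 128 512                 # 128 MiB base bdev, 512 B blocks (262144 blocks)
scripts/rpc.py bdev_error_create Dev_1                             # error bdev EE_Dev_1 stacked on Dev_1
scripts/rpc.py bdev_malloc_create -b Dev_2 128 512                 # second target, no error injection
scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os of any type
examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests              # start the queued randread job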
00:14:28.182 [2024-07-13 11:26:02.686372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119522 ] 00:14:28.182 [2024-07-13 11:26:02.846004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.439 [2024-07-13 11:26:03.012137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.002 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.002 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:14:29.002 11:26:03 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:29.002 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.002 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:29.258 Dev_1 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.258 11:26:03 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.258 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:29.258 [ 00:14:29.258 { 00:14:29.258 "name": "Dev_1", 00:14:29.258 "aliases": [ 00:14:29.258 "17f51252-acf9-491c-b8cc-3538f98b474e" 00:14:29.258 ], 00:14:29.258 "product_name": "Malloc disk", 00:14:29.258 "block_size": 512, 00:14:29.258 "num_blocks": 262144, 00:14:29.259 "uuid": "17f51252-acf9-491c-b8cc-3538f98b474e", 00:14:29.259 "assigned_rate_limits": { 00:14:29.259 "rw_ios_per_sec": 0, 00:14:29.259 "rw_mbytes_per_sec": 0, 00:14:29.259 "r_mbytes_per_sec": 0, 00:14:29.259 "w_mbytes_per_sec": 0 00:14:29.259 }, 00:14:29.259 "claimed": false, 00:14:29.259 "zoned": false, 00:14:29.259 "supported_io_types": { 00:14:29.259 "read": true, 00:14:29.259 "write": true, 00:14:29.259 "unmap": true, 00:14:29.259 "flush": true, 00:14:29.259 "reset": true, 00:14:29.259 "nvme_admin": false, 00:14:29.259 "nvme_io": false, 00:14:29.259 "nvme_io_md": false, 00:14:29.259 "write_zeroes": true, 00:14:29.259 "zcopy": true, 00:14:29.259 "get_zone_info": false, 00:14:29.259 "zone_management": false, 00:14:29.259 "zone_append": false, 
00:14:29.259 "compare": false, 00:14:29.259 "compare_and_write": false, 00:14:29.259 "abort": true, 00:14:29.259 "seek_hole": false, 00:14:29.259 "seek_data": false, 00:14:29.259 "copy": true, 00:14:29.259 "nvme_iov_md": false 00:14:29.259 }, 00:14:29.259 "memory_domains": [ 00:14:29.259 { 00:14:29.259 "dma_device_id": "system", 00:14:29.259 "dma_device_type": 1 00:14:29.259 }, 00:14:29.259 { 00:14:29.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.259 "dma_device_type": 2 00:14:29.259 } 00:14:29.259 ], 00:14:29.259 "driver_specific": {} 00:14:29.259 } 00:14:29.259 ] 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:14:29.259 11:26:03 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:29.259 true 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.259 11:26:03 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:29.259 Dev_2 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.259 11:26:03 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:29.259 [ 00:14:29.259 { 00:14:29.259 "name": "Dev_2", 00:14:29.259 "aliases": [ 00:14:29.259 "4f611d0e-d212-4eb3-8252-7b81eee1c351" 00:14:29.259 ], 00:14:29.259 "product_name": "Malloc disk", 00:14:29.259 "block_size": 512, 00:14:29.259 "num_blocks": 262144, 00:14:29.259 "uuid": "4f611d0e-d212-4eb3-8252-7b81eee1c351", 00:14:29.259 "assigned_rate_limits": { 00:14:29.259 "rw_ios_per_sec": 0, 00:14:29.259 "rw_mbytes_per_sec": 0, 00:14:29.259 "r_mbytes_per_sec": 0, 00:14:29.259 "w_mbytes_per_sec": 0 00:14:29.259 }, 00:14:29.259 "claimed": 
false, 00:14:29.259 "zoned": false, 00:14:29.259 "supported_io_types": { 00:14:29.259 "read": true, 00:14:29.259 "write": true, 00:14:29.259 "unmap": true, 00:14:29.259 "flush": true, 00:14:29.259 "reset": true, 00:14:29.259 "nvme_admin": false, 00:14:29.259 "nvme_io": false, 00:14:29.259 "nvme_io_md": false, 00:14:29.259 "write_zeroes": true, 00:14:29.259 "zcopy": true, 00:14:29.259 "get_zone_info": false, 00:14:29.259 "zone_management": false, 00:14:29.259 "zone_append": false, 00:14:29.259 "compare": false, 00:14:29.259 "compare_and_write": false, 00:14:29.259 "abort": true, 00:14:29.259 "seek_hole": false, 00:14:29.259 "seek_data": false, 00:14:29.259 "copy": true, 00:14:29.259 "nvme_iov_md": false 00:14:29.259 }, 00:14:29.259 "memory_domains": [ 00:14:29.259 { 00:14:29.259 "dma_device_id": "system", 00:14:29.259 "dma_device_type": 1 00:14:29.259 }, 00:14:29.259 { 00:14:29.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.259 "dma_device_type": 2 00:14:29.259 } 00:14:29.259 ], 00:14:29.259 "driver_specific": {} 00:14:29.259 } 00:14:29.259 ] 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:14:29.259 11:26:03 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:29.259 11:26:03 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.259 11:26:03 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:14:29.259 11:26:03 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:29.259 Running I/O for 5 seconds... 00:14:30.192 11:26:04 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 119522 00:14:30.192 Process is existed as continue on error is set. Pid: 119522 00:14:30.192 11:26:04 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 119522' 00:14:30.192 11:26:04 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:14:30.192 11:26:04 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.192 11:26:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:30.192 11:26:04 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.192 11:26:04 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:14:30.192 11:26:04 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.192 11:26:04 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:30.449 Timeout while waiting for response: 00:14:30.449 00:14:30.449 00:14:30.449 11:26:05 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.449 11:26:05 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:14:34.633 00:14:34.633 Latency(us) 00:14:34.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.633 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:34.633 EE_Dev_1 : 0.93 46034.03 179.82 5.38 0.00 344.86 131.26 778.24 00:14:34.633 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:34.633 Dev_2 : 5.00 95434.45 372.79 0.00 0.00 165.05 77.73 255471.24 00:14:34.633 =================================================================================================================== 00:14:34.633 Total : 141468.48 552.61 5.38 0.00 179.85 77.73 255471.24 00:14:35.567 11:26:10 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 119522 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 119522 ']' 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 119522 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119522 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119522' 00:14:35.567 killing process with pid 119522 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 119522 00:14:35.567 11:26:10 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 119522 00:14:35.567 Received shutdown signal, test time was about 5.000000 seconds 00:14:35.567 00:14:35.567 Latency(us) 00:14:35.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.567 =================================================================================================================== 00:14:35.567 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:36.945 11:26:11 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=119650 00:14:36.945 Process error testing pid: 119650 00:14:36.945 11:26:11 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 119650' 00:14:36.945 11:26:11 
blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 119650 00:14:36.945 11:26:11 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 119650 ']' 00:14:36.945 11:26:11 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.945 11:26:11 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:36.945 11:26:11 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.945 11:26:11 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:36.945 11:26:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:36.945 11:26:11 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:14:36.945 [2024-07-13 11:26:11.436501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:36.945 [2024-07-13 11:26:11.436915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119650 ] 00:14:36.945 [2024-07-13 11:26:11.588396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.203 [2024-07-13 11:26:11.753609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:14:37.770 11:26:12 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:37.770 Dev_1 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.770 11:26:12 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:14:37.770 11:26:12 
blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:37.770 [ 00:14:37.770 { 00:14:37.770 "name": "Dev_1", 00:14:37.770 "aliases": [ 00:14:37.770 "0481a5d7-c002-43f1-b295-99539821f18b" 00:14:37.770 ], 00:14:37.770 "product_name": "Malloc disk", 00:14:37.770 "block_size": 512, 00:14:37.770 "num_blocks": 262144, 00:14:37.770 "uuid": "0481a5d7-c002-43f1-b295-99539821f18b", 00:14:37.770 "assigned_rate_limits": { 00:14:37.770 "rw_ios_per_sec": 0, 00:14:37.770 "rw_mbytes_per_sec": 0, 00:14:37.770 "r_mbytes_per_sec": 0, 00:14:37.770 "w_mbytes_per_sec": 0 00:14:37.770 }, 00:14:37.770 "claimed": false, 00:14:37.770 "zoned": false, 00:14:37.770 "supported_io_types": { 00:14:37.770 "read": true, 00:14:37.770 "write": true, 00:14:37.770 "unmap": true, 00:14:37.770 "flush": true, 00:14:37.770 "reset": true, 00:14:37.770 "nvme_admin": false, 00:14:37.770 "nvme_io": false, 00:14:37.770 "nvme_io_md": false, 00:14:37.770 "write_zeroes": true, 00:14:37.770 "zcopy": true, 00:14:37.770 "get_zone_info": false, 00:14:37.770 "zone_management": false, 00:14:37.770 "zone_append": false, 00:14:37.770 "compare": false, 00:14:37.770 "compare_and_write": false, 00:14:37.770 "abort": true, 00:14:37.770 "seek_hole": false, 00:14:37.770 "seek_data": false, 00:14:37.770 "copy": true, 00:14:37.770 "nvme_iov_md": false 00:14:37.770 }, 00:14:37.770 "memory_domains": [ 00:14:37.770 { 00:14:37.770 "dma_device_id": "system", 00:14:37.770 "dma_device_type": 1 00:14:37.770 }, 00:14:37.770 { 00:14:37.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.770 "dma_device_type": 2 00:14:37.770 } 00:14:37.770 ], 00:14:37.770 "driver_specific": {} 00:14:37.770 } 00:14:37.770 ] 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:14:37.770 11:26:12 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.770 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:38.029 true 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.029 11:26:12 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:38.029 Dev_2 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.029 11:26:12 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@902 
-- # rpc_cmd bdev_wait_for_examine 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:38.029 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:38.030 [ 00:14:38.030 { 00:14:38.030 "name": "Dev_2", 00:14:38.030 "aliases": [ 00:14:38.030 "c3a57642-b45a-46b1-9796-f169ef36e692" 00:14:38.030 ], 00:14:38.030 "product_name": "Malloc disk", 00:14:38.030 "block_size": 512, 00:14:38.030 "num_blocks": 262144, 00:14:38.030 "uuid": "c3a57642-b45a-46b1-9796-f169ef36e692", 00:14:38.030 "assigned_rate_limits": { 00:14:38.030 "rw_ios_per_sec": 0, 00:14:38.030 "rw_mbytes_per_sec": 0, 00:14:38.030 "r_mbytes_per_sec": 0, 00:14:38.030 "w_mbytes_per_sec": 0 00:14:38.030 }, 00:14:38.030 "claimed": false, 00:14:38.030 "zoned": false, 00:14:38.030 "supported_io_types": { 00:14:38.030 "read": true, 00:14:38.030 "write": true, 00:14:38.030 "unmap": true, 00:14:38.030 "flush": true, 00:14:38.030 "reset": true, 00:14:38.030 "nvme_admin": false, 00:14:38.030 "nvme_io": false, 00:14:38.030 "nvme_io_md": false, 00:14:38.030 "write_zeroes": true, 00:14:38.030 "zcopy": true, 00:14:38.030 "get_zone_info": false, 00:14:38.030 "zone_management": false, 00:14:38.030 "zone_append": false, 00:14:38.030 "compare": false, 00:14:38.030 "compare_and_write": false, 00:14:38.030 "abort": true, 00:14:38.030 "seek_hole": false, 00:14:38.030 "seek_data": false, 00:14:38.030 "copy": true, 00:14:38.030 "nvme_iov_md": false 00:14:38.030 }, 00:14:38.030 "memory_domains": [ 00:14:38.030 { 00:14:38.030 "dma_device_id": "system", 00:14:38.030 "dma_device_type": 1 00:14:38.030 }, 00:14:38.030 { 00:14:38.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.030 "dma_device_type": 2 00:14:38.030 } 00:14:38.030 ], 00:14:38.030 "driver_specific": {} 00:14:38.030 } 00:14:38.030 ] 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:14:38.030 11:26:12 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.030 11:26:12 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 119650 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 119650 00:14:38.030 11:26:12 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:14:38.030 11:26:12 blockdev_general.bdev_error -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.030 11:26:12 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 119650 00:14:38.030 Running I/O for 5 seconds... 00:14:38.030 task offset: 259952 on job bdev=EE_Dev_1 fails 00:14:38.030 00:14:38.030 Latency(us) 00:14:38.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.030 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:38.030 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:14:38.030 EE_Dev_1 : 0.00 30598.05 119.52 6954.10 0.00 342.49 128.47 610.68 00:14:38.030 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:14:38.030 Dev_2 : 0.00 21828.10 85.27 0.00 0.00 504.58 121.95 927.19 00:14:38.030 =================================================================================================================== 00:14:38.030 Total : 52426.16 204.79 6954.10 0.00 430.40 121.95 927.19 00:14:38.030 [2024-07-13 11:26:12.742878] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:38.030 request: 00:14:38.030 { 00:14:38.030 "method": "perform_tests", 00:14:38.030 "req_id": 1 00:14:38.030 } 00:14:38.030 Got JSON-RPC error response 00:14:38.030 response: 00:14:38.030 { 00:14:38.030 "code": -32603, 00:14:38.030 "message": "bdevperf failed with error Operation not permitted" 00:14:38.030 } 00:14:39.931 11:26:14 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:14:39.931 11:26:14 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:39.931 11:26:14 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:14:39.931 11:26:14 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:14:39.931 11:26:14 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:14:39.931 11:26:14 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:39.931 00:14:39.931 real 0m11.559s 00:14:39.931 user 0m11.687s 00:14:39.931 sys 0m0.816s 00:14:39.931 11:26:14 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:39.931 11:26:14 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:14:39.931 ************************************ 00:14:39.931 END TEST bdev_error 00:14:39.931 ************************************ 00:14:39.931 11:26:14 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:39.931 11:26:14 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:14:39.931 11:26:14 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:39.931 11:26:14 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.931 11:26:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:39.931 ************************************ 00:14:39.931 START TEST bdev_stat 00:14:39.931 ************************************ 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=119708 00:14:39.931 Process 
Bdev IO statistics testing pid: 119708 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 119708' 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 119708 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 119708 ']' 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.931 11:26:14 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:39.931 [2024-07-13 11:26:14.306920] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:39.931 [2024-07-13 11:26:14.307128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119708 ] 00:14:39.931 [2024-07-13 11:26:14.489081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:40.190 [2024-07-13 11:26:14.733583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.190 [2024-07-13 11:26:14.733591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:40.758 Malloc_STAT 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:14:40.758 
11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:40.758 [ 00:14:40.758 { 00:14:40.758 "name": "Malloc_STAT", 00:14:40.758 "aliases": [ 00:14:40.758 "a8d0eb7e-ddd6-483c-bf01-7d781adb1a6a" 00:14:40.758 ], 00:14:40.758 "product_name": "Malloc disk", 00:14:40.758 "block_size": 512, 00:14:40.758 "num_blocks": 262144, 00:14:40.758 "uuid": "a8d0eb7e-ddd6-483c-bf01-7d781adb1a6a", 00:14:40.758 "assigned_rate_limits": { 00:14:40.758 "rw_ios_per_sec": 0, 00:14:40.758 "rw_mbytes_per_sec": 0, 00:14:40.758 "r_mbytes_per_sec": 0, 00:14:40.758 "w_mbytes_per_sec": 0 00:14:40.758 }, 00:14:40.758 "claimed": false, 00:14:40.758 "zoned": false, 00:14:40.758 "supported_io_types": { 00:14:40.758 "read": true, 00:14:40.758 "write": true, 00:14:40.758 "unmap": true, 00:14:40.758 "flush": true, 00:14:40.758 "reset": true, 00:14:40.758 "nvme_admin": false, 00:14:40.758 "nvme_io": false, 00:14:40.758 "nvme_io_md": false, 00:14:40.758 "write_zeroes": true, 00:14:40.758 "zcopy": true, 00:14:40.758 "get_zone_info": false, 00:14:40.758 "zone_management": false, 00:14:40.758 "zone_append": false, 00:14:40.758 "compare": false, 00:14:40.758 "compare_and_write": false, 00:14:40.758 "abort": true, 00:14:40.758 "seek_hole": false, 00:14:40.758 "seek_data": false, 00:14:40.758 "copy": true, 00:14:40.758 "nvme_iov_md": false 00:14:40.758 }, 00:14:40.758 "memory_domains": [ 00:14:40.758 { 00:14:40.758 "dma_device_id": "system", 00:14:40.758 "dma_device_type": 1 00:14:40.758 }, 00:14:40.758 { 00:14:40.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.758 "dma_device_type": 2 00:14:40.758 } 00:14:40.758 ], 00:14:40.758 "driver_specific": {} 00:14:40.758 } 00:14:40.758 ] 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:14:40.758 11:26:15 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:41.017 Running I/O for 10 seconds... 
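While the 10-second randread job runs, the trace below polls the bdev's I/O statistics over JSON-RPC, first aggregated and then per channel, and extracts the read counters with jq. A hand-run equivalent would look roughly like this (a sketch; it assumes bdevperf is listening on the default /var/tmp/spdk.sock and that jq is installed):

scripts/rpc.py bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops'       # aggregate read count
scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c | jq -r '.channels[].num_read_ops'  # one entry per I/O channel (core)

The test samples twice, a couple of seconds apart, and checks that the per-channel counts sum to a value between the two aggregate snapshots.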
00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:14:42.922 "tick_rate": 2200000000, 00:14:42.922 "ticks": 1819074285352, 00:14:42.922 "bdevs": [ 00:14:42.922 { 00:14:42.922 "name": "Malloc_STAT", 00:14:42.922 "bytes_read": 525373952, 00:14:42.922 "num_read_ops": 128259, 00:14:42.922 "bytes_written": 0, 00:14:42.922 "num_write_ops": 0, 00:14:42.922 "bytes_unmapped": 0, 00:14:42.922 "num_unmap_ops": 0, 00:14:42.922 "bytes_copied": 0, 00:14:42.922 "num_copy_ops": 0, 00:14:42.922 "read_latency_ticks": 2135836276738, 00:14:42.922 "max_read_latency_ticks": 19033036, 00:14:42.922 "min_read_latency_ticks": 383176, 00:14:42.922 "write_latency_ticks": 0, 00:14:42.922 "max_write_latency_ticks": 0, 00:14:42.922 "min_write_latency_ticks": 0, 00:14:42.922 "unmap_latency_ticks": 0, 00:14:42.922 "max_unmap_latency_ticks": 0, 00:14:42.922 "min_unmap_latency_ticks": 0, 00:14:42.922 "copy_latency_ticks": 0, 00:14:42.922 "max_copy_latency_ticks": 0, 00:14:42.922 "min_copy_latency_ticks": 0, 00:14:42.922 "io_error": {} 00:14:42.922 } 00:14:42.922 ] 00:14:42.922 }' 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=128259 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:14:42.922 "tick_rate": 2200000000, 00:14:42.922 "ticks": 1819248376984, 00:14:42.922 "name": "Malloc_STAT", 00:14:42.922 "channels": [ 00:14:42.922 { 00:14:42.922 "thread_id": 2, 00:14:42.922 "bytes_read": 270532608, 00:14:42.922 "num_read_ops": 66048, 00:14:42.922 "bytes_written": 0, 00:14:42.922 "num_write_ops": 0, 00:14:42.922 "bytes_unmapped": 0, 00:14:42.922 "num_unmap_ops": 0, 
00:14:42.922 "bytes_copied": 0, 00:14:42.922 "num_copy_ops": 0, 00:14:42.922 "read_latency_ticks": 1110381592260, 00:14:42.922 "max_read_latency_ticks": 19876166, 00:14:42.922 "min_read_latency_ticks": 13221266, 00:14:42.922 "write_latency_ticks": 0, 00:14:42.922 "max_write_latency_ticks": 0, 00:14:42.922 "min_write_latency_ticks": 0, 00:14:42.922 "unmap_latency_ticks": 0, 00:14:42.922 "max_unmap_latency_ticks": 0, 00:14:42.922 "min_unmap_latency_ticks": 0, 00:14:42.922 "copy_latency_ticks": 0, 00:14:42.922 "max_copy_latency_ticks": 0, 00:14:42.922 "min_copy_latency_ticks": 0 00:14:42.922 }, 00:14:42.922 { 00:14:42.922 "thread_id": 3, 00:14:42.922 "bytes_read": 276824064, 00:14:42.922 "num_read_ops": 67584, 00:14:42.922 "bytes_written": 0, 00:14:42.922 "num_write_ops": 0, 00:14:42.922 "bytes_unmapped": 0, 00:14:42.922 "num_unmap_ops": 0, 00:14:42.922 "bytes_copied": 0, 00:14:42.922 "num_copy_ops": 0, 00:14:42.922 "read_latency_ticks": 1114356901462, 00:14:42.922 "max_read_latency_ticks": 19033036, 00:14:42.922 "min_read_latency_ticks": 11617998, 00:14:42.922 "write_latency_ticks": 0, 00:14:42.922 "max_write_latency_ticks": 0, 00:14:42.922 "min_write_latency_ticks": 0, 00:14:42.922 "unmap_latency_ticks": 0, 00:14:42.922 "max_unmap_latency_ticks": 0, 00:14:42.922 "min_unmap_latency_ticks": 0, 00:14:42.922 "copy_latency_ticks": 0, 00:14:42.922 "max_copy_latency_ticks": 0, 00:14:42.922 "min_copy_latency_ticks": 0 00:14:42.922 } 00:14:42.922 ] 00:14:42.922 }' 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=66048 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=66048 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=67584 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=133632 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:14:42.922 "tick_rate": 2200000000, 00:14:42.922 "ticks": 1819542520084, 00:14:42.922 "bdevs": [ 00:14:42.922 { 00:14:42.922 "name": "Malloc_STAT", 00:14:42.922 "bytes_read": 584094208, 00:14:42.922 "num_read_ops": 142595, 00:14:42.922 "bytes_written": 0, 00:14:42.922 "num_write_ops": 0, 00:14:42.922 "bytes_unmapped": 0, 00:14:42.922 "num_unmap_ops": 0, 00:14:42.922 "bytes_copied": 0, 00:14:42.922 "num_copy_ops": 0, 00:14:42.922 "read_latency_ticks": 2375126951826, 00:14:42.922 "max_read_latency_ticks": 20313284, 00:14:42.922 "min_read_latency_ticks": 383176, 00:14:42.922 "write_latency_ticks": 0, 00:14:42.922 "max_write_latency_ticks": 0, 00:14:42.922 "min_write_latency_ticks": 0, 00:14:42.922 "unmap_latency_ticks": 0, 00:14:42.922 "max_unmap_latency_ticks": 0, 00:14:42.922 "min_unmap_latency_ticks": 0, 00:14:42.922 "copy_latency_ticks": 0, 00:14:42.922 "max_copy_latency_ticks": 0, 00:14:42.922 
"min_copy_latency_ticks": 0, 00:14:42.922 "io_error": {} 00:14:42.922 } 00:14:42.922 ] 00:14:42.922 }' 00:14:42.922 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:14:43.181 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=142595 00:14:43.181 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 133632 -lt 128259 ']' 00:14:43.181 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 133632 -gt 142595 ']' 00:14:43.181 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:14:43.181 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.181 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:43.181 00:14:43.181 Latency(us) 00:14:43.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.181 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:14:43.181 Malloc_STAT : 2.20 33298.12 130.07 0.00 0.00 7665.18 1489.45 9353.77 00:14:43.181 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:14:43.182 Malloc_STAT : 2.20 34197.46 133.58 0.00 0.00 7464.61 830.37 8698.41 00:14:43.182 =================================================================================================================== 00:14:43.182 Total : 67495.58 263.65 0.00 0.00 7563.51 830.37 9353.77 00:14:43.182 0 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 119708 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 119708 ']' 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 119708 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119708 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119708' 00:14:43.182 killing process with pid 119708 00:14:43.182 Received shutdown signal, test time was about 2.328479 seconds 00:14:43.182 00:14:43.182 Latency(us) 00:14:43.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.182 =================================================================================================================== 00:14:43.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 119708 00:14:43.182 11:26:17 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 119708 00:14:44.561 11:26:19 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:14:44.561 ************************************ 00:14:44.561 END TEST bdev_stat 00:14:44.561 ************************************ 00:14:44.561 00:14:44.561 real 0m4.792s 00:14:44.561 user 0m8.976s 00:14:44.561 sys 0m0.484s 00:14:44.561 
11:26:19 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.561 11:26:19 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:14:44.561 11:26:19 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:14:44.561 11:26:19 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:14:44.561 00:14:44.561 real 2m21.905s 00:14:44.561 user 5m48.544s 00:14:44.561 sys 0m22.210s 00:14:44.561 11:26:19 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:44.561 11:26:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:14:44.561 ************************************ 00:14:44.561 END TEST blockdev_general 00:14:44.561 ************************************ 00:14:44.561 11:26:19 -- common/autotest_common.sh@1142 -- # return 0 00:14:44.561 11:26:19 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:44.561 11:26:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:44.561 11:26:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.561 11:26:19 -- common/autotest_common.sh@10 -- # set +x 00:14:44.561 ************************************ 00:14:44.561 START TEST bdev_raid 00:14:44.561 ************************************ 00:14:44.561 11:26:19 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:14:44.561 * Looking for test storage... 
00:14:44.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:44.561 11:26:19 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:14:44.561 11:26:19 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:14:44.561 11:26:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:44.561 11:26:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.561 11:26:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.561 ************************************ 00:14:44.561 START TEST raid_function_test_raid0 00:14:44.561 ************************************ 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=119885 00:14:44.561 Process raid pid: 119885 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 119885' 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 119885 /var/tmp/spdk-raid.sock 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 119885 ']' 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:44.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
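Before the raid0 function test proper starts, the trace below builds the array over the dedicated /var/tmp/spdk-raid.sock socket: two malloc base bdevs are claimed and a 131072-block raid bdev named "raid" is created. Reproduced by hand, that configuration step would look roughly like this (a sketch; the 32 MiB base size and the 64 KiB strip size are assumptions chosen to match the block count reported below):

scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b Base_1 32 512
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b Base_2 32 512
scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n raid -r raid0 -z 64 -b "Base_1 Base_2"
# two 32 MiB bases at 512 B per block -> 131072 blocks in the raid0 bdev, as reported in the trace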
00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:44.561 11:26:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:44.561 [2024-07-13 11:26:19.298889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:44.561 [2024-07-13 11:26:19.299239] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.820 [2024-07-13 11:26:19.452798] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.079 [2024-07-13 11:26:19.638195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.338 [2024-07-13 11:26:19.826946] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.597 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:45.597 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:14:45.597 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:14:45.597 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:14:45.597 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:45.597 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:14:45.597 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:45.856 [2024-07-13 11:26:20.519971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:45.856 [2024-07-13 11:26:20.521870] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:45.856 [2024-07-13 11:26:20.521948] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:14:45.856 [2024-07-13 11:26:20.521961] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:45.856 [2024-07-13 11:26:20.522074] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:45.856 [2024-07-13 11:26:20.522470] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:14:45.856 [2024-07-13 11:26:20.522493] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007580 00:14:45.856 [2024-07-13 11:26:20.522661] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.856 Base_1 00:14:45.856 Base_2 00:14:45.856 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:45.856 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:45.856 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # 
raid_bdev=raid 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.115 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:46.374 [2024-07-13 11:26:20.948032] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:46.374 /dev/nbd0 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:46.374 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:46.375 1+0 records in 00:14:46.375 1+0 records out 00:14:46.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455764 s, 9.0 MB/s 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:46.375 11:26:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:46.634 { 00:14:46.634 "nbd_device": "/dev/nbd0", 00:14:46.634 "bdev_name": "raid" 00:14:46.634 } 00:14:46.634 ]' 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:46.634 { 00:14:46.634 "nbd_device": "/dev/nbd0", 00:14:46.634 "bdev_name": "raid" 00:14:46.634 } 00:14:46.634 ]' 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=(0 1028 321) 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=(128 2035 456) 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:46.634 11:26:21 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:46.634 4096+0 records in 00:14:46.634 4096+0 records out 00:14:46.634 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.025304 s, 82.9 MB/s 00:14:46.634 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:46.894 4096+0 records in 00:14:46.894 4096+0 records out 00:14:46.894 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.256979 s, 8.2 MB/s 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:46.894 128+0 records in 00:14:46.894 128+0 records out 00:14:46.894 65536 bytes (66 kB, 64 KiB) copied, 0.000634372 s, 103 MB/s 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:46.894 2035+0 records in 00:14:46.894 2035+0 records out 00:14:46.894 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00349727 s, 298 MB/s 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:46.894 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:47.153 456+0 records in 00:14:47.153 456+0 records out 00:14:47.153 233472 bytes (233 kB, 228 KiB) copied, 0.000795865 s, 293 MB/s 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:47.153 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:14:47.153 [2024-07-13 11:26:21.851900] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.412 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:14:47.412 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:47.412 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:47.412 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:14:47.412 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:14:47.412 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:47.412 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:47.412 11:26:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 
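The dd/blkdiscard/cmp block above (bdev_raid.sh@31-50) is the unmap-and-verify phase of raid_function_test: a 2 MiB random pattern is written through /dev/nbd0, then three (offset, length) ranges are discarded on the device while the same ranges are zeroed in the reference file, and the device must still compare equal after every step. Reconstructed from the trace, with the offsets and counts used in this run:

    nbd=/dev/nbd0
    blksize=$(lsblk -o LOG-SEC $nbd | grep -v LOG-SEC | cut -d ' ' -f 5)   # 512 in this run
    unmap_blk_offs=(0 1028 321)
    unmap_blk_nums=(128 2035 456)

    dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
    dd if=/raidtest/raidrandtest of=$nbd bs=512 count=4096 oflag=direct
    blockdev --flushbufs $nbd
    cmp -b -n 2097152 /raidtest/raidrandtest $nbd        # written data reads back intact

    for ((i = 0; i < 3; i++)); do
        unmap_off=$((blksize * unmap_blk_offs[i]))
        unmap_len=$((blksize * unmap_blk_nums[i]))
        # discarded blocks are expected to read back as zeroes, so zero the same
        # range in the reference file before comparing again
        dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=${unmap_blk_offs[i]} \
           count=${unmap_blk_nums[i]} conv=notrunc
        blkdiscard -o $unmap_off -l $unmap_len $nbd
        blockdev --flushbufs $nbd
        cmp -b -n 2097152 /raidtest/raidrandtest $nbd
    done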
00:14:47.412 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:47.412 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:47.412 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 119885 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 119885 ']' 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 119885 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 119885 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:47.671 killing process with pid 119885 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 119885' 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 119885 00:14:47.671 11:26:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 119885 00:14:47.671 [2024-07-13 11:26:22.227106] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:47.671 [2024-07-13 11:26:22.227205] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:47.671 [2024-07-13 11:26:22.227247] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:47.671 [2024-07-13 11:26:22.227257] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid, state offline 00:14:47.671 [2024-07-13 11:26:22.360555] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.047 11:26:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:14:49.047 00:14:49.047 real 0m4.134s 00:14:49.047 user 0m5.141s 00:14:49.047 sys 0m0.897s 00:14:49.047 11:26:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.047 ************************************ 00:14:49.047 END TEST raid_function_test_raid0 00:14:49.047 ************************************ 00:14:49.047 11:26:23 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@10 -- # set +x 00:14:49.047 11:26:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:49.047 11:26:23 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:14:49.047 11:26:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:49.047 11:26:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.047 11:26:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.047 ************************************ 00:14:49.047 START TEST raid_function_test_concat 00:14:49.047 ************************************ 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=120035 00:14:49.047 Process raid pid: 120035 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 120035' 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 120035 /var/tmp/spdk-raid.sock 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 120035 ']' 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.047 11:26:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:49.047 [2024-07-13 11:26:23.497868] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
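The concat variant that starts here runs the same scenario against the concat raid level. Each case begins by launching its own bdev_svc app on a private RPC socket and waiting for that socket to answer (waitforlisten 120035 above). In outline, with the readiness probe being one plausible implementation rather than the helper's actual code:

    rpc_server=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r $rpc_server -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"

    # waitforlisten: poll until the app responds on the UNIX-domain RPC socket
    while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $rpc_server rpc_get_methods \
            > /dev/null 2>&1; do
        kill -0 $raid_pid || exit 1    # give up if the app died during startup
        sleep 0.1
    done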
00:14:49.047 [2024-07-13 11:26:23.498043] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.047 [2024-07-13 11:26:23.653551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.305 [2024-07-13 11:26:23.841480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.305 [2024-07-13 11:26:24.029384] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:49.872 11:26:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.872 11:26:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:14:49.872 11:26:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:14:49.872 11:26:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:14:49.872 11:26:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:49.872 11:26:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:14:49.872 11:26:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:50.130 [2024-07-13 11:26:24.775154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:50.130 [2024-07-13 11:26:24.777040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:50.130 [2024-07-13 11:26:24.777119] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:14:50.130 [2024-07-13 11:26:24.777132] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:50.130 [2024-07-13 11:26:24.777247] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:50.130 [2024-07-13 11:26:24.777580] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:14:50.130 [2024-07-13 11:26:24.777602] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007580 00:14:50.130 [2024-07-13 11:26:24.777748] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.130 Base_1 00:14:50.130 Base_2 00:14:50.130 11:26:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:14:50.130 11:26:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:50.130 11:26:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:14:50.388 11:26:25 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.388 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:50.646 [2024-07-13 11:26:25.323272] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:50.646 /dev/nbd0 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.646 1+0 records in 00:14:50.646 1+0 records out 00:14:50.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396056 s, 10.3 MB/s 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:50.646 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:50.902 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:50.902 { 00:14:50.902 "nbd_device": "/dev/nbd0", 00:14:50.902 "bdev_name": "raid" 00:14:50.902 } 00:14:50.902 ]' 00:14:50.902 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:50.902 { 00:14:50.902 "nbd_device": "/dev/nbd0", 00:14:50.902 "bdev_name": "raid" 00:14:50.902 } 00:14:50.902 ]' 00:14:50.902 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:51.158 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:51.158 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=(0 1028 321) 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=(128 2035 456) 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:51.159 4096+0 records in 00:14:51.159 4096+0 records out 00:14:51.159 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0213834 s, 98.1 MB/s 00:14:51.159 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # 
dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:51.416 4096+0 records in 00:14:51.416 4096+0 records out 00:14:51.416 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.251745 s, 8.3 MB/s 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:51.416 128+0 records in 00:14:51.416 128+0 records out 00:14:51.416 65536 bytes (66 kB, 64 KiB) copied, 0.000641048 s, 102 MB/s 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:51.416 11:26:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:51.416 2035+0 records in 00:14:51.416 2035+0 records out 00:14:51.416 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0049627 s, 210 MB/s 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:51.416 456+0 records in 00:14:51.416 456+0 records out 00:14:51.416 233472 bytes (233 kB, 228 KiB) copied, 0.00132352 s, 176 MB/s 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.416 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:51.673 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:51.673 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:51.673 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:51.673 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.673 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:14:51.674 [2024-07-13 11:26:26.236792] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:51.674 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 120035 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 120035 ']' 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 120035 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120035 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120035' 00:14:51.931 killing process with pid 120035 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 120035 00:14:51.931 11:26:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 120035 00:14:51.931 [2024-07-13 11:26:26.668756] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.931 [2024-07-13 11:26:26.668836] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.931 [2024-07-13 11:26:26.668878] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.931 [2024-07-13 11:26:26.668888] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name raid, state offline 00:14:52.189 [2024-07-13 11:26:26.801709] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:53.124 11:26:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:14:53.124 00:14:53.124 real 0m4.381s 00:14:53.124 user 0m5.693s 00:14:53.124 sys 0m0.863s 00:14:53.124 ************************************ 00:14:53.124 END TEST raid_function_test_concat 00:14:53.124 ************************************ 00:14:53.124 11:26:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.124 11:26:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:53.124 11:26:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:53.124 11:26:27 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:14:53.124 11:26:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:53.124 11:26:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 
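Teardown goes through killprocess (autotest_common.sh@948-972 in the trace): before signalling, it confirms the pid still belongs to the reactor process it started, so a recycled pid is never killed by mistake, and it waits for the process so the next test starts from a clean slate. Roughly, leaving out the sudo special case the trace hints at:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                            # must still be running
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for bdev_svc
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }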
00:14:53.124 11:26:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:53.381 ************************************ 00:14:53.381 START TEST raid0_resize_test 00:14:53.381 ************************************ 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=120190 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:53.381 Process raid pid: 120190 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 120190' 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 120190 /var/tmp/spdk-raid.sock 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 120190 ']' 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:53.381 11:26:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:53.382 11:26:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:53.382 11:26:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.382 11:26:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.382 [2024-07-13 11:26:27.947524] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:53.382 [2024-07-13 11:26:27.947728] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.382 [2024-07-13 11:26:28.116610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.640 [2024-07-13 11:26:28.299087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.898 [2024-07-13 11:26:28.491623] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:54.156 11:26:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.156 11:26:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:14:54.156 11:26:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:54.414 Base_1 00:14:54.414 11:26:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:54.672 Base_2 00:14:54.672 11:26:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:54.942 [2024-07-13 11:26:29.494699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:54.942 [2024-07-13 11:26:29.496217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:54.942 [2024-07-13 11:26:29.496280] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:14:54.942 [2024-07-13 11:26:29.496293] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:54.942 [2024-07-13 11:26:29.496409] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:54.942 [2024-07-13 11:26:29.496669] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:14:54.942 [2024-07-13 11:26:29.496690] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007580 00:14:54.942 [2024-07-13 11:26:29.496823] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.942 11:26:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:55.226 [2024-07-13 11:26:29.774756] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:55.226 [2024-07-13 11:26:29.774781] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:55.226 true 00:14:55.226 11:26:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:55.226 11:26:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:14:55.494 [2024-07-13 11:26:29.966955] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.494 11:26:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:14:55.494 11:26:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:14:55.494 11:26:29 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:14:55.494 11:26:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:55.494 [2024-07-13 11:26:30.162793] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:55.494 [2024-07-13 11:26:30.162817] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:55.494 [2024-07-13 11:26:30.162869] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:14:55.494 true 00:14:55.494 11:26:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:55.494 11:26:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:14:55.753 [2024-07-13 11:26:30.415066] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 120190 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 120190 ']' 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 120190 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120190 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:55.753 killing process with pid 120190 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120190' 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 120190 00:14:55.753 11:26:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 120190 00:14:55.753 [2024-07-13 11:26:30.440933] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.753 [2024-07-13 11:26:30.441000] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.753 [2024-07-13 11:26:30.441060] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.753 [2024-07-13 11:26:30.441073] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Raid, state offline 00:14:55.753 [2024-07-13 11:26:30.441611] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.130 11:26:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:14:57.130 00:14:57.130 real 0m3.578s 00:14:57.130 user 0m5.057s 00:14:57.130 sys 0m0.537s 00:14:57.130 11:26:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 
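raid0_resize_test (bdev_raid.sh@347-388) checks online growth: two 32 MiB null bdevs are combined into a 64 MiB raid0, each base is then resized to 64 MiB, and the raid's block count must double only once both legs have grown. The RPC sequence, condensed from the trace; sizes are the ones used in this run:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_null_create Base_1 32 512            # 32 MiB, 512-byte blocks
    $rpc bdev_null_create Base_2 32 512
    $rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

    $rpc bdev_null_resize Base_1 64                # grow one leg only
    blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
    echo $((blkcnt * 512 / 1048576))               # 131072 blocks -> still 64 MiB

    $rpc bdev_null_resize Base_2 64                # grow the second leg
    blkcnt=$($rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks')
    echo $((blkcnt * 512 / 1048576))               # 262144 blocks -> 128 MiB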
00:14:57.130 11:26:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.130 ************************************ 00:14:57.130 END TEST raid0_resize_test 00:14:57.130 ************************************ 00:14:57.130 11:26:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:57.130 11:26:31 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:14:57.130 11:26:31 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:57.130 11:26:31 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:57.130 11:26:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:57.130 11:26:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.130 11:26:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.130 ************************************ 00:14:57.130 START TEST raid_state_function_test 00:14:57.130 ************************************ 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:57.130 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=120298 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 120298' 00:14:57.131 Process raid pid: 120298 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 120298 /var/tmp/spdk-raid.sock 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 120298 ']' 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.131 11:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.131 [2024-07-13 11:26:31.580018] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:57.131 [2024-07-13 11:26:31.580205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.131 [2024-07-13 11:26:31.729165] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.390 [2024-07-13 11:26:31.922152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.390 [2024-07-13 11:26:32.110081] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:57.958 [2024-07-13 11:26:32.573822] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.958 [2024-07-13 11:26:32.573911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.958 [2024-07-13 11:26:32.573927] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:57.958 [2024-07-13 11:26:32.573955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=Existed_Raid 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.958 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.217 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:58.217 "name": "Existed_Raid", 00:14:58.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.217 "strip_size_kb": 64, 00:14:58.217 "state": "configuring", 00:14:58.217 "raid_level": "raid0", 00:14:58.217 "superblock": false, 00:14:58.217 "num_base_bdevs": 2, 00:14:58.217 "num_base_bdevs_discovered": 0, 00:14:58.217 "num_base_bdevs_operational": 2, 00:14:58.217 "base_bdevs_list": [ 00:14:58.217 { 00:14:58.217 "name": "BaseBdev1", 00:14:58.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.217 "is_configured": false, 00:14:58.217 "data_offset": 0, 00:14:58.217 "data_size": 0 00:14:58.217 }, 00:14:58.217 { 00:14:58.217 "name": "BaseBdev2", 00:14:58.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.217 "is_configured": false, 00:14:58.217 "data_offset": 0, 00:14:58.217 "data_size": 0 00:14:58.217 } 00:14:58.217 ] 00:14:58.217 }' 00:14:58.217 11:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:58.217 11:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.784 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:59.042 [2024-07-13 11:26:33.629909] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:59.042 [2024-07-13 11:26:33.629939] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:59.042 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:59.300 [2024-07-13 11:26:33.825933] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.300 [2024-07-13 11:26:33.825983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.300 [2024-07-13 11:26:33.825994] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.300 [2024-07-13 11:26:33.826021] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.300 11:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:59.300 [2024-07-13 11:26:34.043239] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.300 BaseBdev1 00:14:59.558 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:59.558 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:59.558 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:59.558 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:59.558 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:59.558 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:59.558 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:59.558 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:59.817 [ 00:14:59.817 { 00:14:59.817 "name": "BaseBdev1", 00:14:59.817 "aliases": [ 00:14:59.817 "508c108e-fa74-40cd-8a75-9192f2326dc0" 00:14:59.817 ], 00:14:59.817 "product_name": "Malloc disk", 00:14:59.817 "block_size": 512, 00:14:59.817 "num_blocks": 65536, 00:14:59.817 "uuid": "508c108e-fa74-40cd-8a75-9192f2326dc0", 00:14:59.817 "assigned_rate_limits": { 00:14:59.817 "rw_ios_per_sec": 0, 00:14:59.817 "rw_mbytes_per_sec": 0, 00:14:59.817 "r_mbytes_per_sec": 0, 00:14:59.817 "w_mbytes_per_sec": 0 00:14:59.817 }, 00:14:59.817 "claimed": true, 00:14:59.817 "claim_type": "exclusive_write", 00:14:59.817 "zoned": false, 00:14:59.817 "supported_io_types": { 00:14:59.817 "read": true, 00:14:59.817 "write": true, 00:14:59.817 "unmap": true, 00:14:59.817 "flush": true, 00:14:59.817 "reset": true, 00:14:59.817 "nvme_admin": false, 00:14:59.817 "nvme_io": false, 00:14:59.817 "nvme_io_md": false, 00:14:59.817 "write_zeroes": true, 00:14:59.817 "zcopy": true, 00:14:59.817 "get_zone_info": false, 00:14:59.817 "zone_management": false, 00:14:59.817 "zone_append": false, 00:14:59.817 "compare": false, 00:14:59.817 "compare_and_write": false, 00:14:59.817 "abort": true, 00:14:59.817 "seek_hole": false, 00:14:59.817 "seek_data": false, 00:14:59.817 "copy": true, 00:14:59.817 "nvme_iov_md": false 00:14:59.817 }, 00:14:59.817 "memory_domains": [ 00:14:59.817 { 00:14:59.817 "dma_device_id": "system", 00:14:59.817 "dma_device_type": 1 00:14:59.817 }, 00:14:59.817 { 00:14:59.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.817 "dma_device_type": 2 00:14:59.817 } 00:14:59.817 ], 00:14:59.817 "driver_specific": {} 00:14:59.817 } 00:14:59.817 ] 00:14:59.817 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:59.817 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:59.817 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.817 11:26:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:59.817 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:59.817 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.817 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:59.817 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.817 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.818 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.818 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.818 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.818 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.075 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.075 "name": "Existed_Raid", 00:15:00.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.075 "strip_size_kb": 64, 00:15:00.075 "state": "configuring", 00:15:00.075 "raid_level": "raid0", 00:15:00.075 "superblock": false, 00:15:00.075 "num_base_bdevs": 2, 00:15:00.075 "num_base_bdevs_discovered": 1, 00:15:00.075 "num_base_bdevs_operational": 2, 00:15:00.075 "base_bdevs_list": [ 00:15:00.075 { 00:15:00.075 "name": "BaseBdev1", 00:15:00.075 "uuid": "508c108e-fa74-40cd-8a75-9192f2326dc0", 00:15:00.075 "is_configured": true, 00:15:00.075 "data_offset": 0, 00:15:00.075 "data_size": 65536 00:15:00.075 }, 00:15:00.075 { 00:15:00.075 "name": "BaseBdev2", 00:15:00.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.075 "is_configured": false, 00:15:00.075 "data_offset": 0, 00:15:00.075 "data_size": 0 00:15:00.075 } 00:15:00.075 ] 00:15:00.075 }' 00:15:00.075 11:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.075 11:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.641 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:00.900 [2024-07-13 11:26:35.487565] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.900 [2024-07-13 11:26:35.487602] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:15:00.900 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:01.158 [2024-07-13 11:26:35.751635] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.158 [2024-07-13 11:26:35.753490] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.158 [2024-07-13 11:26:35.753548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.158 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
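The trace above walks the raid0 "configuring" path: the raid set Existed_Raid is registered before its members exist, a 32 MiB malloc disk is then added as BaseBdev1, and verify_raid_bdev_state checks the JSON returned by bdev_raid_get_bdevs (state "configuring", 1 of 2 base bdevs discovered). A minimal standalone sketch of that RPC sequence, assuming a bdev_svc instance is already listening on /var/tmp/spdk-raid.sock as in this run (the RPC helper variable is illustrative, not part of the test script):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Register the raid0 set first; with no base bdevs present it stays in "configuring".
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# Create the first member (32 MiB malloc disk, 512-byte blocks => 65536 blocks); the raid claims it.
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_wait_for_examine

# Inspect the raid: still "configuring", with num_base_bdevs_discovered reading 1 of 2.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'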
00:15:01.158 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.159 11:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.417 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.417 "name": "Existed_Raid", 00:15:01.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.417 "strip_size_kb": 64, 00:15:01.417 "state": "configuring", 00:15:01.417 "raid_level": "raid0", 00:15:01.417 "superblock": false, 00:15:01.417 "num_base_bdevs": 2, 00:15:01.417 "num_base_bdevs_discovered": 1, 00:15:01.417 "num_base_bdevs_operational": 2, 00:15:01.417 "base_bdevs_list": [ 00:15:01.417 { 00:15:01.417 "name": "BaseBdev1", 00:15:01.417 "uuid": "508c108e-fa74-40cd-8a75-9192f2326dc0", 00:15:01.417 "is_configured": true, 00:15:01.417 "data_offset": 0, 00:15:01.417 "data_size": 65536 00:15:01.417 }, 00:15:01.417 { 00:15:01.417 "name": "BaseBdev2", 00:15:01.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.417 "is_configured": false, 00:15:01.417 "data_offset": 0, 00:15:01.417 "data_size": 0 00:15:01.417 } 00:15:01.417 ] 00:15:01.417 }' 00:15:01.417 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.417 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.983 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:02.241 [2024-07-13 11:26:36.841725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.241 [2024-07-13 11:26:36.841766] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:02.241 [2024-07-13 11:26:36.841776] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:02.241 [2024-07-13 11:26:36.841928] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:02.241 [2024-07-13 11:26:36.842255] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x616000007580 00:15:02.241 [2024-07-13 11:26:36.842278] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:02.241 BaseBdev2 00:15:02.241 [2024-07-13 11:26:36.842516] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.241 11:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:02.241 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:02.241 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:02.241 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:02.241 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:02.241 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:02.241 11:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:02.499 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:02.758 [ 00:15:02.758 { 00:15:02.758 "name": "BaseBdev2", 00:15:02.758 "aliases": [ 00:15:02.758 "be70e824-d07f-4663-8531-bf1bb600e01a" 00:15:02.758 ], 00:15:02.758 "product_name": "Malloc disk", 00:15:02.758 "block_size": 512, 00:15:02.758 "num_blocks": 65536, 00:15:02.758 "uuid": "be70e824-d07f-4663-8531-bf1bb600e01a", 00:15:02.758 "assigned_rate_limits": { 00:15:02.758 "rw_ios_per_sec": 0, 00:15:02.758 "rw_mbytes_per_sec": 0, 00:15:02.758 "r_mbytes_per_sec": 0, 00:15:02.758 "w_mbytes_per_sec": 0 00:15:02.758 }, 00:15:02.758 "claimed": true, 00:15:02.758 "claim_type": "exclusive_write", 00:15:02.758 "zoned": false, 00:15:02.758 "supported_io_types": { 00:15:02.758 "read": true, 00:15:02.758 "write": true, 00:15:02.758 "unmap": true, 00:15:02.758 "flush": true, 00:15:02.758 "reset": true, 00:15:02.758 "nvme_admin": false, 00:15:02.758 "nvme_io": false, 00:15:02.758 "nvme_io_md": false, 00:15:02.758 "write_zeroes": true, 00:15:02.758 "zcopy": true, 00:15:02.758 "get_zone_info": false, 00:15:02.758 "zone_management": false, 00:15:02.758 "zone_append": false, 00:15:02.758 "compare": false, 00:15:02.758 "compare_and_write": false, 00:15:02.758 "abort": true, 00:15:02.758 "seek_hole": false, 00:15:02.758 "seek_data": false, 00:15:02.758 "copy": true, 00:15:02.758 "nvme_iov_md": false 00:15:02.758 }, 00:15:02.758 "memory_domains": [ 00:15:02.758 { 00:15:02.758 "dma_device_id": "system", 00:15:02.758 "dma_device_type": 1 00:15:02.758 }, 00:15:02.758 { 00:15:02.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.758 "dma_device_type": 2 00:15:02.758 } 00:15:02.758 ], 00:15:02.758 "driver_specific": {} 00:15:02.758 } 00:15:02.758 ] 00:15:02.758 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:02.758 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:02.758 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:02.758 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:02.758 11:26:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:02.759 "name": "Existed_Raid", 00:15:02.759 "uuid": "6b73223d-ee1e-4e23-a775-a503c27d20e8", 00:15:02.759 "strip_size_kb": 64, 00:15:02.759 "state": "online", 00:15:02.759 "raid_level": "raid0", 00:15:02.759 "superblock": false, 00:15:02.759 "num_base_bdevs": 2, 00:15:02.759 "num_base_bdevs_discovered": 2, 00:15:02.759 "num_base_bdevs_operational": 2, 00:15:02.759 "base_bdevs_list": [ 00:15:02.759 { 00:15:02.759 "name": "BaseBdev1", 00:15:02.759 "uuid": "508c108e-fa74-40cd-8a75-9192f2326dc0", 00:15:02.759 "is_configured": true, 00:15:02.759 "data_offset": 0, 00:15:02.759 "data_size": 65536 00:15:02.759 }, 00:15:02.759 { 00:15:02.759 "name": "BaseBdev2", 00:15:02.759 "uuid": "be70e824-d07f-4663-8531-bf1bb600e01a", 00:15:02.759 "is_configured": true, 00:15:02.759 "data_offset": 0, 00:15:02.759 "data_size": 65536 00:15:02.759 } 00:15:02.759 ] 00:15:02.759 }' 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:02.759 11:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:03.694 [2024-07-13 11:26:38.386412] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:03.694 "name": "Existed_Raid", 00:15:03.694 "aliases": [ 00:15:03.694 "6b73223d-ee1e-4e23-a775-a503c27d20e8" 00:15:03.694 ], 00:15:03.694 "product_name": "Raid Volume", 00:15:03.694 "block_size": 512, 00:15:03.694 "num_blocks": 131072, 00:15:03.694 "uuid": "6b73223d-ee1e-4e23-a775-a503c27d20e8", 00:15:03.694 "assigned_rate_limits": { 00:15:03.694 "rw_ios_per_sec": 0, 00:15:03.694 "rw_mbytes_per_sec": 0, 00:15:03.694 "r_mbytes_per_sec": 0, 00:15:03.694 "w_mbytes_per_sec": 0 00:15:03.694 }, 00:15:03.694 "claimed": false, 00:15:03.694 "zoned": false, 00:15:03.694 "supported_io_types": { 00:15:03.694 "read": true, 00:15:03.694 "write": true, 00:15:03.694 "unmap": true, 00:15:03.694 "flush": true, 00:15:03.694 "reset": true, 00:15:03.694 "nvme_admin": false, 00:15:03.694 "nvme_io": false, 00:15:03.694 "nvme_io_md": false, 00:15:03.694 "write_zeroes": true, 00:15:03.694 "zcopy": false, 00:15:03.694 "get_zone_info": false, 00:15:03.694 "zone_management": false, 00:15:03.694 "zone_append": false, 00:15:03.694 "compare": false, 00:15:03.694 "compare_and_write": false, 00:15:03.694 "abort": false, 00:15:03.694 "seek_hole": false, 00:15:03.694 "seek_data": false, 00:15:03.694 "copy": false, 00:15:03.694 "nvme_iov_md": false 00:15:03.694 }, 00:15:03.694 "memory_domains": [ 00:15:03.694 { 00:15:03.694 "dma_device_id": "system", 00:15:03.694 "dma_device_type": 1 00:15:03.694 }, 00:15:03.694 { 00:15:03.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.694 "dma_device_type": 2 00:15:03.694 }, 00:15:03.694 { 00:15:03.694 "dma_device_id": "system", 00:15:03.694 "dma_device_type": 1 00:15:03.694 }, 00:15:03.694 { 00:15:03.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.694 "dma_device_type": 2 00:15:03.694 } 00:15:03.694 ], 00:15:03.694 "driver_specific": { 00:15:03.694 "raid": { 00:15:03.694 "uuid": "6b73223d-ee1e-4e23-a775-a503c27d20e8", 00:15:03.694 "strip_size_kb": 64, 00:15:03.694 "state": "online", 00:15:03.694 "raid_level": "raid0", 00:15:03.694 "superblock": false, 00:15:03.694 "num_base_bdevs": 2, 00:15:03.694 "num_base_bdevs_discovered": 2, 00:15:03.694 "num_base_bdevs_operational": 2, 00:15:03.694 "base_bdevs_list": [ 00:15:03.694 { 00:15:03.694 "name": "BaseBdev1", 00:15:03.694 "uuid": "508c108e-fa74-40cd-8a75-9192f2326dc0", 00:15:03.694 "is_configured": true, 00:15:03.694 "data_offset": 0, 00:15:03.694 "data_size": 65536 00:15:03.694 }, 00:15:03.694 { 00:15:03.694 "name": "BaseBdev2", 00:15:03.694 "uuid": "be70e824-d07f-4663-8531-bf1bb600e01a", 00:15:03.694 "is_configured": true, 00:15:03.694 "data_offset": 0, 00:15:03.694 "data_size": 65536 00:15:03.694 } 00:15:03.694 ] 00:15:03.694 } 00:15:03.694 } 00:15:03.694 }' 00:15:03.694 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.954 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:03.954 BaseBdev2' 00:15:03.954 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:03.954 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:03.954 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:03.954 11:26:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:03.954 "name": "BaseBdev1", 00:15:03.954 "aliases": [ 00:15:03.954 "508c108e-fa74-40cd-8a75-9192f2326dc0" 00:15:03.954 ], 00:15:03.954 "product_name": "Malloc disk", 00:15:03.954 "block_size": 512, 00:15:03.954 "num_blocks": 65536, 00:15:03.954 "uuid": "508c108e-fa74-40cd-8a75-9192f2326dc0", 00:15:03.954 "assigned_rate_limits": { 00:15:03.954 "rw_ios_per_sec": 0, 00:15:03.954 "rw_mbytes_per_sec": 0, 00:15:03.954 "r_mbytes_per_sec": 0, 00:15:03.954 "w_mbytes_per_sec": 0 00:15:03.954 }, 00:15:03.954 "claimed": true, 00:15:03.954 "claim_type": "exclusive_write", 00:15:03.954 "zoned": false, 00:15:03.954 "supported_io_types": { 00:15:03.954 "read": true, 00:15:03.954 "write": true, 00:15:03.954 "unmap": true, 00:15:03.954 "flush": true, 00:15:03.954 "reset": true, 00:15:03.954 "nvme_admin": false, 00:15:03.954 "nvme_io": false, 00:15:03.954 "nvme_io_md": false, 00:15:03.954 "write_zeroes": true, 00:15:03.954 "zcopy": true, 00:15:03.954 "get_zone_info": false, 00:15:03.954 "zone_management": false, 00:15:03.954 "zone_append": false, 00:15:03.954 "compare": false, 00:15:03.954 "compare_and_write": false, 00:15:03.954 "abort": true, 00:15:03.954 "seek_hole": false, 00:15:03.954 "seek_data": false, 00:15:03.954 "copy": true, 00:15:03.954 "nvme_iov_md": false 00:15:03.954 }, 00:15:03.954 "memory_domains": [ 00:15:03.954 { 00:15:03.954 "dma_device_id": "system", 00:15:03.954 "dma_device_type": 1 00:15:03.954 }, 00:15:03.954 { 00:15:03.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.954 "dma_device_type": 2 00:15:03.954 } 00:15:03.954 ], 00:15:03.954 "driver_specific": {} 00:15:03.954 }' 00:15:03.954 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:03.954 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.213 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:04.213 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.213 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.213 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:04.213 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.213 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.213 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.213 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.472 11:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.472 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.472 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:04.472 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:04.472 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:04.472 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:04.472 "name": "BaseBdev2", 00:15:04.472 "aliases": [ 00:15:04.472 "be70e824-d07f-4663-8531-bf1bb600e01a" 
00:15:04.472 ], 00:15:04.472 "product_name": "Malloc disk", 00:15:04.472 "block_size": 512, 00:15:04.472 "num_blocks": 65536, 00:15:04.472 "uuid": "be70e824-d07f-4663-8531-bf1bb600e01a", 00:15:04.472 "assigned_rate_limits": { 00:15:04.472 "rw_ios_per_sec": 0, 00:15:04.472 "rw_mbytes_per_sec": 0, 00:15:04.472 "r_mbytes_per_sec": 0, 00:15:04.472 "w_mbytes_per_sec": 0 00:15:04.472 }, 00:15:04.472 "claimed": true, 00:15:04.472 "claim_type": "exclusive_write", 00:15:04.472 "zoned": false, 00:15:04.472 "supported_io_types": { 00:15:04.472 "read": true, 00:15:04.472 "write": true, 00:15:04.472 "unmap": true, 00:15:04.472 "flush": true, 00:15:04.472 "reset": true, 00:15:04.472 "nvme_admin": false, 00:15:04.472 "nvme_io": false, 00:15:04.472 "nvme_io_md": false, 00:15:04.472 "write_zeroes": true, 00:15:04.472 "zcopy": true, 00:15:04.472 "get_zone_info": false, 00:15:04.472 "zone_management": false, 00:15:04.472 "zone_append": false, 00:15:04.472 "compare": false, 00:15:04.472 "compare_and_write": false, 00:15:04.472 "abort": true, 00:15:04.472 "seek_hole": false, 00:15:04.472 "seek_data": false, 00:15:04.472 "copy": true, 00:15:04.472 "nvme_iov_md": false 00:15:04.472 }, 00:15:04.472 "memory_domains": [ 00:15:04.472 { 00:15:04.472 "dma_device_id": "system", 00:15:04.472 "dma_device_type": 1 00:15:04.472 }, 00:15:04.472 { 00:15:04.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.472 "dma_device_type": 2 00:15:04.472 } 00:15:04.472 ], 00:15:04.472 "driver_specific": {} 00:15:04.472 }' 00:15:04.472 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.731 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.731 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:04.731 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.731 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.731 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:04.731 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.731 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.989 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.989 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.989 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.989 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.989 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:05.248 [2024-07-13 11:26:39.770530] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.248 [2024-07-13 11:26:39.770564] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.248 [2024-07-13 11:26:39.770634] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.248 11:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.508 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.508 "name": "Existed_Raid", 00:15:05.508 "uuid": "6b73223d-ee1e-4e23-a775-a503c27d20e8", 00:15:05.508 "strip_size_kb": 64, 00:15:05.508 "state": "offline", 00:15:05.508 "raid_level": "raid0", 00:15:05.508 "superblock": false, 00:15:05.508 "num_base_bdevs": 2, 00:15:05.508 "num_base_bdevs_discovered": 1, 00:15:05.508 "num_base_bdevs_operational": 1, 00:15:05.508 "base_bdevs_list": [ 00:15:05.508 { 00:15:05.508 "name": null, 00:15:05.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.508 "is_configured": false, 00:15:05.508 "data_offset": 0, 00:15:05.508 "data_size": 65536 00:15:05.508 }, 00:15:05.508 { 00:15:05.508 "name": "BaseBdev2", 00:15:05.508 "uuid": "be70e824-d07f-4663-8531-bf1bb600e01a", 00:15:05.508 "is_configured": true, 00:15:05.508 "data_offset": 0, 00:15:05.508 "data_size": 65536 00:15:05.508 } 00:15:05.508 ] 00:15:05.508 }' 00:15:05.508 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.508 11:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.075 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:06.075 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:06.075 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.075 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:06.333 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:06.333 11:26:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:06.333 11:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:06.592 [2024-07-13 11:26:41.175963] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:06.592 [2024-07-13 11:26:41.176040] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:06.592 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:06.592 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:06.592 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.592 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 120298 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 120298 ']' 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 120298 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120298 00:15:06.851 killing process with pid 120298 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120298' 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 120298 00:15:06.851 11:26:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 120298 00:15:06.851 [2024-07-13 11:26:41.516176] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:06.851 [2024-07-13 11:26:41.516288] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.228 ************************************ 00:15:08.228 END TEST raid_state_function_test 00:15:08.228 ************************************ 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:08.228 00:15:08.228 real 0m11.049s 00:15:08.228 user 0m19.746s 00:15:08.228 sys 0m1.121s 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.228 11:26:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:08.228 11:26:42 bdev_raid -- 
bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:15:08.228 11:26:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:08.228 11:26:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.228 11:26:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.228 ************************************ 00:15:08.228 START TEST raid_state_function_test_sb 00:15:08.228 ************************************ 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=120689 
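The superblock variant starting here repeats the same state checks with on-disk metadata enabled: compared with the create calls earlier in the trace, the visible difference is the -s flag passed to bdev_raid_create, and the later JSON dumps report data_offset 2048 and data_size 63488 (instead of 0 and 65536), consistent with the superblock occupying the first 2048 blocks of each 65536-block member. A minimal sketch of the changed call, reusing the socket path from this run:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Same raid0 set as before, but -s writes a superblock to each member,
# which is why the sb test later sees data_offset 2048 / data_size 63488.
$RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid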
00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 120689' 00:15:08.228 Process raid pid: 120689 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 120689 /var/tmp/spdk-raid.sock 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 120689 ']' 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.228 11:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.228 [2024-07-13 11:26:42.680967] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:08.228 [2024-07-13 11:26:42.681143] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.228 [2024-07-13 11:26:42.840321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.486 [2024-07-13 11:26:43.080950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.744 [2024-07-13 11:26:43.271042] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.003 11:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.003 11:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:09.003 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:09.261 [2024-07-13 11:26:43.793246] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.261 [2024-07-13 11:26:43.793748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.261 [2024-07-13 11:26:43.793776] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.261 [2024-07-13 11:26:43.793920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.261 11:26:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.261 11:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.519 11:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.519 "name": "Existed_Raid", 00:15:09.519 "uuid": "bceedaed-bca7-4a42-a4da-a7708d5061cf", 00:15:09.519 "strip_size_kb": 64, 00:15:09.519 "state": "configuring", 00:15:09.519 "raid_level": "raid0", 00:15:09.519 "superblock": true, 00:15:09.519 "num_base_bdevs": 2, 00:15:09.519 "num_base_bdevs_discovered": 0, 00:15:09.519 "num_base_bdevs_operational": 2, 00:15:09.519 "base_bdevs_list": [ 00:15:09.519 { 00:15:09.519 "name": "BaseBdev1", 00:15:09.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.519 "is_configured": false, 00:15:09.519 "data_offset": 0, 00:15:09.519 "data_size": 0 00:15:09.519 }, 00:15:09.519 { 00:15:09.519 "name": "BaseBdev2", 00:15:09.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.519 "is_configured": false, 00:15:09.519 "data_offset": 0, 00:15:09.519 "data_size": 0 00:15:09.519 } 00:15:09.519 ] 00:15:09.519 }' 00:15:09.519 11:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.519 11:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.085 11:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:10.343 [2024-07-13 11:26:44.861282] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.343 [2024-07-13 11:26:44.861313] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:10.343 11:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:10.601 [2024-07-13 11:26:45.101346] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:10.601 [2024-07-13 11:26:45.101621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:10.601 [2024-07-13 11:26:45.101641] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:10.601 [2024-07-13 11:26:45.101749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:10.601 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.860 [2024-07-13 11:26:45.371938] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.860 BaseBdev1 00:15:10.860 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:10.860 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:10.860 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.860 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:10.860 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.860 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.860 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:10.860 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:11.118 [ 00:15:11.118 { 00:15:11.118 "name": "BaseBdev1", 00:15:11.118 "aliases": [ 00:15:11.118 "aea1e390-29a8-4474-a7f0-b7fb9d7d83cd" 00:15:11.118 ], 00:15:11.118 "product_name": "Malloc disk", 00:15:11.118 "block_size": 512, 00:15:11.118 "num_blocks": 65536, 00:15:11.118 "uuid": "aea1e390-29a8-4474-a7f0-b7fb9d7d83cd", 00:15:11.118 "assigned_rate_limits": { 00:15:11.118 "rw_ios_per_sec": 0, 00:15:11.118 "rw_mbytes_per_sec": 0, 00:15:11.118 "r_mbytes_per_sec": 0, 00:15:11.118 "w_mbytes_per_sec": 0 00:15:11.118 }, 00:15:11.119 "claimed": true, 00:15:11.119 "claim_type": "exclusive_write", 00:15:11.119 "zoned": false, 00:15:11.119 "supported_io_types": { 00:15:11.119 "read": true, 00:15:11.119 "write": true, 00:15:11.119 "unmap": true, 00:15:11.119 "flush": true, 00:15:11.119 "reset": true, 00:15:11.119 "nvme_admin": false, 00:15:11.119 "nvme_io": false, 00:15:11.119 "nvme_io_md": false, 00:15:11.119 "write_zeroes": true, 00:15:11.119 "zcopy": true, 00:15:11.119 "get_zone_info": false, 00:15:11.119 "zone_management": false, 00:15:11.119 "zone_append": false, 00:15:11.119 "compare": false, 00:15:11.119 "compare_and_write": false, 00:15:11.119 "abort": true, 00:15:11.119 "seek_hole": false, 00:15:11.119 "seek_data": false, 00:15:11.119 "copy": true, 00:15:11.119 "nvme_iov_md": false 00:15:11.119 }, 00:15:11.119 "memory_domains": [ 00:15:11.119 { 00:15:11.119 "dma_device_id": "system", 00:15:11.119 "dma_device_type": 1 00:15:11.119 }, 00:15:11.119 { 00:15:11.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.119 "dma_device_type": 2 00:15:11.119 } 00:15:11.119 ], 00:15:11.119 "driver_specific": {} 00:15:11.119 } 00:15:11.119 ] 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.119 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.377 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.377 "name": "Existed_Raid", 00:15:11.377 "uuid": "252791e5-b756-481a-a803-91f56f795f65", 00:15:11.377 "strip_size_kb": 64, 00:15:11.377 "state": "configuring", 00:15:11.377 "raid_level": "raid0", 00:15:11.377 "superblock": true, 00:15:11.377 "num_base_bdevs": 2, 00:15:11.377 "num_base_bdevs_discovered": 1, 00:15:11.377 "num_base_bdevs_operational": 2, 00:15:11.377 "base_bdevs_list": [ 00:15:11.377 { 00:15:11.377 "name": "BaseBdev1", 00:15:11.377 "uuid": "aea1e390-29a8-4474-a7f0-b7fb9d7d83cd", 00:15:11.377 "is_configured": true, 00:15:11.377 "data_offset": 2048, 00:15:11.377 "data_size": 63488 00:15:11.377 }, 00:15:11.377 { 00:15:11.377 "name": "BaseBdev2", 00:15:11.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.377 "is_configured": false, 00:15:11.377 "data_offset": 0, 00:15:11.377 "data_size": 0 00:15:11.377 } 00:15:11.377 ] 00:15:11.377 }' 00:15:11.377 11:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.378 11:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.944 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:12.200 [2024-07-13 11:26:46.800219] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.200 [2024-07-13 11:26:46.800261] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:15:12.200 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:12.458 [2024-07-13 11:26:46.976294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.458 [2024-07-13 11:26:46.978141] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:12.458 [2024-07-13 11:26:46.978554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 
64 2 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.458 11:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.715 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.715 "name": "Existed_Raid", 00:15:12.715 "uuid": "39b74ea1-34ec-4c31-8950-0073aa1351fc", 00:15:12.715 "strip_size_kb": 64, 00:15:12.715 "state": "configuring", 00:15:12.715 "raid_level": "raid0", 00:15:12.715 "superblock": true, 00:15:12.715 "num_base_bdevs": 2, 00:15:12.715 "num_base_bdevs_discovered": 1, 00:15:12.715 "num_base_bdevs_operational": 2, 00:15:12.715 "base_bdevs_list": [ 00:15:12.715 { 00:15:12.715 "name": "BaseBdev1", 00:15:12.715 "uuid": "aea1e390-29a8-4474-a7f0-b7fb9d7d83cd", 00:15:12.715 "is_configured": true, 00:15:12.715 "data_offset": 2048, 00:15:12.715 "data_size": 63488 00:15:12.715 }, 00:15:12.715 { 00:15:12.715 "name": "BaseBdev2", 00:15:12.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.715 "is_configured": false, 00:15:12.715 "data_offset": 0, 00:15:12.715 "data_size": 0 00:15:12.715 } 00:15:12.715 ] 00:15:12.715 }' 00:15:12.715 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.715 11:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.281 11:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:13.539 [2024-07-13 11:26:48.189469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.539 [2024-07-13 11:26:48.189691] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:13.539 [2024-07-13 11:26:48.189707] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:13.539 [2024-07-13 11:26:48.189835] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:13.539 BaseBdev2 00:15:13.539 [2024-07-13 11:26:48.190163] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:13.539 [2024-07-13 11:26:48.190184] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x616000007580 00:15:13.539 [2024-07-13 11:26:48.190327] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.539 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:13.539 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:13.539 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:13.539 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:13.539 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:13.539 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:13.539 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:13.796 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:14.054 [ 00:15:14.054 { 00:15:14.054 "name": "BaseBdev2", 00:15:14.054 "aliases": [ 00:15:14.054 "43896e5e-1e79-45ec-8e48-a64755402ecf" 00:15:14.054 ], 00:15:14.054 "product_name": "Malloc disk", 00:15:14.054 "block_size": 512, 00:15:14.054 "num_blocks": 65536, 00:15:14.054 "uuid": "43896e5e-1e79-45ec-8e48-a64755402ecf", 00:15:14.054 "assigned_rate_limits": { 00:15:14.054 "rw_ios_per_sec": 0, 00:15:14.054 "rw_mbytes_per_sec": 0, 00:15:14.054 "r_mbytes_per_sec": 0, 00:15:14.054 "w_mbytes_per_sec": 0 00:15:14.054 }, 00:15:14.054 "claimed": true, 00:15:14.054 "claim_type": "exclusive_write", 00:15:14.054 "zoned": false, 00:15:14.054 "supported_io_types": { 00:15:14.054 "read": true, 00:15:14.054 "write": true, 00:15:14.054 "unmap": true, 00:15:14.054 "flush": true, 00:15:14.054 "reset": true, 00:15:14.054 "nvme_admin": false, 00:15:14.054 "nvme_io": false, 00:15:14.054 "nvme_io_md": false, 00:15:14.054 "write_zeroes": true, 00:15:14.054 "zcopy": true, 00:15:14.054 "get_zone_info": false, 00:15:14.054 "zone_management": false, 00:15:14.054 "zone_append": false, 00:15:14.054 "compare": false, 00:15:14.054 "compare_and_write": false, 00:15:14.054 "abort": true, 00:15:14.054 "seek_hole": false, 00:15:14.054 "seek_data": false, 00:15:14.054 "copy": true, 00:15:14.054 "nvme_iov_md": false 00:15:14.054 }, 00:15:14.054 "memory_domains": [ 00:15:14.054 { 00:15:14.054 "dma_device_id": "system", 00:15:14.054 "dma_device_type": 1 00:15:14.054 }, 00:15:14.054 { 00:15:14.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.054 "dma_device_type": 2 00:15:14.054 } 00:15:14.054 ], 00:15:14.054 "driver_specific": {} 00:15:14.054 } 00:15:14.054 ] 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.054 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.312 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.312 "name": "Existed_Raid", 00:15:14.312 "uuid": "39b74ea1-34ec-4c31-8950-0073aa1351fc", 00:15:14.312 "strip_size_kb": 64, 00:15:14.312 "state": "online", 00:15:14.312 "raid_level": "raid0", 00:15:14.312 "superblock": true, 00:15:14.312 "num_base_bdevs": 2, 00:15:14.312 "num_base_bdevs_discovered": 2, 00:15:14.312 "num_base_bdevs_operational": 2, 00:15:14.312 "base_bdevs_list": [ 00:15:14.312 { 00:15:14.312 "name": "BaseBdev1", 00:15:14.312 "uuid": "aea1e390-29a8-4474-a7f0-b7fb9d7d83cd", 00:15:14.312 "is_configured": true, 00:15:14.312 "data_offset": 2048, 00:15:14.312 "data_size": 63488 00:15:14.312 }, 00:15:14.312 { 00:15:14.312 "name": "BaseBdev2", 00:15:14.312 "uuid": "43896e5e-1e79-45ec-8e48-a64755402ecf", 00:15:14.312 "is_configured": true, 00:15:14.312 "data_offset": 2048, 00:15:14.312 "data_size": 63488 00:15:14.312 } 00:15:14.312 ] 00:15:14.312 }' 00:15:14.312 11:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.312 11:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.876 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:14.876 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:14.876 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:14.876 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:14.876 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:14.876 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:14.876 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:14.876 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:15.134 [2024-07-13 11:26:49.669990] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.134 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 
00:15:15.134 "name": "Existed_Raid", 00:15:15.134 "aliases": [ 00:15:15.134 "39b74ea1-34ec-4c31-8950-0073aa1351fc" 00:15:15.134 ], 00:15:15.134 "product_name": "Raid Volume", 00:15:15.134 "block_size": 512, 00:15:15.134 "num_blocks": 126976, 00:15:15.134 "uuid": "39b74ea1-34ec-4c31-8950-0073aa1351fc", 00:15:15.134 "assigned_rate_limits": { 00:15:15.134 "rw_ios_per_sec": 0, 00:15:15.134 "rw_mbytes_per_sec": 0, 00:15:15.134 "r_mbytes_per_sec": 0, 00:15:15.134 "w_mbytes_per_sec": 0 00:15:15.134 }, 00:15:15.134 "claimed": false, 00:15:15.134 "zoned": false, 00:15:15.134 "supported_io_types": { 00:15:15.134 "read": true, 00:15:15.134 "write": true, 00:15:15.134 "unmap": true, 00:15:15.134 "flush": true, 00:15:15.134 "reset": true, 00:15:15.134 "nvme_admin": false, 00:15:15.134 "nvme_io": false, 00:15:15.134 "nvme_io_md": false, 00:15:15.134 "write_zeroes": true, 00:15:15.134 "zcopy": false, 00:15:15.134 "get_zone_info": false, 00:15:15.134 "zone_management": false, 00:15:15.134 "zone_append": false, 00:15:15.134 "compare": false, 00:15:15.134 "compare_and_write": false, 00:15:15.134 "abort": false, 00:15:15.134 "seek_hole": false, 00:15:15.134 "seek_data": false, 00:15:15.134 "copy": false, 00:15:15.134 "nvme_iov_md": false 00:15:15.134 }, 00:15:15.134 "memory_domains": [ 00:15:15.134 { 00:15:15.134 "dma_device_id": "system", 00:15:15.134 "dma_device_type": 1 00:15:15.134 }, 00:15:15.134 { 00:15:15.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.134 "dma_device_type": 2 00:15:15.134 }, 00:15:15.134 { 00:15:15.134 "dma_device_id": "system", 00:15:15.134 "dma_device_type": 1 00:15:15.134 }, 00:15:15.134 { 00:15:15.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.134 "dma_device_type": 2 00:15:15.134 } 00:15:15.134 ], 00:15:15.134 "driver_specific": { 00:15:15.134 "raid": { 00:15:15.134 "uuid": "39b74ea1-34ec-4c31-8950-0073aa1351fc", 00:15:15.134 "strip_size_kb": 64, 00:15:15.134 "state": "online", 00:15:15.134 "raid_level": "raid0", 00:15:15.134 "superblock": true, 00:15:15.134 "num_base_bdevs": 2, 00:15:15.134 "num_base_bdevs_discovered": 2, 00:15:15.134 "num_base_bdevs_operational": 2, 00:15:15.134 "base_bdevs_list": [ 00:15:15.134 { 00:15:15.134 "name": "BaseBdev1", 00:15:15.134 "uuid": "aea1e390-29a8-4474-a7f0-b7fb9d7d83cd", 00:15:15.134 "is_configured": true, 00:15:15.134 "data_offset": 2048, 00:15:15.134 "data_size": 63488 00:15:15.134 }, 00:15:15.134 { 00:15:15.134 "name": "BaseBdev2", 00:15:15.134 "uuid": "43896e5e-1e79-45ec-8e48-a64755402ecf", 00:15:15.134 "is_configured": true, 00:15:15.134 "data_offset": 2048, 00:15:15.134 "data_size": 63488 00:15:15.134 } 00:15:15.134 ] 00:15:15.134 } 00:15:15.134 } 00:15:15.134 }' 00:15:15.135 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:15.135 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:15.135 BaseBdev2' 00:15:15.135 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:15.135 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:15.135 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:15.392 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:15.392 "name": "BaseBdev1", 
00:15:15.392 "aliases": [ 00:15:15.392 "aea1e390-29a8-4474-a7f0-b7fb9d7d83cd" 00:15:15.392 ], 00:15:15.392 "product_name": "Malloc disk", 00:15:15.392 "block_size": 512, 00:15:15.392 "num_blocks": 65536, 00:15:15.392 "uuid": "aea1e390-29a8-4474-a7f0-b7fb9d7d83cd", 00:15:15.392 "assigned_rate_limits": { 00:15:15.392 "rw_ios_per_sec": 0, 00:15:15.392 "rw_mbytes_per_sec": 0, 00:15:15.392 "r_mbytes_per_sec": 0, 00:15:15.392 "w_mbytes_per_sec": 0 00:15:15.392 }, 00:15:15.392 "claimed": true, 00:15:15.393 "claim_type": "exclusive_write", 00:15:15.393 "zoned": false, 00:15:15.393 "supported_io_types": { 00:15:15.393 "read": true, 00:15:15.393 "write": true, 00:15:15.393 "unmap": true, 00:15:15.393 "flush": true, 00:15:15.393 "reset": true, 00:15:15.393 "nvme_admin": false, 00:15:15.393 "nvme_io": false, 00:15:15.393 "nvme_io_md": false, 00:15:15.393 "write_zeroes": true, 00:15:15.393 "zcopy": true, 00:15:15.393 "get_zone_info": false, 00:15:15.393 "zone_management": false, 00:15:15.393 "zone_append": false, 00:15:15.393 "compare": false, 00:15:15.393 "compare_and_write": false, 00:15:15.393 "abort": true, 00:15:15.393 "seek_hole": false, 00:15:15.393 "seek_data": false, 00:15:15.393 "copy": true, 00:15:15.393 "nvme_iov_md": false 00:15:15.393 }, 00:15:15.393 "memory_domains": [ 00:15:15.393 { 00:15:15.393 "dma_device_id": "system", 00:15:15.393 "dma_device_type": 1 00:15:15.393 }, 00:15:15.393 { 00:15:15.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.393 "dma_device_type": 2 00:15:15.393 } 00:15:15.393 ], 00:15:15.393 "driver_specific": {} 00:15:15.393 }' 00:15:15.393 11:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.393 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.393 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:15.393 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.393 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:15.650 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:15.909 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:15.909 "name": "BaseBdev2", 00:15:15.909 "aliases": [ 00:15:15.909 "43896e5e-1e79-45ec-8e48-a64755402ecf" 00:15:15.909 ], 00:15:15.909 "product_name": "Malloc disk", 00:15:15.909 
"block_size": 512, 00:15:15.909 "num_blocks": 65536, 00:15:15.909 "uuid": "43896e5e-1e79-45ec-8e48-a64755402ecf", 00:15:15.909 "assigned_rate_limits": { 00:15:15.909 "rw_ios_per_sec": 0, 00:15:15.909 "rw_mbytes_per_sec": 0, 00:15:15.909 "r_mbytes_per_sec": 0, 00:15:15.909 "w_mbytes_per_sec": 0 00:15:15.909 }, 00:15:15.909 "claimed": true, 00:15:15.909 "claim_type": "exclusive_write", 00:15:15.909 "zoned": false, 00:15:15.909 "supported_io_types": { 00:15:15.909 "read": true, 00:15:15.909 "write": true, 00:15:15.909 "unmap": true, 00:15:15.909 "flush": true, 00:15:15.909 "reset": true, 00:15:15.909 "nvme_admin": false, 00:15:15.909 "nvme_io": false, 00:15:15.909 "nvme_io_md": false, 00:15:15.909 "write_zeroes": true, 00:15:15.909 "zcopy": true, 00:15:15.909 "get_zone_info": false, 00:15:15.909 "zone_management": false, 00:15:15.909 "zone_append": false, 00:15:15.909 "compare": false, 00:15:15.909 "compare_and_write": false, 00:15:15.909 "abort": true, 00:15:15.909 "seek_hole": false, 00:15:15.909 "seek_data": false, 00:15:15.909 "copy": true, 00:15:15.909 "nvme_iov_md": false 00:15:15.909 }, 00:15:15.909 "memory_domains": [ 00:15:15.909 { 00:15:15.909 "dma_device_id": "system", 00:15:15.909 "dma_device_type": 1 00:15:15.909 }, 00:15:15.909 { 00:15:15.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.909 "dma_device_type": 2 00:15:15.909 } 00:15:15.909 ], 00:15:15.909 "driver_specific": {} 00:15:15.909 }' 00:15:15.909 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.909 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:16.167 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:16.167 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.167 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.167 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:16.167 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.167 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.167 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:16.167 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.425 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.425 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:16.425 11:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:16.704 [2024-07-13 11:26:51.218146] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.704 [2024-07-13 11:26:51.218173] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.704 [2024-07-13 11:26:51.218228] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.704 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.962 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.962 "name": "Existed_Raid", 00:15:16.962 "uuid": "39b74ea1-34ec-4c31-8950-0073aa1351fc", 00:15:16.962 "strip_size_kb": 64, 00:15:16.962 "state": "offline", 00:15:16.962 "raid_level": "raid0", 00:15:16.962 "superblock": true, 00:15:16.962 "num_base_bdevs": 2, 00:15:16.962 "num_base_bdevs_discovered": 1, 00:15:16.962 "num_base_bdevs_operational": 1, 00:15:16.962 "base_bdevs_list": [ 00:15:16.962 { 00:15:16.962 "name": null, 00:15:16.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.962 "is_configured": false, 00:15:16.962 "data_offset": 2048, 00:15:16.962 "data_size": 63488 00:15:16.962 }, 00:15:16.962 { 00:15:16.962 "name": "BaseBdev2", 00:15:16.963 "uuid": "43896e5e-1e79-45ec-8e48-a64755402ecf", 00:15:16.963 "is_configured": true, 00:15:16.963 "data_offset": 2048, 00:15:16.963 "data_size": 63488 00:15:16.963 } 00:15:16.963 ] 00:15:16.963 }' 00:15:16.963 11:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.963 11:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.529 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:17.529 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:17.529 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.529 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:17.787 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 
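For reference, the offline-state check exercised above can be reproduced by hand against the same RPC socket. A minimal sketch, assuming the repository path and socket used throughout this run (/home/vagrant/spdk_repo/spdk and /var/tmp/spdk-raid.sock); it is an illustration of the sequence in the log, not part of the log itself:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Remove one member of the raid0 array; raid0 has no redundancy, so the
    # array cannot stay online once a base bdev disappears.
    $rpc -s $sock bdev_malloc_delete BaseBdev1
    # Query the raid bdev and confirm it now reports "offline" with a single
    # remaining discovered/operational base bdev, matching the JSON dump above.
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expected: offline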
00:15:17.787 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:17.787 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:18.046 [2024-07-13 11:26:52.648861] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.046 [2024-07-13 11:26:52.648927] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:18.046 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:18.046 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:18.046 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.046 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:18.303 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:18.303 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:18.303 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:18.303 11:26:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 120689 00:15:18.303 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 120689 ']' 00:15:18.303 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 120689 00:15:18.303 11:26:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:18.303 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.303 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 120689 00:15:18.303 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:18.303 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:18.303 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 120689' 00:15:18.303 killing process with pid 120689 00:15:18.303 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 120689 00:15:18.303 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 120689 00:15:18.303 [2024-07-13 11:26:53.021487] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:18.303 [2024-07-13 11:26:53.021587] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:19.301 ************************************ 00:15:19.301 END TEST raid_state_function_test_sb 00:15:19.301 ************************************ 00:15:19.301 11:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:19.301 00:15:19.301 real 0m11.314s 00:15:19.301 user 0m20.259s 00:15:19.301 sys 0m1.245s 00:15:19.301 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:19.301 11:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.301 11:26:53 
bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:19.301 11:26:53 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:15:19.301 11:26:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:19.301 11:26:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:19.301 11:26:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:19.301 ************************************ 00:15:19.301 START TEST raid_superblock_test 00:15:19.301 ************************************ 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=121084 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 121084 /var/tmp/spdk-raid.sock 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 121084 ']' 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:19.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
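The harness above starts a bare bdev_svc application and then drives it purely over JSON-RPC. A minimal sketch of that startup, assuming the paths shown in this run; waitforlisten is the suite's own polling helper, and the loop below is only a simplified stand-in for it:

    svc=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Start the service with raid debug logging and a dedicated RPC socket.
    $svc -r $sock -L bdev_raid &
    raid_pid=$!
    # Poll until the RPC server answers (stand-in for waitforlisten).
    until $rpc -t 1 -s $sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done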
00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:19.301 11:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.559 [2024-07-13 11:26:54.070093] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:19.559 [2024-07-13 11:26:54.070307] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121084 ] 00:15:19.559 [2024-07-13 11:26:54.244682] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.818 [2024-07-13 11:26:54.485626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.078 [2024-07-13 11:26:54.675484] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:20.337 11:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:20.596 malloc1 00:15:20.596 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.855 [2024-07-13 11:26:55.400127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.855 [2024-07-13 11:26:55.400265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.855 [2024-07-13 11:26:55.400296] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:15:20.855 [2024-07-13 11:26:55.400314] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.855 [2024-07-13 11:26:55.402193] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.855 [2024-07-13 11:26:55.402239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.855 pt1 00:15:20.855 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:20.855 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:20.855 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc2 00:15:20.855 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:20.855 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:20.855 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:20.855 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:20.855 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:20.855 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:21.114 malloc2 00:15:21.114 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.373 [2024-07-13 11:26:55.898735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.373 [2024-07-13 11:26:55.898837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.373 [2024-07-13 11:26:55.898883] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:15:21.373 [2024-07-13 11:26:55.898905] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.373 [2024-07-13 11:26:55.900918] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.373 [2024-07-13 11:26:55.900964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.373 pt2 00:15:21.373 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:21.373 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:21.373 11:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:21.373 [2024-07-13 11:26:56.090807] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:21.373 [2024-07-13 11:26:56.092832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.373 [2024-07-13 11:26:56.093050] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:15:21.373 [2024-07-13 11:26:56.093070] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:21.373 [2024-07-13 11:26:56.093182] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:21.373 [2024-07-13 11:26:56.093525] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:15:21.373 [2024-07-13 11:26:56.093545] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:15:21.373 [2024-07-13 11:26:56.093724] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=online 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.374 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.632 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.632 "name": "raid_bdev1", 00:15:21.632 "uuid": "f346d9ed-b8ba-482a-b212-f905cae6ff97", 00:15:21.632 "strip_size_kb": 64, 00:15:21.632 "state": "online", 00:15:21.632 "raid_level": "raid0", 00:15:21.632 "superblock": true, 00:15:21.632 "num_base_bdevs": 2, 00:15:21.632 "num_base_bdevs_discovered": 2, 00:15:21.632 "num_base_bdevs_operational": 2, 00:15:21.632 "base_bdevs_list": [ 00:15:21.632 { 00:15:21.632 "name": "pt1", 00:15:21.632 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:21.632 "is_configured": true, 00:15:21.632 "data_offset": 2048, 00:15:21.632 "data_size": 63488 00:15:21.632 }, 00:15:21.632 { 00:15:21.632 "name": "pt2", 00:15:21.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.632 "is_configured": true, 00:15:21.632 "data_offset": 2048, 00:15:21.632 "data_size": 63488 00:15:21.632 } 00:15:21.632 ] 00:15:21.632 }' 00:15:21.632 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.632 11:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.567 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:22.567 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:22.567 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:22.567 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:22.567 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:22.567 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:22.567 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:22.567 11:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:22.568 [2024-07-13 11:26:57.179242] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.568 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:22.568 "name": "raid_bdev1", 00:15:22.568 "aliases": [ 00:15:22.568 "f346d9ed-b8ba-482a-b212-f905cae6ff97" 00:15:22.568 ], 00:15:22.568 "product_name": "Raid Volume", 
00:15:22.568 "block_size": 512, 00:15:22.568 "num_blocks": 126976, 00:15:22.568 "uuid": "f346d9ed-b8ba-482a-b212-f905cae6ff97", 00:15:22.568 "assigned_rate_limits": { 00:15:22.568 "rw_ios_per_sec": 0, 00:15:22.568 "rw_mbytes_per_sec": 0, 00:15:22.568 "r_mbytes_per_sec": 0, 00:15:22.568 "w_mbytes_per_sec": 0 00:15:22.568 }, 00:15:22.568 "claimed": false, 00:15:22.568 "zoned": false, 00:15:22.568 "supported_io_types": { 00:15:22.568 "read": true, 00:15:22.568 "write": true, 00:15:22.568 "unmap": true, 00:15:22.568 "flush": true, 00:15:22.568 "reset": true, 00:15:22.568 "nvme_admin": false, 00:15:22.568 "nvme_io": false, 00:15:22.568 "nvme_io_md": false, 00:15:22.568 "write_zeroes": true, 00:15:22.568 "zcopy": false, 00:15:22.568 "get_zone_info": false, 00:15:22.568 "zone_management": false, 00:15:22.568 "zone_append": false, 00:15:22.568 "compare": false, 00:15:22.568 "compare_and_write": false, 00:15:22.568 "abort": false, 00:15:22.568 "seek_hole": false, 00:15:22.568 "seek_data": false, 00:15:22.568 "copy": false, 00:15:22.568 "nvme_iov_md": false 00:15:22.568 }, 00:15:22.568 "memory_domains": [ 00:15:22.568 { 00:15:22.568 "dma_device_id": "system", 00:15:22.568 "dma_device_type": 1 00:15:22.568 }, 00:15:22.568 { 00:15:22.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.568 "dma_device_type": 2 00:15:22.568 }, 00:15:22.568 { 00:15:22.568 "dma_device_id": "system", 00:15:22.568 "dma_device_type": 1 00:15:22.568 }, 00:15:22.568 { 00:15:22.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.568 "dma_device_type": 2 00:15:22.568 } 00:15:22.568 ], 00:15:22.568 "driver_specific": { 00:15:22.568 "raid": { 00:15:22.568 "uuid": "f346d9ed-b8ba-482a-b212-f905cae6ff97", 00:15:22.568 "strip_size_kb": 64, 00:15:22.568 "state": "online", 00:15:22.568 "raid_level": "raid0", 00:15:22.568 "superblock": true, 00:15:22.568 "num_base_bdevs": 2, 00:15:22.568 "num_base_bdevs_discovered": 2, 00:15:22.568 "num_base_bdevs_operational": 2, 00:15:22.568 "base_bdevs_list": [ 00:15:22.568 { 00:15:22.568 "name": "pt1", 00:15:22.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:22.568 "is_configured": true, 00:15:22.568 "data_offset": 2048, 00:15:22.568 "data_size": 63488 00:15:22.568 }, 00:15:22.568 { 00:15:22.568 "name": "pt2", 00:15:22.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.568 "is_configured": true, 00:15:22.568 "data_offset": 2048, 00:15:22.568 "data_size": 63488 00:15:22.568 } 00:15:22.568 ] 00:15:22.568 } 00:15:22.568 } 00:15:22.568 }' 00:15:22.568 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:22.568 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:22.568 pt2' 00:15:22.568 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:22.568 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:22.568 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:22.827 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:22.827 "name": "pt1", 00:15:22.827 "aliases": [ 00:15:22.827 "00000000-0000-0000-0000-000000000001" 00:15:22.827 ], 00:15:22.827 "product_name": "passthru", 00:15:22.827 "block_size": 512, 00:15:22.827 "num_blocks": 65536, 00:15:22.827 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:15:22.827 "assigned_rate_limits": { 00:15:22.827 "rw_ios_per_sec": 0, 00:15:22.827 "rw_mbytes_per_sec": 0, 00:15:22.827 "r_mbytes_per_sec": 0, 00:15:22.827 "w_mbytes_per_sec": 0 00:15:22.827 }, 00:15:22.827 "claimed": true, 00:15:22.827 "claim_type": "exclusive_write", 00:15:22.827 "zoned": false, 00:15:22.827 "supported_io_types": { 00:15:22.827 "read": true, 00:15:22.827 "write": true, 00:15:22.827 "unmap": true, 00:15:22.827 "flush": true, 00:15:22.827 "reset": true, 00:15:22.827 "nvme_admin": false, 00:15:22.827 "nvme_io": false, 00:15:22.827 "nvme_io_md": false, 00:15:22.827 "write_zeroes": true, 00:15:22.827 "zcopy": true, 00:15:22.827 "get_zone_info": false, 00:15:22.827 "zone_management": false, 00:15:22.827 "zone_append": false, 00:15:22.827 "compare": false, 00:15:22.827 "compare_and_write": false, 00:15:22.827 "abort": true, 00:15:22.827 "seek_hole": false, 00:15:22.827 "seek_data": false, 00:15:22.827 "copy": true, 00:15:22.827 "nvme_iov_md": false 00:15:22.827 }, 00:15:22.827 "memory_domains": [ 00:15:22.827 { 00:15:22.827 "dma_device_id": "system", 00:15:22.827 "dma_device_type": 1 00:15:22.827 }, 00:15:22.827 { 00:15:22.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.827 "dma_device_type": 2 00:15:22.827 } 00:15:22.827 ], 00:15:22.827 "driver_specific": { 00:15:22.827 "passthru": { 00:15:22.827 "name": "pt1", 00:15:22.827 "base_bdev_name": "malloc1" 00:15:22.827 } 00:15:22.827 } 00:15:22.827 }' 00:15:22.827 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:22.827 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:22.827 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:22.827 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:22.827 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:23.086 11:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:23.344 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:23.344 "name": "pt2", 00:15:23.344 "aliases": [ 00:15:23.344 "00000000-0000-0000-0000-000000000002" 00:15:23.344 ], 00:15:23.344 "product_name": "passthru", 00:15:23.344 "block_size": 512, 00:15:23.344 "num_blocks": 65536, 00:15:23.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:23.344 "assigned_rate_limits": { 00:15:23.344 "rw_ios_per_sec": 0, 00:15:23.344 "rw_mbytes_per_sec": 0, 
00:15:23.344 "r_mbytes_per_sec": 0, 00:15:23.344 "w_mbytes_per_sec": 0 00:15:23.344 }, 00:15:23.344 "claimed": true, 00:15:23.344 "claim_type": "exclusive_write", 00:15:23.344 "zoned": false, 00:15:23.344 "supported_io_types": { 00:15:23.344 "read": true, 00:15:23.344 "write": true, 00:15:23.344 "unmap": true, 00:15:23.344 "flush": true, 00:15:23.344 "reset": true, 00:15:23.344 "nvme_admin": false, 00:15:23.344 "nvme_io": false, 00:15:23.344 "nvme_io_md": false, 00:15:23.344 "write_zeroes": true, 00:15:23.344 "zcopy": true, 00:15:23.344 "get_zone_info": false, 00:15:23.344 "zone_management": false, 00:15:23.344 "zone_append": false, 00:15:23.344 "compare": false, 00:15:23.344 "compare_and_write": false, 00:15:23.344 "abort": true, 00:15:23.344 "seek_hole": false, 00:15:23.344 "seek_data": false, 00:15:23.344 "copy": true, 00:15:23.344 "nvme_iov_md": false 00:15:23.344 }, 00:15:23.344 "memory_domains": [ 00:15:23.344 { 00:15:23.344 "dma_device_id": "system", 00:15:23.344 "dma_device_type": 1 00:15:23.344 }, 00:15:23.344 { 00:15:23.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.344 "dma_device_type": 2 00:15:23.344 } 00:15:23.344 ], 00:15:23.344 "driver_specific": { 00:15:23.344 "passthru": { 00:15:23.344 "name": "pt2", 00:15:23.344 "base_bdev_name": "malloc2" 00:15:23.344 } 00:15:23.344 } 00:15:23.344 }' 00:15:23.344 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:23.344 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:23.603 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:23.603 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:23.603 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:23.603 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:23.603 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:23.603 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:23.603 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:23.603 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:23.603 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:23.861 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:23.861 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:23.861 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:23.861 [2024-07-13 11:26:58.591884] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.861 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f346d9ed-b8ba-482a-b212-f905cae6ff97 00:15:23.861 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f346d9ed-b8ba-482a-b212-f905cae6ff97 ']' 00:15:23.861 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:24.120 [2024-07-13 11:26:58.779888] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:24.120 [2024-07-13 11:26:58.779913] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.120 [2024-07-13 11:26:58.779994] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.120 [2024-07-13 11:26:58.780037] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.120 [2024-07-13 11:26:58.780046] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:15:24.120 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.120 11:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:24.379 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:24.379 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:24.379 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.379 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:24.636 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:24.636 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:24.893 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:24.893 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:25.151 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:15:25.409 [2024-07-13 11:26:59.988146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:25.409 [2024-07-13 11:26:59.989683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:25.409 [2024-07-13 11:26:59.989755] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:25.409 [2024-07-13 11:26:59.989833] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:25.409 [2024-07-13 11:26:59.989862] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:25.409 [2024-07-13 11:26:59.989872] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:15:25.409 request: 00:15:25.409 { 00:15:25.409 "name": "raid_bdev1", 00:15:25.409 "raid_level": "raid0", 00:15:25.409 "base_bdevs": [ 00:15:25.409 "malloc1", 00:15:25.409 "malloc2" 00:15:25.409 ], 00:15:25.409 "strip_size_kb": 64, 00:15:25.409 "superblock": false, 00:15:25.409 "method": "bdev_raid_create", 00:15:25.409 "req_id": 1 00:15:25.409 } 00:15:25.409 Got JSON-RPC error response 00:15:25.409 response: 00:15:25.409 { 00:15:25.409 "code": -17, 00:15:25.409 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:25.409 } 00:15:25.409 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:25.409 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:25.409 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:25.409 11:26:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:25.409 11:26:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.409 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:25.667 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:25.667 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:25.667 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:25.925 [2024-07-13 11:27:00.504190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:25.925 [2024-07-13 11:27:00.504246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.925 [2024-07-13 11:27:00.504274] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:25.925 [2024-07-13 11:27:00.504297] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.925 [2024-07-13 11:27:00.506462] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.925 [2024-07-13 11:27:00.506523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 
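Because the array was created with -s, each member carries an on-disk raid superblock, so re-creating pt1 on top of malloc1 is enough for the examine path to find that superblock and pull the device back into raid_bdev1, as the DEBUG lines that follow show. A minimal sketch of observing this by hand, assuming the same socket, names, and UUID used in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Re-create only the first passthru member; its superblock identifies raid_bdev1.
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc -s $sock bdev_wait_for_examine
    # With one of two members present, the raid bdev is assembled but not started.
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'    # expected: configuring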
00:15:25.925 [2024-07-13 11:27:00.506607] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:25.925 [2024-07-13 11:27:00.506666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:25.925 pt1 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.925 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.183 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.183 "name": "raid_bdev1", 00:15:26.183 "uuid": "f346d9ed-b8ba-482a-b212-f905cae6ff97", 00:15:26.183 "strip_size_kb": 64, 00:15:26.183 "state": "configuring", 00:15:26.183 "raid_level": "raid0", 00:15:26.183 "superblock": true, 00:15:26.183 "num_base_bdevs": 2, 00:15:26.183 "num_base_bdevs_discovered": 1, 00:15:26.183 "num_base_bdevs_operational": 2, 00:15:26.183 "base_bdevs_list": [ 00:15:26.183 { 00:15:26.183 "name": "pt1", 00:15:26.183 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:26.183 "is_configured": true, 00:15:26.183 "data_offset": 2048, 00:15:26.183 "data_size": 63488 00:15:26.183 }, 00:15:26.183 { 00:15:26.183 "name": null, 00:15:26.183 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:26.183 "is_configured": false, 00:15:26.183 "data_offset": 2048, 00:15:26.183 "data_size": 63488 00:15:26.183 } 00:15:26.183 ] 00:15:26.183 }' 00:15:26.183 11:27:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.183 11:27:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.749 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:26.749 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:26.749 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:26.749 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:27.006 [2024-07-13 11:27:01.696574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:27.006 [2024-07-13 11:27:01.696681] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.006 [2024-07-13 11:27:01.696717] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:27.006 [2024-07-13 11:27:01.696742] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.006 [2024-07-13 11:27:01.697316] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.006 [2024-07-13 11:27:01.697403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:27.006 [2024-07-13 11:27:01.697505] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:27.006 [2024-07-13 11:27:01.697534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:27.006 [2024-07-13 11:27:01.697689] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:15:27.007 [2024-07-13 11:27:01.697714] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:27.007 [2024-07-13 11:27:01.697822] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:27.007 [2024-07-13 11:27:01.698148] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:15:27.007 [2024-07-13 11:27:01.698174] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:15:27.007 [2024-07-13 11:27:01.698309] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.007 pt2 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.007 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.264 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.264 "name": "raid_bdev1", 00:15:27.264 "uuid": "f346d9ed-b8ba-482a-b212-f905cae6ff97", 00:15:27.264 "strip_size_kb": 64, 00:15:27.264 "state": "online", 00:15:27.264 "raid_level": "raid0", 00:15:27.264 "superblock": true, 
00:15:27.264 "num_base_bdevs": 2, 00:15:27.264 "num_base_bdevs_discovered": 2, 00:15:27.264 "num_base_bdevs_operational": 2, 00:15:27.264 "base_bdevs_list": [ 00:15:27.264 { 00:15:27.264 "name": "pt1", 00:15:27.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:27.264 "is_configured": true, 00:15:27.264 "data_offset": 2048, 00:15:27.264 "data_size": 63488 00:15:27.264 }, 00:15:27.264 { 00:15:27.264 "name": "pt2", 00:15:27.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:27.264 "is_configured": true, 00:15:27.264 "data_offset": 2048, 00:15:27.264 "data_size": 63488 00:15:27.264 } 00:15:27.264 ] 00:15:27.264 }' 00:15:27.264 11:27:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.264 11:27:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.828 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:27.828 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:27.828 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:27.828 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:27.828 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:27.828 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:27.828 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:27.828 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:28.086 [2024-07-13 11:27:02.724963] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.086 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:28.086 "name": "raid_bdev1", 00:15:28.086 "aliases": [ 00:15:28.086 "f346d9ed-b8ba-482a-b212-f905cae6ff97" 00:15:28.086 ], 00:15:28.086 "product_name": "Raid Volume", 00:15:28.086 "block_size": 512, 00:15:28.086 "num_blocks": 126976, 00:15:28.086 "uuid": "f346d9ed-b8ba-482a-b212-f905cae6ff97", 00:15:28.086 "assigned_rate_limits": { 00:15:28.086 "rw_ios_per_sec": 0, 00:15:28.086 "rw_mbytes_per_sec": 0, 00:15:28.086 "r_mbytes_per_sec": 0, 00:15:28.086 "w_mbytes_per_sec": 0 00:15:28.086 }, 00:15:28.086 "claimed": false, 00:15:28.086 "zoned": false, 00:15:28.086 "supported_io_types": { 00:15:28.086 "read": true, 00:15:28.086 "write": true, 00:15:28.086 "unmap": true, 00:15:28.086 "flush": true, 00:15:28.086 "reset": true, 00:15:28.086 "nvme_admin": false, 00:15:28.086 "nvme_io": false, 00:15:28.086 "nvme_io_md": false, 00:15:28.086 "write_zeroes": true, 00:15:28.086 "zcopy": false, 00:15:28.086 "get_zone_info": false, 00:15:28.086 "zone_management": false, 00:15:28.086 "zone_append": false, 00:15:28.086 "compare": false, 00:15:28.086 "compare_and_write": false, 00:15:28.086 "abort": false, 00:15:28.086 "seek_hole": false, 00:15:28.086 "seek_data": false, 00:15:28.086 "copy": false, 00:15:28.086 "nvme_iov_md": false 00:15:28.086 }, 00:15:28.086 "memory_domains": [ 00:15:28.086 { 00:15:28.086 "dma_device_id": "system", 00:15:28.086 "dma_device_type": 1 00:15:28.086 }, 00:15:28.086 { 00:15:28.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.086 "dma_device_type": 2 00:15:28.086 }, 00:15:28.086 { 00:15:28.086 "dma_device_id": "system", 00:15:28.086 
"dma_device_type": 1 00:15:28.086 }, 00:15:28.086 { 00:15:28.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.086 "dma_device_type": 2 00:15:28.086 } 00:15:28.086 ], 00:15:28.086 "driver_specific": { 00:15:28.086 "raid": { 00:15:28.086 "uuid": "f346d9ed-b8ba-482a-b212-f905cae6ff97", 00:15:28.086 "strip_size_kb": 64, 00:15:28.086 "state": "online", 00:15:28.086 "raid_level": "raid0", 00:15:28.086 "superblock": true, 00:15:28.086 "num_base_bdevs": 2, 00:15:28.086 "num_base_bdevs_discovered": 2, 00:15:28.086 "num_base_bdevs_operational": 2, 00:15:28.086 "base_bdevs_list": [ 00:15:28.086 { 00:15:28.086 "name": "pt1", 00:15:28.086 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.086 "is_configured": true, 00:15:28.086 "data_offset": 2048, 00:15:28.086 "data_size": 63488 00:15:28.086 }, 00:15:28.086 { 00:15:28.086 "name": "pt2", 00:15:28.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:28.086 "is_configured": true, 00:15:28.086 "data_offset": 2048, 00:15:28.086 "data_size": 63488 00:15:28.086 } 00:15:28.086 ] 00:15:28.086 } 00:15:28.086 } 00:15:28.086 }' 00:15:28.086 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.086 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:28.086 pt2' 00:15:28.086 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:28.086 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:28.086 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:28.343 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:28.343 "name": "pt1", 00:15:28.343 "aliases": [ 00:15:28.343 "00000000-0000-0000-0000-000000000001" 00:15:28.343 ], 00:15:28.343 "product_name": "passthru", 00:15:28.343 "block_size": 512, 00:15:28.343 "num_blocks": 65536, 00:15:28.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:28.343 "assigned_rate_limits": { 00:15:28.343 "rw_ios_per_sec": 0, 00:15:28.343 "rw_mbytes_per_sec": 0, 00:15:28.343 "r_mbytes_per_sec": 0, 00:15:28.343 "w_mbytes_per_sec": 0 00:15:28.343 }, 00:15:28.343 "claimed": true, 00:15:28.343 "claim_type": "exclusive_write", 00:15:28.343 "zoned": false, 00:15:28.343 "supported_io_types": { 00:15:28.343 "read": true, 00:15:28.343 "write": true, 00:15:28.343 "unmap": true, 00:15:28.343 "flush": true, 00:15:28.343 "reset": true, 00:15:28.343 "nvme_admin": false, 00:15:28.343 "nvme_io": false, 00:15:28.343 "nvme_io_md": false, 00:15:28.343 "write_zeroes": true, 00:15:28.343 "zcopy": true, 00:15:28.343 "get_zone_info": false, 00:15:28.343 "zone_management": false, 00:15:28.343 "zone_append": false, 00:15:28.343 "compare": false, 00:15:28.343 "compare_and_write": false, 00:15:28.343 "abort": true, 00:15:28.343 "seek_hole": false, 00:15:28.343 "seek_data": false, 00:15:28.343 "copy": true, 00:15:28.343 "nvme_iov_md": false 00:15:28.343 }, 00:15:28.343 "memory_domains": [ 00:15:28.343 { 00:15:28.343 "dma_device_id": "system", 00:15:28.343 "dma_device_type": 1 00:15:28.343 }, 00:15:28.343 { 00:15:28.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.343 "dma_device_type": 2 00:15:28.343 } 00:15:28.343 ], 00:15:28.343 "driver_specific": { 00:15:28.343 "passthru": { 00:15:28.343 "name": "pt1", 00:15:28.343 "base_bdev_name": "malloc1" 
00:15:28.343 } 00:15:28.343 } 00:15:28.343 }' 00:15:28.343 11:27:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.343 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:28.600 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:28.600 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.600 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:28.600 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:28.600 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.600 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:28.600 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:28.600 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.858 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:28.858 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:28.858 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:28.858 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:28.858 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:29.115 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:29.115 "name": "pt2", 00:15:29.115 "aliases": [ 00:15:29.115 "00000000-0000-0000-0000-000000000002" 00:15:29.115 ], 00:15:29.115 "product_name": "passthru", 00:15:29.115 "block_size": 512, 00:15:29.115 "num_blocks": 65536, 00:15:29.116 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:29.116 "assigned_rate_limits": { 00:15:29.116 "rw_ios_per_sec": 0, 00:15:29.116 "rw_mbytes_per_sec": 0, 00:15:29.116 "r_mbytes_per_sec": 0, 00:15:29.116 "w_mbytes_per_sec": 0 00:15:29.116 }, 00:15:29.116 "claimed": true, 00:15:29.116 "claim_type": "exclusive_write", 00:15:29.116 "zoned": false, 00:15:29.116 "supported_io_types": { 00:15:29.116 "read": true, 00:15:29.116 "write": true, 00:15:29.116 "unmap": true, 00:15:29.116 "flush": true, 00:15:29.116 "reset": true, 00:15:29.116 "nvme_admin": false, 00:15:29.116 "nvme_io": false, 00:15:29.116 "nvme_io_md": false, 00:15:29.116 "write_zeroes": true, 00:15:29.116 "zcopy": true, 00:15:29.116 "get_zone_info": false, 00:15:29.116 "zone_management": false, 00:15:29.116 "zone_append": false, 00:15:29.116 "compare": false, 00:15:29.116 "compare_and_write": false, 00:15:29.116 "abort": true, 00:15:29.116 "seek_hole": false, 00:15:29.116 "seek_data": false, 00:15:29.116 "copy": true, 00:15:29.116 "nvme_iov_md": false 00:15:29.116 }, 00:15:29.116 "memory_domains": [ 00:15:29.116 { 00:15:29.116 "dma_device_id": "system", 00:15:29.116 "dma_device_type": 1 00:15:29.116 }, 00:15:29.116 { 00:15:29.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.116 "dma_device_type": 2 00:15:29.116 } 00:15:29.116 ], 00:15:29.116 "driver_specific": { 00:15:29.116 "passthru": { 00:15:29.116 "name": "pt2", 00:15:29.116 "base_bdev_name": "malloc2" 00:15:29.116 } 00:15:29.116 } 00:15:29.116 }' 00:15:29.116 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
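The per-base-bdev property checks traced around this point (block_size, md_size, md_interleave, dif_type) reduce to querying each passthru bdev and comparing a handful of jq fields. A minimal sketch of that loop, using the same RPC socket and passthru names as this run; the expected values (512 and null) are simply the ones this test asserts:

  # Sketch only: mirrors the checks traced in this run.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for name in pt1 pt2; do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')   # single bdev object
      [[ $(jq .block_size    <<< "$info") == 512  ]]      # data block size
      [[ $(jq .md_size       <<< "$info") == null ]]      # no separate metadata
      [[ $(jq .md_interleave <<< "$info") == null ]]
      [[ $(jq .dif_type      <<< "$info") == null ]]
  done
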
00:15:29.116 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:29.116 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:29.116 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:29.373 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:29.373 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:29.373 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:29.373 11:27:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:29.373 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:29.373 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:29.373 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:29.630 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:29.630 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:29.630 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:29.887 [2024-07-13 11:27:04.402157] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f346d9ed-b8ba-482a-b212-f905cae6ff97 '!=' f346d9ed-b8ba-482a-b212-f905cae6ff97 ']' 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 121084 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 121084 ']' 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 121084 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.887 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121084 00:15:29.888 killing process with pid 121084 00:15:29.888 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:29.888 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:29.888 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121084' 00:15:29.888 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 121084 00:15:29.888 11:27:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 121084 00:15:29.888 [2024-07-13 11:27:04.437146] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.888 [2024-07-13 11:27:04.437206] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.888 [2024-07-13 11:27:04.437282] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base 
bdevs is 0, going to free all in destruct 00:15:29.888 [2024-07-13 11:27:04.437292] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:15:29.888 [2024-07-13 11:27:04.564705] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.818 ************************************ 00:15:30.818 END TEST raid_superblock_test 00:15:30.818 ************************************ 00:15:30.818 11:27:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:30.818 00:15:30.818 real 0m11.481s 00:15:30.818 user 0m20.683s 00:15:30.818 sys 0m1.239s 00:15:30.818 11:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.818 11:27:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.818 11:27:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:30.818 11:27:05 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:15:30.818 11:27:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:30.818 11:27:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.818 11:27:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.818 ************************************ 00:15:30.818 START TEST raid_read_error_test 00:15:30.818 ************************************ 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' 
raid1 ']' 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.sJtFGj9sxm 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=121471 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 121471 /var/tmp/spdk-raid.sock 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 121471 ']' 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.818 11:27:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.075 [2024-07-13 11:27:05.605071] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
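The read-error pass that starts here follows a fixed setup, traced in the lines that follow: bdevperf is launched against a dedicated RPC socket, each base bdev is stacked as malloc -> error -> passthru so read failures can be injected mid-run, and a raid0 array is built on top. A rough sketch with the exact paths and names this run uses; redirecting bdevperf output into bdevperf_log is an assumption based on the later grep of that file:

  # Sketch of the setup traced below (paths/names as in this run).
  bdevperf_log=$(mktemp -p /raidtest)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid \
      > "$bdevperf_log" &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # helper from autotest_common.sh
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2; do
      $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
      $rpc bdev_error_create BaseBdev${i}_malloc           # exposes EE_BaseBdev${i}_malloc
      $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
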
00:15:31.075 [2024-07-13 11:27:05.605256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121471 ] 00:15:31.075 [2024-07-13 11:27:05.760430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.332 [2024-07-13 11:27:06.003406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.590 [2024-07-13 11:27:06.189095] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.157 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.157 11:27:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:32.157 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:32.157 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:32.157 BaseBdev1_malloc 00:15:32.157 11:27:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:32.415 true 00:15:32.415 11:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:32.673 [2024-07-13 11:27:07.354311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:32.673 [2024-07-13 11:27:07.354411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.673 [2024-07-13 11:27:07.354449] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:32.673 [2024-07-13 11:27:07.354470] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.673 [2024-07-13 11:27:07.356711] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.674 [2024-07-13 11:27:07.356756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:32.674 BaseBdev1 00:15:32.674 11:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:32.674 11:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:32.932 BaseBdev2_malloc 00:15:32.932 11:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:33.190 true 00:15:33.190 11:27:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:33.448 [2024-07-13 11:27:08.022738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:33.449 [2024-07-13 11:27:08.022827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.449 [2024-07-13 11:27:08.022875] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:33.449 [2024-07-13 11:27:08.022899] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.449 [2024-07-13 11:27:08.025092] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.449 [2024-07-13 11:27:08.025137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:33.449 BaseBdev2 00:15:33.449 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:33.706 [2024-07-13 11:27:08.214810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.706 [2024-07-13 11:27:08.216811] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.706 [2024-07-13 11:27:08.217064] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:15:33.706 [2024-07-13 11:27:08.217080] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:33.706 [2024-07-13 11:27:08.217189] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:33.706 [2024-07-13 11:27:08.217521] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:15:33.706 [2024-07-13 11:27:08.217542] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:15:33.706 [2024-07-13 11:27:08.217678] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.706 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.707 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.965 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.965 "name": "raid_bdev1", 00:15:33.965 "uuid": "01b82cf6-7bcd-4dec-a3b9-c915f5dc14ef", 00:15:33.965 "strip_size_kb": 64, 00:15:33.965 "state": "online", 00:15:33.965 "raid_level": "raid0", 00:15:33.965 "superblock": true, 00:15:33.965 "num_base_bdevs": 2, 00:15:33.965 "num_base_bdevs_discovered": 2, 00:15:33.965 "num_base_bdevs_operational": 2, 00:15:33.965 "base_bdevs_list": [ 00:15:33.965 { 00:15:33.965 "name": "BaseBdev1", 
00:15:33.965 "uuid": "9900d3b5-e46a-5142-abb4-7501d82ac692", 00:15:33.965 "is_configured": true, 00:15:33.965 "data_offset": 2048, 00:15:33.965 "data_size": 63488 00:15:33.965 }, 00:15:33.965 { 00:15:33.965 "name": "BaseBdev2", 00:15:33.965 "uuid": "3816c397-21f6-5a6b-b9ed-5873df0ceeb2", 00:15:33.965 "is_configured": true, 00:15:33.965 "data_offset": 2048, 00:15:33.965 "data_size": 63488 00:15:33.965 } 00:15:33.965 ] 00:15:33.965 }' 00:15:33.965 11:27:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.965 11:27:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.533 11:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:34.533 11:27:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:34.533 [2024-07-13 11:27:09.180165] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:15:35.468 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:35.726 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.727 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.727 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.727 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.727 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.727 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.985 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.985 "name": "raid_bdev1", 00:15:35.985 "uuid": "01b82cf6-7bcd-4dec-a3b9-c915f5dc14ef", 00:15:35.985 "strip_size_kb": 64, 00:15:35.985 "state": "online", 00:15:35.985 "raid_level": "raid0", 00:15:35.985 "superblock": true, 00:15:35.985 "num_base_bdevs": 2, 00:15:35.985 "num_base_bdevs_discovered": 2, 00:15:35.985 "num_base_bdevs_operational": 2, 00:15:35.985 "base_bdevs_list": [ 00:15:35.985 { 00:15:35.985 "name": "BaseBdev1", 00:15:35.985 "uuid": 
"9900d3b5-e46a-5142-abb4-7501d82ac692", 00:15:35.985 "is_configured": true, 00:15:35.985 "data_offset": 2048, 00:15:35.985 "data_size": 63488 00:15:35.985 }, 00:15:35.985 { 00:15:35.985 "name": "BaseBdev2", 00:15:35.985 "uuid": "3816c397-21f6-5a6b-b9ed-5873df0ceeb2", 00:15:35.985 "is_configured": true, 00:15:35.985 "data_offset": 2048, 00:15:35.985 "data_size": 63488 00:15:35.985 } 00:15:35.985 ] 00:15:35.985 }' 00:15:35.985 11:27:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.985 11:27:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.552 11:27:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:36.811 [2024-07-13 11:27:11.512499] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:36.811 [2024-07-13 11:27:11.512569] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.811 [2024-07-13 11:27:11.515069] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.811 [2024-07-13 11:27:11.515114] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.811 [2024-07-13 11:27:11.515151] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.811 [2024-07-13 11:27:11.515161] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:15:36.811 0 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 121471 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 121471 ']' 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 121471 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121471 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121471' 00:15:36.811 killing process with pid 121471 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 121471 00:15:36.811 11:27:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 121471 00:15:36.811 [2024-07-13 11:27:11.546868] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:37.069 [2024-07-13 11:27:11.680961] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.004 11:27:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.sJtFGj9sxm 00:15:38.004 11:27:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:38.004 11:27:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:38.004 ************************************ 00:15:38.004 END TEST raid_read_error_test 00:15:38.004 ************************************ 00:15:38.004 11:27:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:15:38.004 11:27:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:38.004 11:27:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:38.004 11:27:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:38.005 11:27:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:15:38.005 00:15:38.005 real 0m7.116s 00:15:38.005 user 0m10.778s 00:15:38.005 sys 0m0.845s 00:15:38.005 11:27:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.005 11:27:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.005 11:27:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:38.005 11:27:12 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:15:38.005 11:27:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:38.005 11:27:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.005 11:27:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.005 ************************************ 00:15:38.005 START TEST raid_write_error_test 00:15:38.005 ************************************ 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:15:38.005 11:27:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.YwmQsAWjLb 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=121683 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 121683 /var/tmp/spdk-raid.sock 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 121683 ']' 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.005 11:27:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.268 [2024-07-13 11:27:12.774536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
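The write pass repeats the same bdevperf and malloc -> error -> passthru setup as the read pass, traced in the lines below; the part specific to this test is injecting a write failure into the first base bdev while I/O is running and then checking that bdevperf reports a non-zero failure rate. A sketch of that tail end, with the commands as traced in this run (the exact background/wait ordering is simplified):

  # Sketch only: drive I/O, inject a write error, then score the run.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests &
  sleep 1
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
  wait                                               # simplification: let the run finish
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  [[ $fail_per_s != "0.00" ]]                        # raid0 has no redundancy, so the
                                                     # injected failures must surface
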
00:15:38.269 [2024-07-13 11:27:12.774760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121683 ] 00:15:38.269 [2024-07-13 11:27:12.927076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.527 [2024-07-13 11:27:13.091802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.527 [2024-07-13 11:27:13.255986] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.093 11:27:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:39.093 11:27:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:39.093 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:39.093 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.351 BaseBdev1_malloc 00:15:39.351 11:27:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:39.609 true 00:15:39.609 11:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:39.868 [2024-07-13 11:27:14.403571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:39.868 [2024-07-13 11:27:14.403679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.868 [2024-07-13 11:27:14.403718] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:39.868 [2024-07-13 11:27:14.403740] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.868 [2024-07-13 11:27:14.406506] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.868 [2024-07-13 11:27:14.406578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.868 BaseBdev1 00:15:39.868 11:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:39.868 11:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:40.126 BaseBdev2_malloc 00:15:40.126 11:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:40.384 true 00:15:40.384 11:27:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:40.642 [2024-07-13 11:27:15.197668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:40.642 [2024-07-13 11:27:15.197778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.642 [2024-07-13 11:27:15.197820] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:15:40.642 [2024-07-13 
11:27:15.197842] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.642 [2024-07-13 11:27:15.199954] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.642 [2024-07-13 11:27:15.200005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:40.642 BaseBdev2 00:15:40.642 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:40.642 [2024-07-13 11:27:15.385742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.642 [2024-07-13 11:27:15.387629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.642 [2024-07-13 11:27:15.387853] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:15:40.642 [2024-07-13 11:27:15.387902] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:40.642 [2024-07-13 11:27:15.388064] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:40.900 [2024-07-13 11:27:15.388454] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:15:40.900 [2024-07-13 11:27:15.388481] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:15:40.900 [2024-07-13 11:27:15.388656] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.900 "name": "raid_bdev1", 00:15:40.900 "uuid": "5d362753-0872-4010-89c9-94c2da5e11eb", 00:15:40.900 "strip_size_kb": 64, 00:15:40.900 "state": "online", 00:15:40.900 "raid_level": "raid0", 00:15:40.900 "superblock": true, 00:15:40.900 "num_base_bdevs": 2, 00:15:40.900 "num_base_bdevs_discovered": 2, 00:15:40.900 "num_base_bdevs_operational": 2, 00:15:40.900 "base_bdevs_list": [ 00:15:40.900 { 00:15:40.900 
"name": "BaseBdev1", 00:15:40.900 "uuid": "80637593-1abc-583a-83df-8c9e11cbf49f", 00:15:40.900 "is_configured": true, 00:15:40.900 "data_offset": 2048, 00:15:40.900 "data_size": 63488 00:15:40.900 }, 00:15:40.900 { 00:15:40.900 "name": "BaseBdev2", 00:15:40.900 "uuid": "be10dc2d-f4d0-5050-89fd-3ed7aeaa2b71", 00:15:40.900 "is_configured": true, 00:15:40.900 "data_offset": 2048, 00:15:40.900 "data_size": 63488 00:15:40.900 } 00:15:40.900 ] 00:15:40.900 }' 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.900 11:27:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.837 11:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:41.837 11:27:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:41.837 [2024-07-13 11:27:16.322919] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.772 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.030 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.030 "name": "raid_bdev1", 00:15:43.030 "uuid": "5d362753-0872-4010-89c9-94c2da5e11eb", 00:15:43.030 "strip_size_kb": 64, 00:15:43.030 "state": "online", 00:15:43.030 "raid_level": "raid0", 00:15:43.030 "superblock": true, 00:15:43.030 "num_base_bdevs": 2, 00:15:43.030 "num_base_bdevs_discovered": 2, 00:15:43.030 "num_base_bdevs_operational": 2, 00:15:43.030 "base_bdevs_list": [ 00:15:43.030 { 00:15:43.030 
"name": "BaseBdev1", 00:15:43.030 "uuid": "80637593-1abc-583a-83df-8c9e11cbf49f", 00:15:43.030 "is_configured": true, 00:15:43.030 "data_offset": 2048, 00:15:43.030 "data_size": 63488 00:15:43.030 }, 00:15:43.030 { 00:15:43.030 "name": "BaseBdev2", 00:15:43.030 "uuid": "be10dc2d-f4d0-5050-89fd-3ed7aeaa2b71", 00:15:43.030 "is_configured": true, 00:15:43.030 "data_offset": 2048, 00:15:43.030 "data_size": 63488 00:15:43.030 } 00:15:43.030 ] 00:15:43.030 }' 00:15:43.030 11:27:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.030 11:27:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:43.965 [2024-07-13 11:27:18.626684] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.965 [2024-07-13 11:27:18.626741] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.965 [2024-07-13 11:27:18.629401] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.965 [2024-07-13 11:27:18.629451] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.965 [2024-07-13 11:27:18.629488] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.965 [2024-07-13 11:27:18.629500] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:15:43.965 0 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 121683 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 121683 ']' 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 121683 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121683 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:43.965 killing process with pid 121683 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121683' 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 121683 00:15:43.965 [2024-07-13 11:27:18.660849] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:43.965 11:27:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 121683 00:15:44.223 [2024-07-13 11:27:18.744764] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.YwmQsAWjLb 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:45.158 ************************************ 00:15:45.158 END TEST raid_write_error_test 00:15:45.158 
************************************ 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:15:45.158 00:15:45.158 real 0m7.015s 00:15:45.158 user 0m10.824s 00:15:45.158 sys 0m0.712s 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:45.158 11:27:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.158 11:27:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:45.158 11:27:19 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:45.158 11:27:19 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:45.158 11:27:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:45.159 11:27:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:45.159 11:27:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:45.159 ************************************ 00:15:45.159 START TEST raid_state_function_test 00:15:45.159 ************************************ 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=121883 00:15:45.159 Process raid pid: 121883 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 121883' 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 121883 /var/tmp/spdk-raid.sock 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 121883 ']' 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:45.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:45.159 11:27:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.159 [2024-07-13 11:27:19.872581] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
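For reference, the raid_state_function_test run that starts here drives a bare bdev_svc instance purely over its RPC socket. The outline below is a minimal, untested sketch of that sequence, distilled only from commands visible in this trace; the repository path and socket path are taken from the log, the service is simply backgrounded instead of using the test's waitforlisten/killprocess helpers, and jq is assumed to be installed.

SPDK_DIR=/home/vagrant/spdk_repo/spdk            # assumption: checkout path seen in this log
SOCK=/var/tmp/spdk-raid.sock
RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"

# Start the bare bdev service the test talks to over the RPC socket.
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
svc_pid=$!
sleep 1                                          # crude stand-in for the test's waitforlisten helper

# Register the raid before its members exist; it stays in the "configuring"
# state until both base bdevs appear and are claimed.
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# Create the members (32 MiB malloc bdevs with 512-byte blocks); once both
# are present the raid transitions to "online".
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2

# Inspect the state the same way the verify_raid_bdev_state helper does.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

# Tear down.
$RPC bdev_raid_delete Existed_Raid
$RPC bdev_malloc_delete BaseBdev1
$RPC bdev_malloc_delete BaseBdev2
kill $svc_pid

In this no-superblock run the base_bdevs_list entries report data_offset 0 and data_size 65536, i.e. the whole malloc bdev is used for data.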
00:15:45.159 [2024-07-13 11:27:19.872773] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.416 [2024-07-13 11:27:20.034832] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.673 [2024-07-13 11:27:20.198327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.673 [2024-07-13 11:27:20.369681] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.238 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.238 11:27:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:46.238 11:27:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:46.512 [2024-07-13 11:27:21.061016] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.512 [2024-07-13 11:27:21.061089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.512 [2024-07-13 11:27:21.061118] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.512 [2024-07-13 11:27:21.061141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.512 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.794 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.794 "name": "Existed_Raid", 00:15:46.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.794 "strip_size_kb": 64, 00:15:46.794 "state": "configuring", 00:15:46.794 "raid_level": "concat", 00:15:46.794 "superblock": false, 00:15:46.794 "num_base_bdevs": 2, 00:15:46.794 "num_base_bdevs_discovered": 0, 00:15:46.794 "num_base_bdevs_operational": 2, 00:15:46.794 
"base_bdevs_list": [ 00:15:46.794 { 00:15:46.794 "name": "BaseBdev1", 00:15:46.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.794 "is_configured": false, 00:15:46.794 "data_offset": 0, 00:15:46.794 "data_size": 0 00:15:46.794 }, 00:15:46.794 { 00:15:46.794 "name": "BaseBdev2", 00:15:46.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.794 "is_configured": false, 00:15:46.794 "data_offset": 0, 00:15:46.794 "data_size": 0 00:15:46.794 } 00:15:46.794 ] 00:15:46.794 }' 00:15:46.794 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.794 11:27:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.372 11:27:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:47.631 [2024-07-13 11:27:22.249135] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.631 [2024-07-13 11:27:22.249168] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:47.631 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:47.890 [2024-07-13 11:27:22.501194] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:47.890 [2024-07-13 11:27:22.501248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:47.890 [2024-07-13 11:27:22.501274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.890 [2024-07-13 11:27:22.501296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.890 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:48.149 [2024-07-13 11:27:22.758543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.149 BaseBdev1 00:15:48.149 11:27:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:48.149 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:48.149 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:48.149 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:48.149 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:48.149 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:48.149 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.407 11:27:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:48.407 [ 00:15:48.407 { 00:15:48.407 "name": "BaseBdev1", 00:15:48.407 "aliases": [ 00:15:48.407 "7cc99c58-6dce-439c-b5db-dce4482710d1" 00:15:48.407 ], 00:15:48.407 "product_name": "Malloc disk", 00:15:48.407 "block_size": 512, 
00:15:48.407 "num_blocks": 65536, 00:15:48.407 "uuid": "7cc99c58-6dce-439c-b5db-dce4482710d1", 00:15:48.407 "assigned_rate_limits": { 00:15:48.407 "rw_ios_per_sec": 0, 00:15:48.407 "rw_mbytes_per_sec": 0, 00:15:48.407 "r_mbytes_per_sec": 0, 00:15:48.407 "w_mbytes_per_sec": 0 00:15:48.407 }, 00:15:48.407 "claimed": true, 00:15:48.407 "claim_type": "exclusive_write", 00:15:48.407 "zoned": false, 00:15:48.407 "supported_io_types": { 00:15:48.407 "read": true, 00:15:48.407 "write": true, 00:15:48.407 "unmap": true, 00:15:48.407 "flush": true, 00:15:48.407 "reset": true, 00:15:48.407 "nvme_admin": false, 00:15:48.407 "nvme_io": false, 00:15:48.407 "nvme_io_md": false, 00:15:48.407 "write_zeroes": true, 00:15:48.407 "zcopy": true, 00:15:48.407 "get_zone_info": false, 00:15:48.407 "zone_management": false, 00:15:48.407 "zone_append": false, 00:15:48.407 "compare": false, 00:15:48.407 "compare_and_write": false, 00:15:48.407 "abort": true, 00:15:48.407 "seek_hole": false, 00:15:48.407 "seek_data": false, 00:15:48.407 "copy": true, 00:15:48.407 "nvme_iov_md": false 00:15:48.407 }, 00:15:48.407 "memory_domains": [ 00:15:48.407 { 00:15:48.407 "dma_device_id": "system", 00:15:48.407 "dma_device_type": 1 00:15:48.407 }, 00:15:48.407 { 00:15:48.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.407 "dma_device_type": 2 00:15:48.407 } 00:15:48.407 ], 00:15:48.407 "driver_specific": {} 00:15:48.407 } 00:15:48.407 ] 00:15:48.407 11:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:48.407 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:48.407 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.407 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:48.408 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:48.408 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:48.408 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:48.408 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.408 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.408 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.408 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.666 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.666 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.925 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.925 "name": "Existed_Raid", 00:15:48.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.925 "strip_size_kb": 64, 00:15:48.925 "state": "configuring", 00:15:48.925 "raid_level": "concat", 00:15:48.925 "superblock": false, 00:15:48.925 "num_base_bdevs": 2, 00:15:48.925 "num_base_bdevs_discovered": 1, 00:15:48.925 "num_base_bdevs_operational": 2, 00:15:48.925 "base_bdevs_list": [ 00:15:48.925 { 00:15:48.925 "name": 
"BaseBdev1", 00:15:48.925 "uuid": "7cc99c58-6dce-439c-b5db-dce4482710d1", 00:15:48.925 "is_configured": true, 00:15:48.925 "data_offset": 0, 00:15:48.925 "data_size": 65536 00:15:48.925 }, 00:15:48.925 { 00:15:48.925 "name": "BaseBdev2", 00:15:48.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.925 "is_configured": false, 00:15:48.925 "data_offset": 0, 00:15:48.925 "data_size": 0 00:15:48.925 } 00:15:48.925 ] 00:15:48.925 }' 00:15:48.925 11:27:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.925 11:27:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.498 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:49.756 [2024-07-13 11:27:24.326923] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:49.756 [2024-07-13 11:27:24.326992] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:15:49.756 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:50.014 [2024-07-13 11:27:24.518975] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:50.014 [2024-07-13 11:27:24.520587] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.014 [2024-07-13 11:27:24.520642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.014 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.272 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.272 "name": "Existed_Raid", 
00:15:50.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.272 "strip_size_kb": 64, 00:15:50.272 "state": "configuring", 00:15:50.272 "raid_level": "concat", 00:15:50.272 "superblock": false, 00:15:50.272 "num_base_bdevs": 2, 00:15:50.272 "num_base_bdevs_discovered": 1, 00:15:50.272 "num_base_bdevs_operational": 2, 00:15:50.272 "base_bdevs_list": [ 00:15:50.272 { 00:15:50.272 "name": "BaseBdev1", 00:15:50.272 "uuid": "7cc99c58-6dce-439c-b5db-dce4482710d1", 00:15:50.272 "is_configured": true, 00:15:50.272 "data_offset": 0, 00:15:50.272 "data_size": 65536 00:15:50.272 }, 00:15:50.272 { 00:15:50.272 "name": "BaseBdev2", 00:15:50.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.272 "is_configured": false, 00:15:50.272 "data_offset": 0, 00:15:50.272 "data_size": 0 00:15:50.272 } 00:15:50.272 ] 00:15:50.272 }' 00:15:50.272 11:27:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.272 11:27:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.839 11:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.097 [2024-07-13 11:27:25.654126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.097 [2024-07-13 11:27:25.654193] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:15:51.097 [2024-07-13 11:27:25.654205] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:51.097 [2024-07-13 11:27:25.654332] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:51.097 [2024-07-13 11:27:25.654665] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:15:51.097 [2024-07-13 11:27:25.654688] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:15:51.097 BaseBdev2 00:15:51.097 [2024-07-13 11:27:25.654994] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.097 11:27:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:51.097 11:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:51.097 11:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:51.097 11:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:51.097 11:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:51.097 11:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:51.097 11:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:51.355 11:27:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:51.614 [ 00:15:51.614 { 00:15:51.614 "name": "BaseBdev2", 00:15:51.614 "aliases": [ 00:15:51.614 "ba551e1c-60c5-4856-b0ed-b98a698017f9" 00:15:51.614 ], 00:15:51.614 "product_name": "Malloc disk", 00:15:51.614 "block_size": 512, 00:15:51.614 "num_blocks": 65536, 00:15:51.614 "uuid": "ba551e1c-60c5-4856-b0ed-b98a698017f9", 
00:15:51.614 "assigned_rate_limits": { 00:15:51.614 "rw_ios_per_sec": 0, 00:15:51.614 "rw_mbytes_per_sec": 0, 00:15:51.614 "r_mbytes_per_sec": 0, 00:15:51.614 "w_mbytes_per_sec": 0 00:15:51.614 }, 00:15:51.614 "claimed": true, 00:15:51.614 "claim_type": "exclusive_write", 00:15:51.614 "zoned": false, 00:15:51.614 "supported_io_types": { 00:15:51.614 "read": true, 00:15:51.614 "write": true, 00:15:51.614 "unmap": true, 00:15:51.614 "flush": true, 00:15:51.614 "reset": true, 00:15:51.614 "nvme_admin": false, 00:15:51.614 "nvme_io": false, 00:15:51.614 "nvme_io_md": false, 00:15:51.614 "write_zeroes": true, 00:15:51.614 "zcopy": true, 00:15:51.614 "get_zone_info": false, 00:15:51.614 "zone_management": false, 00:15:51.614 "zone_append": false, 00:15:51.614 "compare": false, 00:15:51.614 "compare_and_write": false, 00:15:51.614 "abort": true, 00:15:51.614 "seek_hole": false, 00:15:51.614 "seek_data": false, 00:15:51.614 "copy": true, 00:15:51.614 "nvme_iov_md": false 00:15:51.614 }, 00:15:51.614 "memory_domains": [ 00:15:51.614 { 00:15:51.614 "dma_device_id": "system", 00:15:51.614 "dma_device_type": 1 00:15:51.614 }, 00:15:51.614 { 00:15:51.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.614 "dma_device_type": 2 00:15:51.614 } 00:15:51.614 ], 00:15:51.614 "driver_specific": {} 00:15:51.614 } 00:15:51.614 ] 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.614 "name": "Existed_Raid", 00:15:51.614 "uuid": "ae70ba42-384a-4755-9a67-09add1e3323c", 00:15:51.614 "strip_size_kb": 64, 00:15:51.614 "state": "online", 00:15:51.614 "raid_level": "concat", 00:15:51.614 "superblock": false, 00:15:51.614 "num_base_bdevs": 2, 00:15:51.614 "num_base_bdevs_discovered": 2, 00:15:51.614 
"num_base_bdevs_operational": 2, 00:15:51.614 "base_bdevs_list": [ 00:15:51.614 { 00:15:51.614 "name": "BaseBdev1", 00:15:51.614 "uuid": "7cc99c58-6dce-439c-b5db-dce4482710d1", 00:15:51.614 "is_configured": true, 00:15:51.614 "data_offset": 0, 00:15:51.614 "data_size": 65536 00:15:51.614 }, 00:15:51.614 { 00:15:51.614 "name": "BaseBdev2", 00:15:51.614 "uuid": "ba551e1c-60c5-4856-b0ed-b98a698017f9", 00:15:51.614 "is_configured": true, 00:15:51.614 "data_offset": 0, 00:15:51.614 "data_size": 65536 00:15:51.614 } 00:15:51.614 ] 00:15:51.614 }' 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.614 11:27:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.181 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:52.181 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:52.181 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:52.181 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:52.181 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:52.181 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:52.181 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:52.181 11:27:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:52.440 [2024-07-13 11:27:27.070614] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.440 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:52.440 "name": "Existed_Raid", 00:15:52.440 "aliases": [ 00:15:52.440 "ae70ba42-384a-4755-9a67-09add1e3323c" 00:15:52.440 ], 00:15:52.440 "product_name": "Raid Volume", 00:15:52.440 "block_size": 512, 00:15:52.440 "num_blocks": 131072, 00:15:52.440 "uuid": "ae70ba42-384a-4755-9a67-09add1e3323c", 00:15:52.440 "assigned_rate_limits": { 00:15:52.440 "rw_ios_per_sec": 0, 00:15:52.440 "rw_mbytes_per_sec": 0, 00:15:52.440 "r_mbytes_per_sec": 0, 00:15:52.440 "w_mbytes_per_sec": 0 00:15:52.440 }, 00:15:52.440 "claimed": false, 00:15:52.440 "zoned": false, 00:15:52.440 "supported_io_types": { 00:15:52.440 "read": true, 00:15:52.440 "write": true, 00:15:52.440 "unmap": true, 00:15:52.440 "flush": true, 00:15:52.440 "reset": true, 00:15:52.440 "nvme_admin": false, 00:15:52.440 "nvme_io": false, 00:15:52.440 "nvme_io_md": false, 00:15:52.440 "write_zeroes": true, 00:15:52.440 "zcopy": false, 00:15:52.440 "get_zone_info": false, 00:15:52.440 "zone_management": false, 00:15:52.440 "zone_append": false, 00:15:52.440 "compare": false, 00:15:52.440 "compare_and_write": false, 00:15:52.440 "abort": false, 00:15:52.440 "seek_hole": false, 00:15:52.440 "seek_data": false, 00:15:52.440 "copy": false, 00:15:52.440 "nvme_iov_md": false 00:15:52.440 }, 00:15:52.440 "memory_domains": [ 00:15:52.440 { 00:15:52.440 "dma_device_id": "system", 00:15:52.440 "dma_device_type": 1 00:15:52.440 }, 00:15:52.440 { 00:15:52.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.440 "dma_device_type": 2 00:15:52.440 }, 00:15:52.440 { 00:15:52.440 "dma_device_id": "system", 00:15:52.440 "dma_device_type": 1 00:15:52.440 }, 
00:15:52.440 { 00:15:52.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.440 "dma_device_type": 2 00:15:52.440 } 00:15:52.440 ], 00:15:52.440 "driver_specific": { 00:15:52.440 "raid": { 00:15:52.440 "uuid": "ae70ba42-384a-4755-9a67-09add1e3323c", 00:15:52.440 "strip_size_kb": 64, 00:15:52.440 "state": "online", 00:15:52.440 "raid_level": "concat", 00:15:52.440 "superblock": false, 00:15:52.440 "num_base_bdevs": 2, 00:15:52.440 "num_base_bdevs_discovered": 2, 00:15:52.440 "num_base_bdevs_operational": 2, 00:15:52.440 "base_bdevs_list": [ 00:15:52.440 { 00:15:52.440 "name": "BaseBdev1", 00:15:52.440 "uuid": "7cc99c58-6dce-439c-b5db-dce4482710d1", 00:15:52.440 "is_configured": true, 00:15:52.440 "data_offset": 0, 00:15:52.440 "data_size": 65536 00:15:52.440 }, 00:15:52.440 { 00:15:52.440 "name": "BaseBdev2", 00:15:52.440 "uuid": "ba551e1c-60c5-4856-b0ed-b98a698017f9", 00:15:52.440 "is_configured": true, 00:15:52.440 "data_offset": 0, 00:15:52.440 "data_size": 65536 00:15:52.440 } 00:15:52.440 ] 00:15:52.440 } 00:15:52.440 } 00:15:52.440 }' 00:15:52.440 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:52.440 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:52.440 BaseBdev2' 00:15:52.440 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:52.440 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:52.440 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:52.699 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:52.699 "name": "BaseBdev1", 00:15:52.699 "aliases": [ 00:15:52.699 "7cc99c58-6dce-439c-b5db-dce4482710d1" 00:15:52.699 ], 00:15:52.699 "product_name": "Malloc disk", 00:15:52.699 "block_size": 512, 00:15:52.699 "num_blocks": 65536, 00:15:52.699 "uuid": "7cc99c58-6dce-439c-b5db-dce4482710d1", 00:15:52.699 "assigned_rate_limits": { 00:15:52.699 "rw_ios_per_sec": 0, 00:15:52.699 "rw_mbytes_per_sec": 0, 00:15:52.699 "r_mbytes_per_sec": 0, 00:15:52.699 "w_mbytes_per_sec": 0 00:15:52.699 }, 00:15:52.699 "claimed": true, 00:15:52.699 "claim_type": "exclusive_write", 00:15:52.699 "zoned": false, 00:15:52.699 "supported_io_types": { 00:15:52.699 "read": true, 00:15:52.699 "write": true, 00:15:52.699 "unmap": true, 00:15:52.699 "flush": true, 00:15:52.699 "reset": true, 00:15:52.699 "nvme_admin": false, 00:15:52.699 "nvme_io": false, 00:15:52.699 "nvme_io_md": false, 00:15:52.699 "write_zeroes": true, 00:15:52.699 "zcopy": true, 00:15:52.699 "get_zone_info": false, 00:15:52.699 "zone_management": false, 00:15:52.699 "zone_append": false, 00:15:52.699 "compare": false, 00:15:52.699 "compare_and_write": false, 00:15:52.699 "abort": true, 00:15:52.699 "seek_hole": false, 00:15:52.699 "seek_data": false, 00:15:52.699 "copy": true, 00:15:52.699 "nvme_iov_md": false 00:15:52.699 }, 00:15:52.699 "memory_domains": [ 00:15:52.699 { 00:15:52.699 "dma_device_id": "system", 00:15:52.699 "dma_device_type": 1 00:15:52.699 }, 00:15:52.699 { 00:15:52.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.699 "dma_device_type": 2 00:15:52.699 } 00:15:52.699 ], 00:15:52.699 "driver_specific": {} 00:15:52.699 }' 00:15:52.699 11:27:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.957 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:52.957 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:52.957 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.957 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:52.957 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:52.957 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:52.957 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.215 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:53.215 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.215 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.215 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:53.215 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:53.215 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:53.215 11:27:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:53.474 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:53.474 "name": "BaseBdev2", 00:15:53.474 "aliases": [ 00:15:53.474 "ba551e1c-60c5-4856-b0ed-b98a698017f9" 00:15:53.474 ], 00:15:53.474 "product_name": "Malloc disk", 00:15:53.474 "block_size": 512, 00:15:53.474 "num_blocks": 65536, 00:15:53.474 "uuid": "ba551e1c-60c5-4856-b0ed-b98a698017f9", 00:15:53.474 "assigned_rate_limits": { 00:15:53.474 "rw_ios_per_sec": 0, 00:15:53.474 "rw_mbytes_per_sec": 0, 00:15:53.474 "r_mbytes_per_sec": 0, 00:15:53.474 "w_mbytes_per_sec": 0 00:15:53.474 }, 00:15:53.474 "claimed": true, 00:15:53.474 "claim_type": "exclusive_write", 00:15:53.474 "zoned": false, 00:15:53.474 "supported_io_types": { 00:15:53.474 "read": true, 00:15:53.474 "write": true, 00:15:53.474 "unmap": true, 00:15:53.474 "flush": true, 00:15:53.474 "reset": true, 00:15:53.474 "nvme_admin": false, 00:15:53.474 "nvme_io": false, 00:15:53.474 "nvme_io_md": false, 00:15:53.474 "write_zeroes": true, 00:15:53.474 "zcopy": true, 00:15:53.474 "get_zone_info": false, 00:15:53.474 "zone_management": false, 00:15:53.474 "zone_append": false, 00:15:53.474 "compare": false, 00:15:53.474 "compare_and_write": false, 00:15:53.474 "abort": true, 00:15:53.474 "seek_hole": false, 00:15:53.474 "seek_data": false, 00:15:53.474 "copy": true, 00:15:53.474 "nvme_iov_md": false 00:15:53.474 }, 00:15:53.474 "memory_domains": [ 00:15:53.474 { 00:15:53.474 "dma_device_id": "system", 00:15:53.474 "dma_device_type": 1 00:15:53.474 }, 00:15:53.474 { 00:15:53.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:53.474 "dma_device_type": 2 00:15:53.474 } 00:15:53.474 ], 00:15:53.474 "driver_specific": {} 00:15:53.474 }' 00:15:53.474 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.474 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:53.474 11:27:28 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:53.474 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.733 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:53.733 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:53.733 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.733 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:53.733 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:53.733 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.733 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:53.991 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:53.991 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:54.250 [2024-07-13 11:27:28.774885] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:54.250 [2024-07-13 11:27:28.774919] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.250 [2024-07-13 11:27:28.774983] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.250 11:27:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.508 11:27:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.509 "name": "Existed_Raid", 00:15:54.509 "uuid": "ae70ba42-384a-4755-9a67-09add1e3323c", 00:15:54.509 "strip_size_kb": 64, 00:15:54.509 "state": "offline", 00:15:54.509 "raid_level": "concat", 00:15:54.509 "superblock": false, 00:15:54.509 "num_base_bdevs": 2, 00:15:54.509 "num_base_bdevs_discovered": 1, 00:15:54.509 "num_base_bdevs_operational": 1, 00:15:54.509 "base_bdevs_list": [ 00:15:54.509 { 00:15:54.509 "name": null, 00:15:54.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.509 "is_configured": false, 00:15:54.509 "data_offset": 0, 00:15:54.509 "data_size": 65536 00:15:54.509 }, 00:15:54.509 { 00:15:54.509 "name": "BaseBdev2", 00:15:54.509 "uuid": "ba551e1c-60c5-4856-b0ed-b98a698017f9", 00:15:54.509 "is_configured": true, 00:15:54.509 "data_offset": 0, 00:15:54.509 "data_size": 65536 00:15:54.509 } 00:15:54.509 ] 00:15:54.509 }' 00:15:54.509 11:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.509 11:27:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.076 11:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:55.076 11:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:55.076 11:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.076 11:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:55.335 11:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:55.335 11:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:55.335 11:27:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:55.593 [2024-07-13 11:27:30.238417] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:55.593 [2024-07-13 11:27:30.238475] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:15:55.593 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:55.593 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:55.593 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.593 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 121883 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 121883 ']' 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 121883 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@953 -- # uname 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 121883 00:15:55.852 killing process with pid 121883 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 121883' 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 121883 00:15:55.852 11:27:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 121883 00:15:55.852 [2024-07-13 11:27:30.515673] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.852 [2024-07-13 11:27:30.515839] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.227 ************************************ 00:15:57.227 END TEST raid_state_function_test 00:15:57.227 ************************************ 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:57.227 00:15:57.227 real 0m11.808s 00:15:57.227 user 0m21.114s 00:15:57.227 sys 0m1.261s 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.227 11:27:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:57.227 11:27:31 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:57.227 11:27:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:57.227 11:27:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.227 11:27:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.227 ************************************ 00:15:57.227 START TEST raid_state_function_test_sb 00:15:57.227 ************************************ 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=122287 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 122287' 00:15:57.227 Process raid pid: 122287 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 122287 /var/tmp/spdk-raid.sock 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 122287 ']' 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:57.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.227 11:27:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.227 [2024-07-13 11:27:31.730524] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
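The superblock variant (raid_state_function_test_sb) that starts here differs from the previous run mainly in the -s flag handed to bdev_raid_create. A minimal sketch of that contrast, reusing the same socket and rpc.py wrapper as in the earlier sketch (paths again taken from this log, jq assumed available):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# With -s each member carries an on-disk raid superblock, so part of every
# 65536-block malloc bdev is reserved for metadata: the JSON later in this
# trace reports data_offset 2048 and data_size 63488 per member instead of
# 0 and 65536, and the assembled raid advertises 126976 blocks.
$RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2

# Confirm the per-member offsets once the raid is online.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .base_bdevs_list[] | "\(.name) \(.data_offset) \(.data_size)"'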
00:15:57.227 [2024-07-13 11:27:31.730755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.227 [2024-07-13 11:27:31.906517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.486 [2024-07-13 11:27:32.162927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.745 [2024-07-13 11:27:32.356370] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.004 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:58.004 11:27:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:58.004 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:58.287 [2024-07-13 11:27:32.796817] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.287 [2024-07-13 11:27:32.796914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.287 [2024-07-13 11:27:32.796928] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.287 [2024-07-13 11:27:32.796957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.287 11:27:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.550 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:58.550 "name": "Existed_Raid", 00:15:58.550 "uuid": "11301187-4a46-4e7f-a5ca-9c8f55d934f3", 00:15:58.550 "strip_size_kb": 64, 00:15:58.550 "state": "configuring", 00:15:58.550 "raid_level": "concat", 00:15:58.550 "superblock": true, 00:15:58.550 "num_base_bdevs": 2, 00:15:58.550 "num_base_bdevs_discovered": 0, 00:15:58.550 
"num_base_bdevs_operational": 2, 00:15:58.550 "base_bdevs_list": [ 00:15:58.550 { 00:15:58.550 "name": "BaseBdev1", 00:15:58.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.550 "is_configured": false, 00:15:58.550 "data_offset": 0, 00:15:58.550 "data_size": 0 00:15:58.550 }, 00:15:58.550 { 00:15:58.550 "name": "BaseBdev2", 00:15:58.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.550 "is_configured": false, 00:15:58.550 "data_offset": 0, 00:15:58.550 "data_size": 0 00:15:58.550 } 00:15:58.550 ] 00:15:58.550 }' 00:15:58.550 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:58.550 11:27:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.117 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.376 [2024-07-13 11:27:33.893909] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.376 [2024-07-13 11:27:33.893951] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:59.376 11:27:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:59.635 [2024-07-13 11:27:34.157976] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.635 [2024-07-13 11:27:34.158032] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.635 [2024-07-13 11:27:34.158044] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.635 [2024-07-13 11:27:34.158069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.635 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:59.893 [2024-07-13 11:27:34.388476] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.893 BaseBdev1 00:15:59.893 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:59.893 11:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:59.893 11:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:59.893 11:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:59.893 11:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:59.893 11:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:59.893 11:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:59.893 11:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.152 [ 00:16:00.152 { 00:16:00.152 "name": "BaseBdev1", 00:16:00.152 "aliases": [ 00:16:00.152 "7a297999-e78b-4cc3-9eff-eb88253a1081" 
00:16:00.152 ], 00:16:00.152 "product_name": "Malloc disk", 00:16:00.152 "block_size": 512, 00:16:00.152 "num_blocks": 65536, 00:16:00.152 "uuid": "7a297999-e78b-4cc3-9eff-eb88253a1081", 00:16:00.152 "assigned_rate_limits": { 00:16:00.152 "rw_ios_per_sec": 0, 00:16:00.152 "rw_mbytes_per_sec": 0, 00:16:00.152 "r_mbytes_per_sec": 0, 00:16:00.152 "w_mbytes_per_sec": 0 00:16:00.152 }, 00:16:00.152 "claimed": true, 00:16:00.152 "claim_type": "exclusive_write", 00:16:00.152 "zoned": false, 00:16:00.152 "supported_io_types": { 00:16:00.152 "read": true, 00:16:00.152 "write": true, 00:16:00.152 "unmap": true, 00:16:00.152 "flush": true, 00:16:00.152 "reset": true, 00:16:00.152 "nvme_admin": false, 00:16:00.152 "nvme_io": false, 00:16:00.152 "nvme_io_md": false, 00:16:00.152 "write_zeroes": true, 00:16:00.152 "zcopy": true, 00:16:00.152 "get_zone_info": false, 00:16:00.152 "zone_management": false, 00:16:00.152 "zone_append": false, 00:16:00.152 "compare": false, 00:16:00.152 "compare_and_write": false, 00:16:00.152 "abort": true, 00:16:00.152 "seek_hole": false, 00:16:00.152 "seek_data": false, 00:16:00.152 "copy": true, 00:16:00.152 "nvme_iov_md": false 00:16:00.152 }, 00:16:00.152 "memory_domains": [ 00:16:00.152 { 00:16:00.152 "dma_device_id": "system", 00:16:00.152 "dma_device_type": 1 00:16:00.152 }, 00:16:00.152 { 00:16:00.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.152 "dma_device_type": 2 00:16:00.152 } 00:16:00.152 ], 00:16:00.152 "driver_specific": {} 00:16:00.152 } 00:16:00.152 ] 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.152 11:27:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.410 11:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.410 "name": "Existed_Raid", 00:16:00.410 "uuid": "b6aa2acb-9326-45b6-80e5-1dcf6da7b08f", 00:16:00.410 "strip_size_kb": 64, 00:16:00.410 "state": "configuring", 00:16:00.410 "raid_level": "concat", 00:16:00.410 "superblock": true, 00:16:00.410 "num_base_bdevs": 2, 00:16:00.410 
"num_base_bdevs_discovered": 1, 00:16:00.410 "num_base_bdevs_operational": 2, 00:16:00.410 "base_bdevs_list": [ 00:16:00.410 { 00:16:00.410 "name": "BaseBdev1", 00:16:00.410 "uuid": "7a297999-e78b-4cc3-9eff-eb88253a1081", 00:16:00.410 "is_configured": true, 00:16:00.410 "data_offset": 2048, 00:16:00.410 "data_size": 63488 00:16:00.410 }, 00:16:00.410 { 00:16:00.410 "name": "BaseBdev2", 00:16:00.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.410 "is_configured": false, 00:16:00.410 "data_offset": 0, 00:16:00.410 "data_size": 0 00:16:00.410 } 00:16:00.410 ] 00:16:00.410 }' 00:16:00.410 11:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.410 11:27:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.345 11:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:01.345 [2024-07-13 11:27:35.904791] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.345 [2024-07-13 11:27:35.904827] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:16:01.345 11:27:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:01.604 [2024-07-13 11:27:36.092874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.604 [2024-07-13 11:27:36.094934] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.604 [2024-07-13 11:27:36.095004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.604 "name": "Existed_Raid", 00:16:01.604 "uuid": "6f14d96a-ec5b-4f5e-9fcf-848eb54941a4", 00:16:01.604 "strip_size_kb": 64, 00:16:01.604 "state": "configuring", 00:16:01.604 "raid_level": "concat", 00:16:01.604 "superblock": true, 00:16:01.604 "num_base_bdevs": 2, 00:16:01.604 "num_base_bdevs_discovered": 1, 00:16:01.604 "num_base_bdevs_operational": 2, 00:16:01.604 "base_bdevs_list": [ 00:16:01.604 { 00:16:01.604 "name": "BaseBdev1", 00:16:01.604 "uuid": "7a297999-e78b-4cc3-9eff-eb88253a1081", 00:16:01.604 "is_configured": true, 00:16:01.604 "data_offset": 2048, 00:16:01.604 "data_size": 63488 00:16:01.604 }, 00:16:01.604 { 00:16:01.604 "name": "BaseBdev2", 00:16:01.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.604 "is_configured": false, 00:16:01.604 "data_offset": 0, 00:16:01.604 "data_size": 0 00:16:01.604 } 00:16:01.604 ] 00:16:01.604 }' 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.604 11:27:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.537 11:27:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:02.537 [2024-07-13 11:27:37.169576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.537 [2024-07-13 11:27:37.169808] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:02.538 [2024-07-13 11:27:37.169822] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:02.538 [2024-07-13 11:27:37.169932] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:02.538 BaseBdev2 00:16:02.538 [2024-07-13 11:27:37.170258] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:02.538 [2024-07-13 11:27:37.170282] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:02.538 [2024-07-13 11:27:37.170413] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.538 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:02.538 11:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:02.538 11:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:02.538 11:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:02.538 11:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:02.538 11:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:02.538 11:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.795 11:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.053 [ 00:16:03.053 { 00:16:03.053 "name": "BaseBdev2", 00:16:03.053 
"aliases": [ 00:16:03.053 "14d8257b-eca0-40b1-8c88-6e995d5a8f0e" 00:16:03.053 ], 00:16:03.053 "product_name": "Malloc disk", 00:16:03.053 "block_size": 512, 00:16:03.053 "num_blocks": 65536, 00:16:03.053 "uuid": "14d8257b-eca0-40b1-8c88-6e995d5a8f0e", 00:16:03.053 "assigned_rate_limits": { 00:16:03.053 "rw_ios_per_sec": 0, 00:16:03.053 "rw_mbytes_per_sec": 0, 00:16:03.053 "r_mbytes_per_sec": 0, 00:16:03.053 "w_mbytes_per_sec": 0 00:16:03.053 }, 00:16:03.053 "claimed": true, 00:16:03.053 "claim_type": "exclusive_write", 00:16:03.053 "zoned": false, 00:16:03.053 "supported_io_types": { 00:16:03.053 "read": true, 00:16:03.053 "write": true, 00:16:03.053 "unmap": true, 00:16:03.053 "flush": true, 00:16:03.053 "reset": true, 00:16:03.054 "nvme_admin": false, 00:16:03.054 "nvme_io": false, 00:16:03.054 "nvme_io_md": false, 00:16:03.054 "write_zeroes": true, 00:16:03.054 "zcopy": true, 00:16:03.054 "get_zone_info": false, 00:16:03.054 "zone_management": false, 00:16:03.054 "zone_append": false, 00:16:03.054 "compare": false, 00:16:03.054 "compare_and_write": false, 00:16:03.054 "abort": true, 00:16:03.054 "seek_hole": false, 00:16:03.054 "seek_data": false, 00:16:03.054 "copy": true, 00:16:03.054 "nvme_iov_md": false 00:16:03.054 }, 00:16:03.054 "memory_domains": [ 00:16:03.054 { 00:16:03.054 "dma_device_id": "system", 00:16:03.054 "dma_device_type": 1 00:16:03.054 }, 00:16:03.054 { 00:16:03.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.054 "dma_device_type": 2 00:16:03.054 } 00:16:03.054 ], 00:16:03.054 "driver_specific": {} 00:16:03.054 } 00:16:03.054 ] 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.054 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.312 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.312 "name": "Existed_Raid", 
00:16:03.312 "uuid": "6f14d96a-ec5b-4f5e-9fcf-848eb54941a4", 00:16:03.312 "strip_size_kb": 64, 00:16:03.312 "state": "online", 00:16:03.312 "raid_level": "concat", 00:16:03.312 "superblock": true, 00:16:03.312 "num_base_bdevs": 2, 00:16:03.312 "num_base_bdevs_discovered": 2, 00:16:03.312 "num_base_bdevs_operational": 2, 00:16:03.312 "base_bdevs_list": [ 00:16:03.312 { 00:16:03.312 "name": "BaseBdev1", 00:16:03.312 "uuid": "7a297999-e78b-4cc3-9eff-eb88253a1081", 00:16:03.312 "is_configured": true, 00:16:03.312 "data_offset": 2048, 00:16:03.312 "data_size": 63488 00:16:03.312 }, 00:16:03.312 { 00:16:03.312 "name": "BaseBdev2", 00:16:03.312 "uuid": "14d8257b-eca0-40b1-8c88-6e995d5a8f0e", 00:16:03.312 "is_configured": true, 00:16:03.312 "data_offset": 2048, 00:16:03.312 "data_size": 63488 00:16:03.312 } 00:16:03.312 ] 00:16:03.312 }' 00:16:03.312 11:27:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.312 11:27:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.877 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:03.877 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:03.877 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:03.877 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:03.877 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:03.877 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:03.877 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:03.877 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:04.135 [2024-07-13 11:27:38.718109] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.135 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:04.135 "name": "Existed_Raid", 00:16:04.135 "aliases": [ 00:16:04.135 "6f14d96a-ec5b-4f5e-9fcf-848eb54941a4" 00:16:04.135 ], 00:16:04.135 "product_name": "Raid Volume", 00:16:04.135 "block_size": 512, 00:16:04.135 "num_blocks": 126976, 00:16:04.135 "uuid": "6f14d96a-ec5b-4f5e-9fcf-848eb54941a4", 00:16:04.135 "assigned_rate_limits": { 00:16:04.135 "rw_ios_per_sec": 0, 00:16:04.135 "rw_mbytes_per_sec": 0, 00:16:04.135 "r_mbytes_per_sec": 0, 00:16:04.135 "w_mbytes_per_sec": 0 00:16:04.135 }, 00:16:04.135 "claimed": false, 00:16:04.135 "zoned": false, 00:16:04.135 "supported_io_types": { 00:16:04.135 "read": true, 00:16:04.135 "write": true, 00:16:04.135 "unmap": true, 00:16:04.135 "flush": true, 00:16:04.135 "reset": true, 00:16:04.135 "nvme_admin": false, 00:16:04.135 "nvme_io": false, 00:16:04.135 "nvme_io_md": false, 00:16:04.135 "write_zeroes": true, 00:16:04.135 "zcopy": false, 00:16:04.135 "get_zone_info": false, 00:16:04.135 "zone_management": false, 00:16:04.135 "zone_append": false, 00:16:04.135 "compare": false, 00:16:04.135 "compare_and_write": false, 00:16:04.135 "abort": false, 00:16:04.135 "seek_hole": false, 00:16:04.135 "seek_data": false, 00:16:04.135 "copy": false, 00:16:04.135 "nvme_iov_md": false 00:16:04.135 }, 00:16:04.135 "memory_domains": [ 
00:16:04.135 { 00:16:04.135 "dma_device_id": "system", 00:16:04.135 "dma_device_type": 1 00:16:04.135 }, 00:16:04.135 { 00:16:04.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.135 "dma_device_type": 2 00:16:04.135 }, 00:16:04.135 { 00:16:04.135 "dma_device_id": "system", 00:16:04.135 "dma_device_type": 1 00:16:04.135 }, 00:16:04.135 { 00:16:04.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.135 "dma_device_type": 2 00:16:04.135 } 00:16:04.135 ], 00:16:04.135 "driver_specific": { 00:16:04.135 "raid": { 00:16:04.135 "uuid": "6f14d96a-ec5b-4f5e-9fcf-848eb54941a4", 00:16:04.135 "strip_size_kb": 64, 00:16:04.135 "state": "online", 00:16:04.135 "raid_level": "concat", 00:16:04.135 "superblock": true, 00:16:04.135 "num_base_bdevs": 2, 00:16:04.135 "num_base_bdevs_discovered": 2, 00:16:04.135 "num_base_bdevs_operational": 2, 00:16:04.135 "base_bdevs_list": [ 00:16:04.135 { 00:16:04.135 "name": "BaseBdev1", 00:16:04.135 "uuid": "7a297999-e78b-4cc3-9eff-eb88253a1081", 00:16:04.135 "is_configured": true, 00:16:04.135 "data_offset": 2048, 00:16:04.135 "data_size": 63488 00:16:04.135 }, 00:16:04.135 { 00:16:04.135 "name": "BaseBdev2", 00:16:04.135 "uuid": "14d8257b-eca0-40b1-8c88-6e995d5a8f0e", 00:16:04.135 "is_configured": true, 00:16:04.135 "data_offset": 2048, 00:16:04.135 "data_size": 63488 00:16:04.135 } 00:16:04.135 ] 00:16:04.135 } 00:16:04.135 } 00:16:04.135 }' 00:16:04.135 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.135 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:04.135 BaseBdev2' 00:16:04.135 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:04.135 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:04.135 11:27:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:04.392 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:04.392 "name": "BaseBdev1", 00:16:04.392 "aliases": [ 00:16:04.392 "7a297999-e78b-4cc3-9eff-eb88253a1081" 00:16:04.392 ], 00:16:04.392 "product_name": "Malloc disk", 00:16:04.392 "block_size": 512, 00:16:04.392 "num_blocks": 65536, 00:16:04.392 "uuid": "7a297999-e78b-4cc3-9eff-eb88253a1081", 00:16:04.392 "assigned_rate_limits": { 00:16:04.392 "rw_ios_per_sec": 0, 00:16:04.392 "rw_mbytes_per_sec": 0, 00:16:04.392 "r_mbytes_per_sec": 0, 00:16:04.392 "w_mbytes_per_sec": 0 00:16:04.392 }, 00:16:04.392 "claimed": true, 00:16:04.392 "claim_type": "exclusive_write", 00:16:04.392 "zoned": false, 00:16:04.392 "supported_io_types": { 00:16:04.392 "read": true, 00:16:04.392 "write": true, 00:16:04.392 "unmap": true, 00:16:04.392 "flush": true, 00:16:04.392 "reset": true, 00:16:04.392 "nvme_admin": false, 00:16:04.392 "nvme_io": false, 00:16:04.392 "nvme_io_md": false, 00:16:04.392 "write_zeroes": true, 00:16:04.392 "zcopy": true, 00:16:04.392 "get_zone_info": false, 00:16:04.392 "zone_management": false, 00:16:04.392 "zone_append": false, 00:16:04.392 "compare": false, 00:16:04.392 "compare_and_write": false, 00:16:04.392 "abort": true, 00:16:04.392 "seek_hole": false, 00:16:04.392 "seek_data": false, 00:16:04.392 "copy": true, 00:16:04.392 "nvme_iov_md": false 00:16:04.392 }, 00:16:04.392 "memory_domains": [ 
00:16:04.392 { 00:16:04.392 "dma_device_id": "system", 00:16:04.392 "dma_device_type": 1 00:16:04.392 }, 00:16:04.392 { 00:16:04.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.392 "dma_device_type": 2 00:16:04.392 } 00:16:04.392 ], 00:16:04.392 "driver_specific": {} 00:16:04.392 }' 00:16:04.392 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.392 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.392 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:04.392 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.649 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.649 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:04.649 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.649 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.649 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:04.649 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.906 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:04.906 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:04.906 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:04.906 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:04.906 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:05.163 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:05.163 "name": "BaseBdev2", 00:16:05.163 "aliases": [ 00:16:05.163 "14d8257b-eca0-40b1-8c88-6e995d5a8f0e" 00:16:05.163 ], 00:16:05.163 "product_name": "Malloc disk", 00:16:05.163 "block_size": 512, 00:16:05.163 "num_blocks": 65536, 00:16:05.163 "uuid": "14d8257b-eca0-40b1-8c88-6e995d5a8f0e", 00:16:05.163 "assigned_rate_limits": { 00:16:05.163 "rw_ios_per_sec": 0, 00:16:05.163 "rw_mbytes_per_sec": 0, 00:16:05.163 "r_mbytes_per_sec": 0, 00:16:05.163 "w_mbytes_per_sec": 0 00:16:05.163 }, 00:16:05.163 "claimed": true, 00:16:05.163 "claim_type": "exclusive_write", 00:16:05.163 "zoned": false, 00:16:05.163 "supported_io_types": { 00:16:05.163 "read": true, 00:16:05.163 "write": true, 00:16:05.163 "unmap": true, 00:16:05.163 "flush": true, 00:16:05.163 "reset": true, 00:16:05.163 "nvme_admin": false, 00:16:05.163 "nvme_io": false, 00:16:05.163 "nvme_io_md": false, 00:16:05.163 "write_zeroes": true, 00:16:05.163 "zcopy": true, 00:16:05.163 "get_zone_info": false, 00:16:05.163 "zone_management": false, 00:16:05.163 "zone_append": false, 00:16:05.163 "compare": false, 00:16:05.163 "compare_and_write": false, 00:16:05.163 "abort": true, 00:16:05.163 "seek_hole": false, 00:16:05.163 "seek_data": false, 00:16:05.163 "copy": true, 00:16:05.163 "nvme_iov_md": false 00:16:05.163 }, 00:16:05.163 "memory_domains": [ 00:16:05.163 { 00:16:05.163 "dma_device_id": "system", 00:16:05.163 "dma_device_type": 1 00:16:05.163 }, 00:16:05.163 { 00:16:05.163 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:05.163 "dma_device_type": 2 00:16:05.163 } 00:16:05.163 ], 00:16:05.163 "driver_specific": {} 00:16:05.163 }' 00:16:05.163 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.163 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.163 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.163 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.163 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.420 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.420 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.420 11:27:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.420 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.420 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.420 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.420 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.420 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:05.677 [2024-07-13 11:27:40.330399] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.677 [2024-07-13 11:27:40.330435] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.677 [2024-07-13 11:27:40.330493] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.934 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.191 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.191 "name": "Existed_Raid", 00:16:06.191 "uuid": "6f14d96a-ec5b-4f5e-9fcf-848eb54941a4", 00:16:06.191 "strip_size_kb": 64, 00:16:06.191 "state": "offline", 00:16:06.191 "raid_level": "concat", 00:16:06.191 "superblock": true, 00:16:06.191 "num_base_bdevs": 2, 00:16:06.191 "num_base_bdevs_discovered": 1, 00:16:06.191 "num_base_bdevs_operational": 1, 00:16:06.191 "base_bdevs_list": [ 00:16:06.191 { 00:16:06.191 "name": null, 00:16:06.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.191 "is_configured": false, 00:16:06.191 "data_offset": 2048, 00:16:06.191 "data_size": 63488 00:16:06.191 }, 00:16:06.191 { 00:16:06.191 "name": "BaseBdev2", 00:16:06.191 "uuid": "14d8257b-eca0-40b1-8c88-6e995d5a8f0e", 00:16:06.191 "is_configured": true, 00:16:06.191 "data_offset": 2048, 00:16:06.191 "data_size": 63488 00:16:06.191 } 00:16:06.191 ] 00:16:06.191 }' 00:16:06.191 11:27:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.191 11:27:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.756 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:06.756 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:06.756 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.756 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:07.013 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:07.013 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:07.013 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:07.270 [2024-07-13 11:27:41.826355] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:07.270 [2024-07-13 11:27:41.826439] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:07.270 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:07.270 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:07.271 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:07.271 11:27:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 122287 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 122287 ']' 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 122287 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122287 00:16:07.528 killing process with pid 122287 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122287' 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 122287 00:16:07.528 11:27:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 122287 00:16:07.528 [2024-07-13 11:27:42.190676] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.528 [2024-07-13 11:27:42.190807] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.472 ************************************ 00:16:08.472 END TEST raid_state_function_test_sb 00:16:08.472 ************************************ 00:16:08.472 11:27:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:08.472 00:16:08.472 real 0m11.454s 00:16:08.472 user 0m20.437s 00:16:08.472 sys 0m1.362s 00:16:08.472 11:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:08.472 11:27:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.472 11:27:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:08.472 11:27:43 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:16:08.472 11:27:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:08.472 11:27:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:08.472 11:27:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.472 ************************************ 00:16:08.472 START TEST raid_superblock_test 00:16:08.472 ************************************ 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 
-- # local base_bdevs_pt 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=122682 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 122682 /var/tmp/spdk-raid.sock 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 122682 ']' 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.472 11:27:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.731 [2024-07-13 11:27:43.245860] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:08.731 [2024-07-13 11:27:43.246073] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122682 ] 00:16:08.731 [2024-07-13 11:27:43.421663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.989 [2024-07-13 11:27:43.667123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.247 [2024-07-13 11:27:43.853194] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:09.505 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:09.763 malloc1 00:16:09.763 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:10.023 [2024-07-13 11:27:44.664551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:10.023 [2024-07-13 11:27:44.664792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.023 [2024-07-13 11:27:44.664979] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:10.023 [2024-07-13 11:27:44.665145] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.023 [2024-07-13 11:27:44.667301] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.023 [2024-07-13 11:27:44.667480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:10.023 pt1 00:16:10.023 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:10.023 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:10.023 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:10.023 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:10.023 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:10.023 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:16:10.023 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:10.023 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:10.023 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:10.282 malloc2 00:16:10.282 11:27:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:10.540 [2024-07-13 11:27:45.101578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:10.540 [2024-07-13 11:27:45.101813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.540 [2024-07-13 11:27:45.101963] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:16:10.540 [2024-07-13 11:27:45.102087] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.540 [2024-07-13 11:27:45.104503] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.540 [2024-07-13 11:27:45.104714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:10.540 pt2 00:16:10.540 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:10.540 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:10.540 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:16:10.540 [2024-07-13 11:27:45.285657] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:10.540 [2024-07-13 11:27:45.287614] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:10.800 [2024-07-13 11:27:45.287933] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:16:10.800 [2024-07-13 11:27:45.288058] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:10.800 [2024-07-13 11:27:45.288216] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:10.800 [2024-07-13 11:27:45.288632] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:16:10.800 [2024-07-13 11:27:45.288747] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:16:10.800 [2024-07-13 11:27:45.288977] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.800 "name": "raid_bdev1", 00:16:10.800 "uuid": "d9793aa8-cb79-4235-a908-d27bcebbf966", 00:16:10.800 "strip_size_kb": 64, 00:16:10.800 "state": "online", 00:16:10.800 "raid_level": "concat", 00:16:10.800 "superblock": true, 00:16:10.800 "num_base_bdevs": 2, 00:16:10.800 "num_base_bdevs_discovered": 2, 00:16:10.800 "num_base_bdevs_operational": 2, 00:16:10.800 "base_bdevs_list": [ 00:16:10.800 { 00:16:10.800 "name": "pt1", 00:16:10.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:10.800 "is_configured": true, 00:16:10.800 "data_offset": 2048, 00:16:10.800 "data_size": 63488 00:16:10.800 }, 00:16:10.800 { 00:16:10.800 "name": "pt2", 00:16:10.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.800 "is_configured": true, 00:16:10.800 "data_offset": 2048, 00:16:10.800 "data_size": 63488 00:16:10.800 } 00:16:10.800 ] 00:16:10.800 }' 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.800 11:27:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:11.736 [2024-07-13 11:27:46.410120] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:11.736 "name": "raid_bdev1", 00:16:11.736 "aliases": [ 00:16:11.736 "d9793aa8-cb79-4235-a908-d27bcebbf966" 00:16:11.736 ], 00:16:11.736 "product_name": "Raid Volume", 00:16:11.736 "block_size": 512, 00:16:11.736 "num_blocks": 126976, 00:16:11.736 "uuid": "d9793aa8-cb79-4235-a908-d27bcebbf966", 00:16:11.736 "assigned_rate_limits": { 00:16:11.736 "rw_ios_per_sec": 0, 00:16:11.736 "rw_mbytes_per_sec": 0, 00:16:11.736 "r_mbytes_per_sec": 0, 00:16:11.736 "w_mbytes_per_sec": 0 00:16:11.736 }, 
00:16:11.736 "claimed": false, 00:16:11.736 "zoned": false, 00:16:11.736 "supported_io_types": { 00:16:11.736 "read": true, 00:16:11.736 "write": true, 00:16:11.736 "unmap": true, 00:16:11.736 "flush": true, 00:16:11.736 "reset": true, 00:16:11.736 "nvme_admin": false, 00:16:11.736 "nvme_io": false, 00:16:11.736 "nvme_io_md": false, 00:16:11.736 "write_zeroes": true, 00:16:11.736 "zcopy": false, 00:16:11.736 "get_zone_info": false, 00:16:11.736 "zone_management": false, 00:16:11.736 "zone_append": false, 00:16:11.736 "compare": false, 00:16:11.736 "compare_and_write": false, 00:16:11.736 "abort": false, 00:16:11.736 "seek_hole": false, 00:16:11.736 "seek_data": false, 00:16:11.736 "copy": false, 00:16:11.736 "nvme_iov_md": false 00:16:11.736 }, 00:16:11.736 "memory_domains": [ 00:16:11.736 { 00:16:11.736 "dma_device_id": "system", 00:16:11.736 "dma_device_type": 1 00:16:11.736 }, 00:16:11.736 { 00:16:11.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.736 "dma_device_type": 2 00:16:11.736 }, 00:16:11.736 { 00:16:11.736 "dma_device_id": "system", 00:16:11.736 "dma_device_type": 1 00:16:11.736 }, 00:16:11.736 { 00:16:11.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.736 "dma_device_type": 2 00:16:11.736 } 00:16:11.736 ], 00:16:11.736 "driver_specific": { 00:16:11.736 "raid": { 00:16:11.736 "uuid": "d9793aa8-cb79-4235-a908-d27bcebbf966", 00:16:11.736 "strip_size_kb": 64, 00:16:11.736 "state": "online", 00:16:11.736 "raid_level": "concat", 00:16:11.736 "superblock": true, 00:16:11.736 "num_base_bdevs": 2, 00:16:11.736 "num_base_bdevs_discovered": 2, 00:16:11.736 "num_base_bdevs_operational": 2, 00:16:11.736 "base_bdevs_list": [ 00:16:11.736 { 00:16:11.736 "name": "pt1", 00:16:11.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:11.736 "is_configured": true, 00:16:11.736 "data_offset": 2048, 00:16:11.736 "data_size": 63488 00:16:11.736 }, 00:16:11.736 { 00:16:11.736 "name": "pt2", 00:16:11.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:11.736 "is_configured": true, 00:16:11.736 "data_offset": 2048, 00:16:11.736 "data_size": 63488 00:16:11.736 } 00:16:11.736 ] 00:16:11.736 } 00:16:11.736 } 00:16:11.736 }' 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:11.736 pt2' 00:16:11.736 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.993 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:11.993 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.993 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:11.993 "name": "pt1", 00:16:11.993 "aliases": [ 00:16:11.993 "00000000-0000-0000-0000-000000000001" 00:16:11.993 ], 00:16:11.993 "product_name": "passthru", 00:16:11.993 "block_size": 512, 00:16:11.993 "num_blocks": 65536, 00:16:11.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:11.993 "assigned_rate_limits": { 00:16:11.993 "rw_ios_per_sec": 0, 00:16:11.993 "rw_mbytes_per_sec": 0, 00:16:11.993 "r_mbytes_per_sec": 0, 00:16:11.993 "w_mbytes_per_sec": 0 00:16:11.993 }, 00:16:11.993 "claimed": true, 00:16:11.993 "claim_type": "exclusive_write", 00:16:11.993 "zoned": false, 00:16:11.993 
"supported_io_types": { 00:16:11.993 "read": true, 00:16:11.993 "write": true, 00:16:11.993 "unmap": true, 00:16:11.993 "flush": true, 00:16:11.993 "reset": true, 00:16:11.993 "nvme_admin": false, 00:16:11.993 "nvme_io": false, 00:16:11.993 "nvme_io_md": false, 00:16:11.993 "write_zeroes": true, 00:16:11.993 "zcopy": true, 00:16:11.993 "get_zone_info": false, 00:16:11.993 "zone_management": false, 00:16:11.993 "zone_append": false, 00:16:11.993 "compare": false, 00:16:11.993 "compare_and_write": false, 00:16:11.993 "abort": true, 00:16:11.993 "seek_hole": false, 00:16:11.993 "seek_data": false, 00:16:11.993 "copy": true, 00:16:11.993 "nvme_iov_md": false 00:16:11.993 }, 00:16:11.993 "memory_domains": [ 00:16:11.993 { 00:16:11.993 "dma_device_id": "system", 00:16:11.993 "dma_device_type": 1 00:16:11.993 }, 00:16:11.993 { 00:16:11.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.993 "dma_device_type": 2 00:16:11.993 } 00:16:11.993 ], 00:16:11.993 "driver_specific": { 00:16:11.993 "passthru": { 00:16:11.993 "name": "pt1", 00:16:11.993 "base_bdev_name": "malloc1" 00:16:11.993 } 00:16:11.993 } 00:16:11.993 }' 00:16:11.993 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.257 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.257 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.257 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.257 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.257 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.257 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.257 11:27:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.541 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.541 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.541 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.541 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.541 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.541 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:12.541 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.819 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.819 "name": "pt2", 00:16:12.819 "aliases": [ 00:16:12.819 "00000000-0000-0000-0000-000000000002" 00:16:12.819 ], 00:16:12.819 "product_name": "passthru", 00:16:12.819 "block_size": 512, 00:16:12.819 "num_blocks": 65536, 00:16:12.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:12.819 "assigned_rate_limits": { 00:16:12.819 "rw_ios_per_sec": 0, 00:16:12.819 "rw_mbytes_per_sec": 0, 00:16:12.819 "r_mbytes_per_sec": 0, 00:16:12.819 "w_mbytes_per_sec": 0 00:16:12.819 }, 00:16:12.819 "claimed": true, 00:16:12.819 "claim_type": "exclusive_write", 00:16:12.819 "zoned": false, 00:16:12.819 "supported_io_types": { 00:16:12.819 "read": true, 00:16:12.819 "write": true, 00:16:12.819 "unmap": true, 00:16:12.819 "flush": true, 00:16:12.819 
"reset": true, 00:16:12.819 "nvme_admin": false, 00:16:12.819 "nvme_io": false, 00:16:12.819 "nvme_io_md": false, 00:16:12.819 "write_zeroes": true, 00:16:12.819 "zcopy": true, 00:16:12.819 "get_zone_info": false, 00:16:12.819 "zone_management": false, 00:16:12.819 "zone_append": false, 00:16:12.819 "compare": false, 00:16:12.819 "compare_and_write": false, 00:16:12.819 "abort": true, 00:16:12.819 "seek_hole": false, 00:16:12.819 "seek_data": false, 00:16:12.819 "copy": true, 00:16:12.819 "nvme_iov_md": false 00:16:12.819 }, 00:16:12.819 "memory_domains": [ 00:16:12.819 { 00:16:12.819 "dma_device_id": "system", 00:16:12.819 "dma_device_type": 1 00:16:12.819 }, 00:16:12.819 { 00:16:12.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.819 "dma_device_type": 2 00:16:12.819 } 00:16:12.819 ], 00:16:12.819 "driver_specific": { 00:16:12.819 "passthru": { 00:16:12.819 "name": "pt2", 00:16:12.819 "base_bdev_name": "malloc2" 00:16:12.819 } 00:16:12.819 } 00:16:12.819 }' 00:16:12.819 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.819 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.819 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.819 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.087 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.087 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:13.087 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.087 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.087 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:13.087 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.087 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.345 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:13.345 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:13.345 11:27:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:13.603 [2024-07-13 11:27:48.102587] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:13.603 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d9793aa8-cb79-4235-a908-d27bcebbf966 00:16:13.603 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d9793aa8-cb79-4235-a908-d27bcebbf966 ']' 00:16:13.603 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:13.603 [2024-07-13 11:27:48.290406] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.603 [2024-07-13 11:27:48.290539] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.603 [2024-07-13 11:27:48.290754] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.603 [2024-07-13 11:27:48.290917] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:13.603 [2024-07-13 11:27:48.291031] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:16:13.603 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.603 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:13.862 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:13.862 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:13.862 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:13.862 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:14.121 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:14.121 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:14.380 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:14.380 11:27:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:14.380 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:16:14.638 [2024-07-13 11:27:49.334624] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:14.638 [2024-07-13 11:27:49.336600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:14.638 [2024-07-13 11:27:49.336675] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:14.638 [2024-07-13 11:27:49.336786] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:14.638 [2024-07-13 11:27:49.336829] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:14.638 [2024-07-13 11:27:49.336856] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:16:14.638 request: 00:16:14.638 { 00:16:14.638 "name": "raid_bdev1", 00:16:14.638 "raid_level": "concat", 00:16:14.638 "base_bdevs": [ 00:16:14.638 "malloc1", 00:16:14.638 "malloc2" 00:16:14.638 ], 00:16:14.638 "strip_size_kb": 64, 00:16:14.638 "superblock": false, 00:16:14.638 "method": "bdev_raid_create", 00:16:14.638 "req_id": 1 00:16:14.638 } 00:16:14.638 Got JSON-RPC error response 00:16:14.638 response: 00:16:14.638 { 00:16:14.638 "code": -17, 00:16:14.638 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:14.638 } 00:16:14.638 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:14.638 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:14.638 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:14.638 11:27:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:14.638 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.638 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:14.897 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:14.897 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:14.897 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:15.155 [2024-07-13 11:27:49.730710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:15.155 [2024-07-13 11:27:49.730796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.155 [2024-07-13 11:27:49.730828] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:15.155 [2024-07-13 11:27:49.730865] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.155 [2024-07-13 11:27:49.733235] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.155 [2024-07-13 11:27:49.733301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:15.155 [2024-07-13 11:27:49.733412] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:15.155 [2024-07-13 11:27:49.733463] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:15.155 pt1 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.155 11:27:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.414 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.414 "name": "raid_bdev1", 00:16:15.414 "uuid": "d9793aa8-cb79-4235-a908-d27bcebbf966", 00:16:15.414 "strip_size_kb": 64, 00:16:15.414 "state": "configuring", 00:16:15.414 "raid_level": "concat", 00:16:15.414 "superblock": true, 00:16:15.414 "num_base_bdevs": 2, 00:16:15.414 "num_base_bdevs_discovered": 1, 00:16:15.414 "num_base_bdevs_operational": 2, 00:16:15.414 "base_bdevs_list": [ 00:16:15.414 { 00:16:15.414 "name": "pt1", 00:16:15.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:15.414 "is_configured": true, 00:16:15.414 "data_offset": 2048, 00:16:15.414 "data_size": 63488 00:16:15.414 }, 00:16:15.414 { 00:16:15.414 "name": null, 00:16:15.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:15.414 "is_configured": false, 00:16:15.414 "data_offset": 2048, 00:16:15.414 "data_size": 63488 00:16:15.414 } 00:16:15.414 ] 00:16:15.414 }' 00:16:15.414 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.414 11:27:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.982 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:15.982 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:15.982 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:15.982 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:16.240 [2024-07-13 11:27:50.858938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:16.240 [2024-07-13 11:27:50.859008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.240 [2024-07-13 11:27:50.859037] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:16.240 [2024-07-13 11:27:50.859060] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.240 [2024-07-13 
11:27:50.859519] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.240 [2024-07-13 11:27:50.859572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:16.240 [2024-07-13 11:27:50.859658] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:16.240 [2024-07-13 11:27:50.859689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.240 [2024-07-13 11:27:50.859805] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:16:16.240 [2024-07-13 11:27:50.859825] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:16.240 [2024-07-13 11:27:50.859923] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:16.240 [2024-07-13 11:27:50.860207] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:16:16.240 [2024-07-13 11:27:50.860229] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:16:16.240 [2024-07-13 11:27:50.860352] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.240 pt2 00:16:16.240 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:16.240 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:16.240 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:16.240 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:16.240 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.240 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:16.240 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:16.241 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:16.241 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.241 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.241 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.241 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.241 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.241 11:27:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.499 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.499 "name": "raid_bdev1", 00:16:16.499 "uuid": "d9793aa8-cb79-4235-a908-d27bcebbf966", 00:16:16.499 "strip_size_kb": 64, 00:16:16.499 "state": "online", 00:16:16.499 "raid_level": "concat", 00:16:16.499 "superblock": true, 00:16:16.499 "num_base_bdevs": 2, 00:16:16.499 "num_base_bdevs_discovered": 2, 00:16:16.499 "num_base_bdevs_operational": 2, 00:16:16.499 "base_bdevs_list": [ 00:16:16.499 { 00:16:16.499 "name": "pt1", 00:16:16.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:16.499 "is_configured": true, 00:16:16.499 "data_offset": 2048, 00:16:16.499 
"data_size": 63488 00:16:16.499 }, 00:16:16.499 { 00:16:16.499 "name": "pt2", 00:16:16.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:16.499 "is_configured": true, 00:16:16.499 "data_offset": 2048, 00:16:16.499 "data_size": 63488 00:16:16.499 } 00:16:16.499 ] 00:16:16.499 }' 00:16:16.499 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.499 11:27:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.066 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:17.066 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:17.066 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:17.066 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:17.066 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:17.066 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:17.066 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:17.066 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:17.326 [2024-07-13 11:27:51.955421] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.326 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:17.326 "name": "raid_bdev1", 00:16:17.326 "aliases": [ 00:16:17.326 "d9793aa8-cb79-4235-a908-d27bcebbf966" 00:16:17.326 ], 00:16:17.326 "product_name": "Raid Volume", 00:16:17.326 "block_size": 512, 00:16:17.326 "num_blocks": 126976, 00:16:17.326 "uuid": "d9793aa8-cb79-4235-a908-d27bcebbf966", 00:16:17.326 "assigned_rate_limits": { 00:16:17.326 "rw_ios_per_sec": 0, 00:16:17.326 "rw_mbytes_per_sec": 0, 00:16:17.326 "r_mbytes_per_sec": 0, 00:16:17.326 "w_mbytes_per_sec": 0 00:16:17.326 }, 00:16:17.326 "claimed": false, 00:16:17.326 "zoned": false, 00:16:17.326 "supported_io_types": { 00:16:17.326 "read": true, 00:16:17.326 "write": true, 00:16:17.326 "unmap": true, 00:16:17.326 "flush": true, 00:16:17.326 "reset": true, 00:16:17.326 "nvme_admin": false, 00:16:17.326 "nvme_io": false, 00:16:17.326 "nvme_io_md": false, 00:16:17.326 "write_zeroes": true, 00:16:17.326 "zcopy": false, 00:16:17.326 "get_zone_info": false, 00:16:17.326 "zone_management": false, 00:16:17.326 "zone_append": false, 00:16:17.326 "compare": false, 00:16:17.326 "compare_and_write": false, 00:16:17.326 "abort": false, 00:16:17.326 "seek_hole": false, 00:16:17.326 "seek_data": false, 00:16:17.326 "copy": false, 00:16:17.326 "nvme_iov_md": false 00:16:17.326 }, 00:16:17.326 "memory_domains": [ 00:16:17.326 { 00:16:17.326 "dma_device_id": "system", 00:16:17.326 "dma_device_type": 1 00:16:17.326 }, 00:16:17.326 { 00:16:17.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.326 "dma_device_type": 2 00:16:17.326 }, 00:16:17.326 { 00:16:17.326 "dma_device_id": "system", 00:16:17.326 "dma_device_type": 1 00:16:17.326 }, 00:16:17.326 { 00:16:17.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.326 "dma_device_type": 2 00:16:17.326 } 00:16:17.326 ], 00:16:17.326 "driver_specific": { 00:16:17.326 "raid": { 00:16:17.326 "uuid": "d9793aa8-cb79-4235-a908-d27bcebbf966", 00:16:17.326 "strip_size_kb": 64, 00:16:17.326 "state": 
"online", 00:16:17.326 "raid_level": "concat", 00:16:17.326 "superblock": true, 00:16:17.326 "num_base_bdevs": 2, 00:16:17.326 "num_base_bdevs_discovered": 2, 00:16:17.326 "num_base_bdevs_operational": 2, 00:16:17.326 "base_bdevs_list": [ 00:16:17.326 { 00:16:17.326 "name": "pt1", 00:16:17.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.326 "is_configured": true, 00:16:17.326 "data_offset": 2048, 00:16:17.326 "data_size": 63488 00:16:17.326 }, 00:16:17.326 { 00:16:17.326 "name": "pt2", 00:16:17.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.326 "is_configured": true, 00:16:17.326 "data_offset": 2048, 00:16:17.326 "data_size": 63488 00:16:17.326 } 00:16:17.326 ] 00:16:17.326 } 00:16:17.326 } 00:16:17.326 }' 00:16:17.326 11:27:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.326 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:17.326 pt2' 00:16:17.326 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:17.326 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:17.326 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:17.585 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:17.585 "name": "pt1", 00:16:17.585 "aliases": [ 00:16:17.585 "00000000-0000-0000-0000-000000000001" 00:16:17.585 ], 00:16:17.585 "product_name": "passthru", 00:16:17.585 "block_size": 512, 00:16:17.585 "num_blocks": 65536, 00:16:17.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.585 "assigned_rate_limits": { 00:16:17.585 "rw_ios_per_sec": 0, 00:16:17.585 "rw_mbytes_per_sec": 0, 00:16:17.585 "r_mbytes_per_sec": 0, 00:16:17.585 "w_mbytes_per_sec": 0 00:16:17.585 }, 00:16:17.585 "claimed": true, 00:16:17.585 "claim_type": "exclusive_write", 00:16:17.585 "zoned": false, 00:16:17.585 "supported_io_types": { 00:16:17.585 "read": true, 00:16:17.585 "write": true, 00:16:17.585 "unmap": true, 00:16:17.585 "flush": true, 00:16:17.585 "reset": true, 00:16:17.585 "nvme_admin": false, 00:16:17.585 "nvme_io": false, 00:16:17.585 "nvme_io_md": false, 00:16:17.585 "write_zeroes": true, 00:16:17.585 "zcopy": true, 00:16:17.585 "get_zone_info": false, 00:16:17.585 "zone_management": false, 00:16:17.585 "zone_append": false, 00:16:17.585 "compare": false, 00:16:17.585 "compare_and_write": false, 00:16:17.585 "abort": true, 00:16:17.585 "seek_hole": false, 00:16:17.585 "seek_data": false, 00:16:17.585 "copy": true, 00:16:17.585 "nvme_iov_md": false 00:16:17.585 }, 00:16:17.585 "memory_domains": [ 00:16:17.585 { 00:16:17.585 "dma_device_id": "system", 00:16:17.585 "dma_device_type": 1 00:16:17.585 }, 00:16:17.585 { 00:16:17.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.585 "dma_device_type": 2 00:16:17.585 } 00:16:17.585 ], 00:16:17.585 "driver_specific": { 00:16:17.585 "passthru": { 00:16:17.585 "name": "pt1", 00:16:17.585 "base_bdev_name": "malloc1" 00:16:17.585 } 00:16:17.585 } 00:16:17.585 }' 00:16:17.585 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.845 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.845 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:16:17.845 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.845 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.845 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:17.845 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.845 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.845 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:17.845 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.104 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.104 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.104 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.104 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:18.104 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.374 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.374 "name": "pt2", 00:16:18.374 "aliases": [ 00:16:18.374 "00000000-0000-0000-0000-000000000002" 00:16:18.374 ], 00:16:18.374 "product_name": "passthru", 00:16:18.374 "block_size": 512, 00:16:18.374 "num_blocks": 65536, 00:16:18.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.374 "assigned_rate_limits": { 00:16:18.374 "rw_ios_per_sec": 0, 00:16:18.374 "rw_mbytes_per_sec": 0, 00:16:18.374 "r_mbytes_per_sec": 0, 00:16:18.374 "w_mbytes_per_sec": 0 00:16:18.374 }, 00:16:18.374 "claimed": true, 00:16:18.374 "claim_type": "exclusive_write", 00:16:18.374 "zoned": false, 00:16:18.374 "supported_io_types": { 00:16:18.374 "read": true, 00:16:18.374 "write": true, 00:16:18.374 "unmap": true, 00:16:18.374 "flush": true, 00:16:18.374 "reset": true, 00:16:18.374 "nvme_admin": false, 00:16:18.374 "nvme_io": false, 00:16:18.374 "nvme_io_md": false, 00:16:18.374 "write_zeroes": true, 00:16:18.374 "zcopy": true, 00:16:18.374 "get_zone_info": false, 00:16:18.374 "zone_management": false, 00:16:18.374 "zone_append": false, 00:16:18.374 "compare": false, 00:16:18.374 "compare_and_write": false, 00:16:18.374 "abort": true, 00:16:18.374 "seek_hole": false, 00:16:18.374 "seek_data": false, 00:16:18.374 "copy": true, 00:16:18.374 "nvme_iov_md": false 00:16:18.374 }, 00:16:18.374 "memory_domains": [ 00:16:18.374 { 00:16:18.374 "dma_device_id": "system", 00:16:18.374 "dma_device_type": 1 00:16:18.374 }, 00:16:18.374 { 00:16:18.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.374 "dma_device_type": 2 00:16:18.374 } 00:16:18.374 ], 00:16:18.374 "driver_specific": { 00:16:18.374 "passthru": { 00:16:18.374 "name": "pt2", 00:16:18.374 "base_bdev_name": "malloc2" 00:16:18.374 } 00:16:18.374 } 00:16:18.374 }' 00:16:18.374 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.374 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.374 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.374 11:27:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.374 11:27:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.374 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.374 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.635 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.635 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.635 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.635 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.635 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.635 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:18.635 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:18.893 [2024-07-13 11:27:53.523709] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d9793aa8-cb79-4235-a908-d27bcebbf966 '!=' d9793aa8-cb79-4235-a908-d27bcebbf966 ']' 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 122682 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 122682 ']' 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 122682 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122682 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122682' 00:16:18.893 killing process with pid 122682 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 122682 00:16:18.893 11:27:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 122682 00:16:18.893 [2024-07-13 11:27:53.554514] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.893 [2024-07-13 11:27:53.554812] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.893 [2024-07-13 11:27:53.554932] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:18.893 [2024-07-13 11:27:53.555041] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:16:19.151 [2024-07-13 11:27:53.753901] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:20.093 ************************************ 
00:16:20.093 END TEST raid_superblock_test 00:16:20.093 ************************************ 00:16:20.093 11:27:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:20.093 00:16:20.093 real 0m11.514s 00:16:20.093 user 0m20.643s 00:16:20.093 sys 0m1.261s 00:16:20.093 11:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.093 11:27:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.093 11:27:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:20.093 11:27:54 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:16:20.093 11:27:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:20.093 11:27:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.093 11:27:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.094 ************************************ 00:16:20.094 START TEST raid_read_error_test 00:16:20.094 ************************************ 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:20.094 11:27:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.EKqFzxd70L 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=123076 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 123076 /var/tmp/spdk-raid.sock 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 123076 ']' 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:20.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.094 11:27:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:20.094 [2024-07-13 11:27:54.827021] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:20.094 [2024-07-13 11:27:54.827460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123076 ] 00:16:20.352 [2024-07-13 11:27:55.001122] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.610 [2024-07-13 11:27:55.239327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.869 [2024-07-13 11:27:55.429066] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:21.127 11:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:21.127 11:27:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:21.127 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:21.127 11:27:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:21.386 BaseBdev1_malloc 00:16:21.386 11:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:21.644 true 00:16:21.644 11:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:21.903 [2024-07-13 11:27:56.428519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:21.903 [2024-07-13 11:27:56.428790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.903 [2024-07-13 11:27:56.428862] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:21.903 [2024-07-13 11:27:56.429157] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.903 [2024-07-13 11:27:56.431159] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.903 [2024-07-13 11:27:56.431318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:21.903 BaseBdev1 00:16:21.903 11:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:21.903 11:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:22.161 BaseBdev2_malloc 00:16:22.161 11:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:22.419 true 00:16:22.419 11:27:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:22.419 [2024-07-13 11:27:57.140429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:22.419 [2024-07-13 11:27:57.140650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.419 [2024-07-13 11:27:57.140806] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:22.419 [2024-07-13 11:27:57.140913] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.419 [2024-07-13 11:27:57.143231] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.419 [2024-07-13 11:27:57.143390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:22.419 BaseBdev2 00:16:22.419 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:22.678 [2024-07-13 11:27:57.332523] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:22.678 [2024-07-13 11:27:57.334220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.678 [2024-07-13 11:27:57.334567] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:22.678 [2024-07-13 11:27:57.334680] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:22.678 [2024-07-13 11:27:57.334832] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:22.678 [2024-07-13 11:27:57.335234] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:22.678 [2024-07-13 11:27:57.335366] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:22.678 [2024-07-13 11:27:57.335585] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.678 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.936 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.936 "name": "raid_bdev1", 00:16:22.936 "uuid": "8e08ec5e-d120-4740-9296-c126dbeff8ad", 00:16:22.936 "strip_size_kb": 64, 00:16:22.936 "state": "online", 00:16:22.936 "raid_level": "concat", 00:16:22.936 "superblock": true, 00:16:22.936 "num_base_bdevs": 2, 00:16:22.936 "num_base_bdevs_discovered": 2, 00:16:22.936 "num_base_bdevs_operational": 2, 00:16:22.936 "base_bdevs_list": [ 00:16:22.936 { 00:16:22.936 "name": "BaseBdev1", 00:16:22.936 "uuid": "f355ead4-ba36-5e98-ab2a-a77f59dbeea8", 00:16:22.936 "is_configured": true, 00:16:22.936 "data_offset": 2048, 00:16:22.936 "data_size": 63488 00:16:22.936 }, 00:16:22.936 { 00:16:22.936 "name": "BaseBdev2", 00:16:22.936 "uuid": "a411b7da-4de3-507c-9717-7e4c15ae324e", 00:16:22.936 "is_configured": true, 00:16:22.936 "data_offset": 2048, 00:16:22.936 "data_size": 63488 00:16:22.936 } 00:16:22.936 ] 00:16:22.936 }' 00:16:22.936 11:27:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.936 11:27:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.504 11:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:23.504 11:27:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:23.762 [2024-07-13 11:27:58.305801] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:24.696 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=concat 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.955 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.213 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.213 "name": "raid_bdev1", 00:16:25.213 "uuid": "8e08ec5e-d120-4740-9296-c126dbeff8ad", 00:16:25.213 "strip_size_kb": 64, 00:16:25.214 "state": "online", 00:16:25.214 "raid_level": "concat", 00:16:25.214 "superblock": true, 00:16:25.214 "num_base_bdevs": 2, 00:16:25.214 "num_base_bdevs_discovered": 2, 00:16:25.214 "num_base_bdevs_operational": 2, 00:16:25.214 "base_bdevs_list": [ 00:16:25.214 { 00:16:25.214 "name": "BaseBdev1", 00:16:25.214 "uuid": "f355ead4-ba36-5e98-ab2a-a77f59dbeea8", 00:16:25.214 "is_configured": true, 00:16:25.214 "data_offset": 2048, 00:16:25.214 "data_size": 63488 00:16:25.214 }, 00:16:25.214 { 00:16:25.214 "name": "BaseBdev2", 00:16:25.214 "uuid": "a411b7da-4de3-507c-9717-7e4c15ae324e", 00:16:25.214 "is_configured": true, 00:16:25.214 "data_offset": 2048, 00:16:25.214 "data_size": 63488 00:16:25.214 } 00:16:25.214 ] 00:16:25.214 }' 00:16:25.214 11:27:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.214 11:27:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.781 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:26.040 [2024-07-13 11:28:00.543158] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:26.040 [2024-07-13 11:28:00.543220] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:26.040 [2024-07-13 11:28:00.545728] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.040 [2024-07-13 11:28:00.545780] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.040 [2024-07-13 11:28:00.545819] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.040 [2024-07-13 11:28:00.545829] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:26.040 0 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 123076 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 123076 ']' 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 123076 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:26.040 
11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123076 00:16:26.040 killing process with pid 123076 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123076' 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 123076 00:16:26.040 11:28:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 123076 00:16:26.040 [2024-07-13 11:28:00.578287] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.040 [2024-07-13 11:28:00.669995] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.EKqFzxd70L 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:16:27.416 ************************************ 00:16:27.416 END TEST raid_read_error_test 00:16:27.416 ************************************ 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:16:27.416 00:16:27.416 real 0m7.003s 00:16:27.416 user 0m10.572s 00:16:27.416 sys 0m0.758s 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:27.416 11:28:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.416 11:28:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:27.416 11:28:01 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:16:27.416 11:28:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:27.416 11:28:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:27.416 11:28:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.416 ************************************ 00:16:27.416 START TEST raid_write_error_test 00:16:27.416 ************************************ 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:27.416 11:28:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.JGkLyNsCMD 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=123285 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 123285 /var/tmp/spdk-raid.sock 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:27.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 123285 ']' 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:27.416 11:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:27.417 11:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:27.417 11:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:27.417 11:28:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.417 [2024-07-13 11:28:01.888988] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:27.417 [2024-07-13 11:28:01.889446] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123285 ] 00:16:27.417 [2024-07-13 11:28:02.059841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.676 [2024-07-13 11:28:02.240719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.935 [2024-07-13 11:28:02.426012] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.193 11:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.193 11:28:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:28.193 11:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:28.193 11:28:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:28.451 BaseBdev1_malloc 00:16:28.451 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:28.708 true 00:16:28.708 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:28.708 [2024-07-13 11:28:03.428143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:28.708 [2024-07-13 11:28:03.428501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.708 [2024-07-13 11:28:03.428571] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:28.708 [2024-07-13 11:28:03.428829] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.708 [2024-07-13 11:28:03.431164] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.708 [2024-07-13 11:28:03.431343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:28.708 BaseBdev1 00:16:28.708 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:28.708 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:28.966 BaseBdev2_malloc 00:16:28.966 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:29.223 true 00:16:29.223 11:28:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:29.481 [2024-07-13 11:28:04.048708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:29.481 [2024-07-13 11:28:04.048951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.481 [2024-07-13 11:28:04.049022] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:16:29.481 [2024-07-13 
11:28:04.049303] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.481 [2024-07-13 11:28:04.051537] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.481 [2024-07-13 11:28:04.051701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:29.481 BaseBdev2 00:16:29.481 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:29.740 [2024-07-13 11:28:04.236812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.740 [2024-07-13 11:28:04.238880] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:29.740 [2024-07-13 11:28:04.239236] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:29.740 [2024-07-13 11:28:04.239351] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:29.740 [2024-07-13 11:28:04.239497] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:29.740 [2024-07-13 11:28:04.239906] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:29.740 [2024-07-13 11:28:04.240023] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:16:29.740 [2024-07-13 11:28:04.240274] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.740 "name": "raid_bdev1", 00:16:29.740 "uuid": "65bfa601-ba84-482e-8142-3c29b3eab7e7", 00:16:29.740 "strip_size_kb": 64, 00:16:29.740 "state": "online", 00:16:29.740 "raid_level": "concat", 00:16:29.740 "superblock": true, 00:16:29.740 "num_base_bdevs": 2, 00:16:29.740 "num_base_bdevs_discovered": 2, 00:16:29.740 "num_base_bdevs_operational": 2, 00:16:29.740 "base_bdevs_list": [ 00:16:29.740 { 
00:16:29.740 "name": "BaseBdev1", 00:16:29.740 "uuid": "c47a685c-0b4b-5738-abfe-5cb3c4f7e91a", 00:16:29.740 "is_configured": true, 00:16:29.740 "data_offset": 2048, 00:16:29.740 "data_size": 63488 00:16:29.740 }, 00:16:29.740 { 00:16:29.740 "name": "BaseBdev2", 00:16:29.740 "uuid": "7076f8a0-0b9e-596c-9875-7657d3ab8dfc", 00:16:29.740 "is_configured": true, 00:16:29.740 "data_offset": 2048, 00:16:29.740 "data_size": 63488 00:16:29.740 } 00:16:29.740 ] 00:16:29.740 }' 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.740 11:28:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.675 11:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:30.675 11:28:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:30.675 [2024-07-13 11:28:05.186078] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.612 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.871 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.871 "name": "raid_bdev1", 00:16:31.871 "uuid": "65bfa601-ba84-482e-8142-3c29b3eab7e7", 00:16:31.871 "strip_size_kb": 64, 00:16:31.871 "state": "online", 00:16:31.871 "raid_level": "concat", 00:16:31.871 "superblock": true, 00:16:31.871 "num_base_bdevs": 2, 00:16:31.871 "num_base_bdevs_discovered": 2, 00:16:31.871 "num_base_bdevs_operational": 2, 00:16:31.871 "base_bdevs_list": [ 00:16:31.871 { 
00:16:31.871 "name": "BaseBdev1", 00:16:31.871 "uuid": "c47a685c-0b4b-5738-abfe-5cb3c4f7e91a", 00:16:31.871 "is_configured": true, 00:16:31.871 "data_offset": 2048, 00:16:31.871 "data_size": 63488 00:16:31.871 }, 00:16:31.871 { 00:16:31.871 "name": "BaseBdev2", 00:16:31.871 "uuid": "7076f8a0-0b9e-596c-9875-7657d3ab8dfc", 00:16:31.871 "is_configured": true, 00:16:31.871 "data_offset": 2048, 00:16:31.871 "data_size": 63488 00:16:31.871 } 00:16:31.871 ] 00:16:31.871 }' 00:16:31.871 11:28:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.871 11:28:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:32.808 [2024-07-13 11:28:07.517561] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.808 [2024-07-13 11:28:07.517871] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.808 [2024-07-13 11:28:07.520499] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.808 [2024-07-13 11:28:07.520738] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.808 [2024-07-13 11:28:07.520808] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.808 [2024-07-13 11:28:07.520932] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:16:32.808 0 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 123285 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 123285 ']' 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 123285 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123285 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123285' 00:16:32.808 killing process with pid 123285 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 123285 00:16:32.808 11:28:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 123285 00:16:32.808 [2024-07-13 11:28:07.551142] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.067 [2024-07-13 11:28:07.638984] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.JGkLyNsCMD 00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 
00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:16:34.006 00:16:34.006 real 0m6.909s 00:16:34.006 user 0m10.352s 00:16:34.006 sys 0m0.821s 00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.006 11:28:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.006 ************************************ 00:16:34.006 END TEST raid_write_error_test 00:16:34.006 ************************************ 00:16:34.264 11:28:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:34.264 11:28:08 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:34.264 11:28:08 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:16:34.264 11:28:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:34.264 11:28:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.264 11:28:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.264 ************************************ 00:16:34.264 START TEST raid_state_function_test 00:16:34.264 ************************************ 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:34.264 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local 
strip_size_create_arg 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=123466 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123466' 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:34.265 Process raid pid: 123466 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 123466 /var/tmp/spdk-raid.sock 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 123466 ']' 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:34.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:34.265 11:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.265 [2024-07-13 11:28:08.846542] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:34.265 [2024-07-13 11:28:08.846752] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.524 [2024-07-13 11:28:09.013332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.524 [2024-07-13 11:28:09.195213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.783 [2024-07-13 11:28:09.383303] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:35.350 [2024-07-13 11:28:09.984794] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:35.350 [2024-07-13 11:28:09.984886] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:35.350 [2024-07-13 11:28:09.984899] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.350 [2024-07-13 11:28:09.984934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:35.350 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.351 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.351 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.351 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.351 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.351 11:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.609 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.609 "name": "Existed_Raid", 00:16:35.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.609 "strip_size_kb": 0, 00:16:35.609 "state": "configuring", 00:16:35.609 "raid_level": "raid1", 00:16:35.609 "superblock": false, 00:16:35.609 "num_base_bdevs": 2, 00:16:35.609 "num_base_bdevs_discovered": 0, 00:16:35.609 "num_base_bdevs_operational": 2, 00:16:35.609 "base_bdevs_list": [ 
00:16:35.609 { 00:16:35.609 "name": "BaseBdev1", 00:16:35.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.609 "is_configured": false, 00:16:35.609 "data_offset": 0, 00:16:35.609 "data_size": 0 00:16:35.609 }, 00:16:35.609 { 00:16:35.609 "name": "BaseBdev2", 00:16:35.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.609 "is_configured": false, 00:16:35.609 "data_offset": 0, 00:16:35.609 "data_size": 0 00:16:35.609 } 00:16:35.609 ] 00:16:35.609 }' 00:16:35.609 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.609 11:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.176 11:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:36.434 [2024-07-13 11:28:11.092909] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:36.434 [2024-07-13 11:28:11.092945] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:36.434 11:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:36.693 [2024-07-13 11:28:11.276945] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:36.693 [2024-07-13 11:28:11.276999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:36.693 [2024-07-13 11:28:11.277010] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:36.693 [2024-07-13 11:28:11.277034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:36.693 11:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:36.951 [2024-07-13 11:28:11.574307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:36.951 BaseBdev1 00:16:36.951 11:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:36.951 11:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:36.951 11:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:36.951 11:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:36.951 11:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:36.951 11:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:36.951 11:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.210 11:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:37.469 [ 00:16:37.469 { 00:16:37.469 "name": "BaseBdev1", 00:16:37.469 "aliases": [ 00:16:37.469 "d8d73324-1030-49a0-a03e-6c7d8495ec9a" 00:16:37.469 ], 00:16:37.469 "product_name": "Malloc disk", 00:16:37.469 "block_size": 512, 00:16:37.469 "num_blocks": 
65536, 00:16:37.469 "uuid": "d8d73324-1030-49a0-a03e-6c7d8495ec9a", 00:16:37.469 "assigned_rate_limits": { 00:16:37.469 "rw_ios_per_sec": 0, 00:16:37.469 "rw_mbytes_per_sec": 0, 00:16:37.469 "r_mbytes_per_sec": 0, 00:16:37.469 "w_mbytes_per_sec": 0 00:16:37.469 }, 00:16:37.469 "claimed": true, 00:16:37.469 "claim_type": "exclusive_write", 00:16:37.469 "zoned": false, 00:16:37.469 "supported_io_types": { 00:16:37.469 "read": true, 00:16:37.469 "write": true, 00:16:37.469 "unmap": true, 00:16:37.469 "flush": true, 00:16:37.469 "reset": true, 00:16:37.469 "nvme_admin": false, 00:16:37.469 "nvme_io": false, 00:16:37.469 "nvme_io_md": false, 00:16:37.469 "write_zeroes": true, 00:16:37.469 "zcopy": true, 00:16:37.469 "get_zone_info": false, 00:16:37.469 "zone_management": false, 00:16:37.469 "zone_append": false, 00:16:37.469 "compare": false, 00:16:37.469 "compare_and_write": false, 00:16:37.469 "abort": true, 00:16:37.469 "seek_hole": false, 00:16:37.469 "seek_data": false, 00:16:37.469 "copy": true, 00:16:37.469 "nvme_iov_md": false 00:16:37.469 }, 00:16:37.469 "memory_domains": [ 00:16:37.469 { 00:16:37.469 "dma_device_id": "system", 00:16:37.469 "dma_device_type": 1 00:16:37.469 }, 00:16:37.469 { 00:16:37.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.469 "dma_device_type": 2 00:16:37.469 } 00:16:37.469 ], 00:16:37.469 "driver_specific": {} 00:16:37.469 } 00:16:37.469 ] 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.469 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.727 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.727 "name": "Existed_Raid", 00:16:37.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.727 "strip_size_kb": 0, 00:16:37.727 "state": "configuring", 00:16:37.727 "raid_level": "raid1", 00:16:37.727 "superblock": false, 00:16:37.727 "num_base_bdevs": 2, 00:16:37.727 "num_base_bdevs_discovered": 1, 00:16:37.727 "num_base_bdevs_operational": 2, 00:16:37.727 "base_bdevs_list": [ 00:16:37.727 { 00:16:37.727 "name": "BaseBdev1", 00:16:37.727 "uuid": 
"d8d73324-1030-49a0-a03e-6c7d8495ec9a", 00:16:37.727 "is_configured": true, 00:16:37.727 "data_offset": 0, 00:16:37.727 "data_size": 65536 00:16:37.727 }, 00:16:37.727 { 00:16:37.727 "name": "BaseBdev2", 00:16:37.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.727 "is_configured": false, 00:16:37.727 "data_offset": 0, 00:16:37.727 "data_size": 0 00:16:37.727 } 00:16:37.727 ] 00:16:37.727 }' 00:16:37.727 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.727 11:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.293 11:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:38.551 [2024-07-13 11:28:13.074613] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:38.551 [2024-07-13 11:28:13.074768] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:16:38.551 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:38.551 [2024-07-13 11:28:13.266669] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:38.551 [2024-07-13 11:28:13.268731] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.551 [2024-07-13 11:28:13.268895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.551 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:38.551 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.552 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.811 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.811 "name": "Existed_Raid", 00:16:38.811 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:38.811 "strip_size_kb": 0, 00:16:38.811 "state": "configuring", 00:16:38.811 "raid_level": "raid1", 00:16:38.811 "superblock": false, 00:16:38.811 "num_base_bdevs": 2, 00:16:38.811 "num_base_bdevs_discovered": 1, 00:16:38.811 "num_base_bdevs_operational": 2, 00:16:38.811 "base_bdevs_list": [ 00:16:38.811 { 00:16:38.811 "name": "BaseBdev1", 00:16:38.811 "uuid": "d8d73324-1030-49a0-a03e-6c7d8495ec9a", 00:16:38.811 "is_configured": true, 00:16:38.811 "data_offset": 0, 00:16:38.811 "data_size": 65536 00:16:38.811 }, 00:16:38.811 { 00:16:38.811 "name": "BaseBdev2", 00:16:38.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.811 "is_configured": false, 00:16:38.811 "data_offset": 0, 00:16:38.811 "data_size": 0 00:16:38.811 } 00:16:38.811 ] 00:16:38.811 }' 00:16:38.811 11:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.811 11:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.387 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:39.697 [2024-07-13 11:28:14.376051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.697 [2024-07-13 11:28:14.376255] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:39.697 [2024-07-13 11:28:14.376294] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:39.697 [2024-07-13 11:28:14.376529] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:39.697 [2024-07-13 11:28:14.376996] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:39.697 [2024-07-13 11:28:14.377157] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:39.697 [2024-07-13 11:28:14.377554] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.697 BaseBdev2 00:16:39.697 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:39.697 11:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:39.697 11:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:39.697 11:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:39.697 11:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:39.697 11:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:39.697 11:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:39.984 11:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:40.252 [ 00:16:40.252 { 00:16:40.252 "name": "BaseBdev2", 00:16:40.252 "aliases": [ 00:16:40.252 "b30635a4-29c8-408a-8079-554615b18a81" 00:16:40.252 ], 00:16:40.252 "product_name": "Malloc disk", 00:16:40.252 "block_size": 512, 00:16:40.252 "num_blocks": 65536, 00:16:40.252 "uuid": "b30635a4-29c8-408a-8079-554615b18a81", 00:16:40.252 
"assigned_rate_limits": { 00:16:40.252 "rw_ios_per_sec": 0, 00:16:40.252 "rw_mbytes_per_sec": 0, 00:16:40.252 "r_mbytes_per_sec": 0, 00:16:40.252 "w_mbytes_per_sec": 0 00:16:40.252 }, 00:16:40.252 "claimed": true, 00:16:40.252 "claim_type": "exclusive_write", 00:16:40.252 "zoned": false, 00:16:40.252 "supported_io_types": { 00:16:40.252 "read": true, 00:16:40.252 "write": true, 00:16:40.252 "unmap": true, 00:16:40.252 "flush": true, 00:16:40.252 "reset": true, 00:16:40.252 "nvme_admin": false, 00:16:40.252 "nvme_io": false, 00:16:40.252 "nvme_io_md": false, 00:16:40.252 "write_zeroes": true, 00:16:40.252 "zcopy": true, 00:16:40.252 "get_zone_info": false, 00:16:40.252 "zone_management": false, 00:16:40.252 "zone_append": false, 00:16:40.252 "compare": false, 00:16:40.252 "compare_and_write": false, 00:16:40.252 "abort": true, 00:16:40.252 "seek_hole": false, 00:16:40.252 "seek_data": false, 00:16:40.252 "copy": true, 00:16:40.252 "nvme_iov_md": false 00:16:40.252 }, 00:16:40.252 "memory_domains": [ 00:16:40.252 { 00:16:40.252 "dma_device_id": "system", 00:16:40.252 "dma_device_type": 1 00:16:40.252 }, 00:16:40.252 { 00:16:40.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.252 "dma_device_type": 2 00:16:40.252 } 00:16:40.252 ], 00:16:40.252 "driver_specific": {} 00:16:40.252 } 00:16:40.252 ] 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.252 11:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.510 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.510 "name": "Existed_Raid", 00:16:40.510 "uuid": "97d4bf53-e5c8-40e6-956a-d3c46ed26412", 00:16:40.511 "strip_size_kb": 0, 00:16:40.511 "state": "online", 00:16:40.511 "raid_level": "raid1", 00:16:40.511 "superblock": false, 00:16:40.511 "num_base_bdevs": 2, 00:16:40.511 "num_base_bdevs_discovered": 2, 00:16:40.511 "num_base_bdevs_operational": 
2, 00:16:40.511 "base_bdevs_list": [ 00:16:40.511 { 00:16:40.511 "name": "BaseBdev1", 00:16:40.511 "uuid": "d8d73324-1030-49a0-a03e-6c7d8495ec9a", 00:16:40.511 "is_configured": true, 00:16:40.511 "data_offset": 0, 00:16:40.511 "data_size": 65536 00:16:40.511 }, 00:16:40.511 { 00:16:40.511 "name": "BaseBdev2", 00:16:40.511 "uuid": "b30635a4-29c8-408a-8079-554615b18a81", 00:16:40.511 "is_configured": true, 00:16:40.511 "data_offset": 0, 00:16:40.511 "data_size": 65536 00:16:40.511 } 00:16:40.511 ] 00:16:40.511 }' 00:16:40.511 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.511 11:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.079 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:41.079 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:41.079 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:41.079 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:41.079 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:41.079 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:41.079 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:41.079 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:41.079 [2024-07-13 11:28:15.812569] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.079 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:41.079 "name": "Existed_Raid", 00:16:41.079 "aliases": [ 00:16:41.079 "97d4bf53-e5c8-40e6-956a-d3c46ed26412" 00:16:41.079 ], 00:16:41.079 "product_name": "Raid Volume", 00:16:41.079 "block_size": 512, 00:16:41.079 "num_blocks": 65536, 00:16:41.079 "uuid": "97d4bf53-e5c8-40e6-956a-d3c46ed26412", 00:16:41.079 "assigned_rate_limits": { 00:16:41.079 "rw_ios_per_sec": 0, 00:16:41.079 "rw_mbytes_per_sec": 0, 00:16:41.079 "r_mbytes_per_sec": 0, 00:16:41.079 "w_mbytes_per_sec": 0 00:16:41.079 }, 00:16:41.079 "claimed": false, 00:16:41.079 "zoned": false, 00:16:41.079 "supported_io_types": { 00:16:41.079 "read": true, 00:16:41.079 "write": true, 00:16:41.079 "unmap": false, 00:16:41.079 "flush": false, 00:16:41.079 "reset": true, 00:16:41.079 "nvme_admin": false, 00:16:41.079 "nvme_io": false, 00:16:41.079 "nvme_io_md": false, 00:16:41.079 "write_zeroes": true, 00:16:41.079 "zcopy": false, 00:16:41.079 "get_zone_info": false, 00:16:41.079 "zone_management": false, 00:16:41.079 "zone_append": false, 00:16:41.079 "compare": false, 00:16:41.079 "compare_and_write": false, 00:16:41.079 "abort": false, 00:16:41.079 "seek_hole": false, 00:16:41.079 "seek_data": false, 00:16:41.079 "copy": false, 00:16:41.079 "nvme_iov_md": false 00:16:41.079 }, 00:16:41.079 "memory_domains": [ 00:16:41.079 { 00:16:41.079 "dma_device_id": "system", 00:16:41.079 "dma_device_type": 1 00:16:41.079 }, 00:16:41.079 { 00:16:41.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.079 "dma_device_type": 2 00:16:41.079 }, 00:16:41.079 { 00:16:41.079 "dma_device_id": "system", 00:16:41.079 "dma_device_type": 1 00:16:41.079 }, 00:16:41.079 { 00:16:41.079 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.079 "dma_device_type": 2 00:16:41.079 } 00:16:41.079 ], 00:16:41.079 "driver_specific": { 00:16:41.079 "raid": { 00:16:41.079 "uuid": "97d4bf53-e5c8-40e6-956a-d3c46ed26412", 00:16:41.079 "strip_size_kb": 0, 00:16:41.079 "state": "online", 00:16:41.079 "raid_level": "raid1", 00:16:41.079 "superblock": false, 00:16:41.079 "num_base_bdevs": 2, 00:16:41.079 "num_base_bdevs_discovered": 2, 00:16:41.079 "num_base_bdevs_operational": 2, 00:16:41.079 "base_bdevs_list": [ 00:16:41.079 { 00:16:41.079 "name": "BaseBdev1", 00:16:41.079 "uuid": "d8d73324-1030-49a0-a03e-6c7d8495ec9a", 00:16:41.079 "is_configured": true, 00:16:41.079 "data_offset": 0, 00:16:41.079 "data_size": 65536 00:16:41.079 }, 00:16:41.079 { 00:16:41.079 "name": "BaseBdev2", 00:16:41.079 "uuid": "b30635a4-29c8-408a-8079-554615b18a81", 00:16:41.079 "is_configured": true, 00:16:41.079 "data_offset": 0, 00:16:41.079 "data_size": 65536 00:16:41.079 } 00:16:41.079 ] 00:16:41.079 } 00:16:41.079 } 00:16:41.079 }' 00:16:41.337 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:41.337 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:41.337 BaseBdev2' 00:16:41.337 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:41.337 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:41.337 11:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:41.337 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:41.337 "name": "BaseBdev1", 00:16:41.337 "aliases": [ 00:16:41.337 "d8d73324-1030-49a0-a03e-6c7d8495ec9a" 00:16:41.337 ], 00:16:41.337 "product_name": "Malloc disk", 00:16:41.337 "block_size": 512, 00:16:41.337 "num_blocks": 65536, 00:16:41.337 "uuid": "d8d73324-1030-49a0-a03e-6c7d8495ec9a", 00:16:41.337 "assigned_rate_limits": { 00:16:41.337 "rw_ios_per_sec": 0, 00:16:41.337 "rw_mbytes_per_sec": 0, 00:16:41.337 "r_mbytes_per_sec": 0, 00:16:41.337 "w_mbytes_per_sec": 0 00:16:41.337 }, 00:16:41.337 "claimed": true, 00:16:41.337 "claim_type": "exclusive_write", 00:16:41.337 "zoned": false, 00:16:41.337 "supported_io_types": { 00:16:41.337 "read": true, 00:16:41.337 "write": true, 00:16:41.337 "unmap": true, 00:16:41.337 "flush": true, 00:16:41.337 "reset": true, 00:16:41.337 "nvme_admin": false, 00:16:41.337 "nvme_io": false, 00:16:41.337 "nvme_io_md": false, 00:16:41.337 "write_zeroes": true, 00:16:41.337 "zcopy": true, 00:16:41.337 "get_zone_info": false, 00:16:41.337 "zone_management": false, 00:16:41.337 "zone_append": false, 00:16:41.337 "compare": false, 00:16:41.337 "compare_and_write": false, 00:16:41.337 "abort": true, 00:16:41.337 "seek_hole": false, 00:16:41.337 "seek_data": false, 00:16:41.337 "copy": true, 00:16:41.337 "nvme_iov_md": false 00:16:41.337 }, 00:16:41.337 "memory_domains": [ 00:16:41.337 { 00:16:41.337 "dma_device_id": "system", 00:16:41.337 "dma_device_type": 1 00:16:41.337 }, 00:16:41.337 { 00:16:41.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:41.337 "dma_device_type": 2 00:16:41.337 } 00:16:41.337 ], 00:16:41.337 "driver_specific": {} 00:16:41.337 }' 00:16:41.337 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:16:41.595 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:41.595 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:41.595 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.595 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:41.595 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:41.595 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.595 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:41.853 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:41.853 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.853 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:41.853 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:41.853 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:41.853 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:41.853 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:42.111 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:42.111 "name": "BaseBdev2", 00:16:42.111 "aliases": [ 00:16:42.111 "b30635a4-29c8-408a-8079-554615b18a81" 00:16:42.111 ], 00:16:42.111 "product_name": "Malloc disk", 00:16:42.111 "block_size": 512, 00:16:42.111 "num_blocks": 65536, 00:16:42.111 "uuid": "b30635a4-29c8-408a-8079-554615b18a81", 00:16:42.111 "assigned_rate_limits": { 00:16:42.111 "rw_ios_per_sec": 0, 00:16:42.111 "rw_mbytes_per_sec": 0, 00:16:42.111 "r_mbytes_per_sec": 0, 00:16:42.111 "w_mbytes_per_sec": 0 00:16:42.111 }, 00:16:42.111 "claimed": true, 00:16:42.111 "claim_type": "exclusive_write", 00:16:42.111 "zoned": false, 00:16:42.111 "supported_io_types": { 00:16:42.111 "read": true, 00:16:42.111 "write": true, 00:16:42.111 "unmap": true, 00:16:42.111 "flush": true, 00:16:42.111 "reset": true, 00:16:42.111 "nvme_admin": false, 00:16:42.111 "nvme_io": false, 00:16:42.111 "nvme_io_md": false, 00:16:42.111 "write_zeroes": true, 00:16:42.111 "zcopy": true, 00:16:42.111 "get_zone_info": false, 00:16:42.111 "zone_management": false, 00:16:42.111 "zone_append": false, 00:16:42.111 "compare": false, 00:16:42.111 "compare_and_write": false, 00:16:42.111 "abort": true, 00:16:42.111 "seek_hole": false, 00:16:42.111 "seek_data": false, 00:16:42.111 "copy": true, 00:16:42.111 "nvme_iov_md": false 00:16:42.111 }, 00:16:42.111 "memory_domains": [ 00:16:42.111 { 00:16:42.111 "dma_device_id": "system", 00:16:42.111 "dma_device_type": 1 00:16:42.111 }, 00:16:42.111 { 00:16:42.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.111 "dma_device_type": 2 00:16:42.111 } 00:16:42.111 ], 00:16:42.111 "driver_specific": {} 00:16:42.111 }' 00:16:42.111 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.111 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:42.111 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:16:42.111 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.369 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:42.369 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:42.369 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.369 11:28:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:42.369 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:42.369 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.369 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:42.627 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:42.627 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:42.627 [2024-07-13 11:28:17.296730] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.886 "name": "Existed_Raid", 00:16:42.886 "uuid": "97d4bf53-e5c8-40e6-956a-d3c46ed26412", 00:16:42.886 "strip_size_kb": 0, 00:16:42.886 "state": "online", 00:16:42.886 "raid_level": "raid1", 00:16:42.886 "superblock": false, 
00:16:42.886 "num_base_bdevs": 2, 00:16:42.886 "num_base_bdevs_discovered": 1, 00:16:42.886 "num_base_bdevs_operational": 1, 00:16:42.886 "base_bdevs_list": [ 00:16:42.886 { 00:16:42.886 "name": null, 00:16:42.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.886 "is_configured": false, 00:16:42.886 "data_offset": 0, 00:16:42.886 "data_size": 65536 00:16:42.886 }, 00:16:42.886 { 00:16:42.886 "name": "BaseBdev2", 00:16:42.886 "uuid": "b30635a4-29c8-408a-8079-554615b18a81", 00:16:42.886 "is_configured": true, 00:16:42.886 "data_offset": 0, 00:16:42.886 "data_size": 65536 00:16:42.886 } 00:16:42.886 ] 00:16:42.886 }' 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.886 11:28:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.820 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:43.820 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:43.820 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.820 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:43.820 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:43.820 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.820 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:44.079 [2024-07-13 11:28:18.739755] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:44.079 [2024-07-13 11:28:18.740054] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:44.079 [2024-07-13 11:28:18.807886] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:44.079 [2024-07-13 11:28:18.808127] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:44.079 [2024-07-13 11:28:18.808222] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:44.079 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:44.079 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:44.079 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.079 11:28:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 123466 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 123466 ']' 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # 
kill -0 123466 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123466 00:16:44.337 killing process with pid 123466 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123466' 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 123466 00:16:44.337 11:28:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 123466 00:16:44.337 [2024-07-13 11:28:19.068026] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:44.337 [2024-07-13 11:28:19.068123] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.721 ************************************ 00:16:45.721 END TEST raid_state_function_test 00:16:45.721 ************************************ 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:45.721 00:16:45.721 real 0m11.303s 00:16:45.721 user 0m20.076s 00:16:45.721 sys 0m1.261s 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.721 11:28:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:45.721 11:28:20 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:45.721 11:28:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:45.721 11:28:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.721 11:28:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.721 ************************************ 00:16:45.721 START TEST raid_state_function_test_sb 00:16:45.721 ************************************ 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:45.721 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 
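A pattern that recurs throughout these traces is the state check done by verify_raid_bdev_state: dump all RAID bdevs over RPC and filter the one under test with jq. A rough sketch of that check, using the RPC socket and script path shown above (field names match the JSON dumps captured in this log; the exact assertions vary per call site):
# Query the RAID bdev the same way the verify_raid_bdev_state traces above do
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
info=$($rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
# Fields the test asserts on: state, raid_level, strip size and base bdev counts
jq -r '.state, .raid_level, .strip_size_kb, .num_base_bdevs_discovered' <<< "$info"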
00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:45.722 Process raid pid: 123866 00:16:45.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=123866 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 123866' 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 123866 /var/tmp/spdk-raid.sock 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 123866 ']' 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.722 11:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.722 [2024-07-13 11:28:20.194471] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:45.722 [2024-07-13 11:28:20.194880] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.722 [2024-07-13 11:28:20.360457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.980 [2024-07-13 11:28:20.541504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.238 [2024-07-13 11:28:20.729874] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.495 11:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.495 11:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:16:46.496 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:46.753 [2024-07-13 11:28:21.348292] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:46.753 [2024-07-13 11:28:21.348536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:46.753 [2024-07-13 11:28:21.348642] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:46.753 [2024-07-13 11:28:21.348704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.753 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.011 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.011 "name": "Existed_Raid", 00:16:47.011 "uuid": "6689be16-d200-4625-be41-ebe77602f525", 00:16:47.011 "strip_size_kb": 0, 00:16:47.011 "state": "configuring", 00:16:47.011 "raid_level": "raid1", 00:16:47.011 "superblock": true, 00:16:47.011 "num_base_bdevs": 2, 00:16:47.011 "num_base_bdevs_discovered": 0, 00:16:47.011 
"num_base_bdevs_operational": 2, 00:16:47.011 "base_bdevs_list": [ 00:16:47.011 { 00:16:47.011 "name": "BaseBdev1", 00:16:47.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.011 "is_configured": false, 00:16:47.011 "data_offset": 0, 00:16:47.011 "data_size": 0 00:16:47.011 }, 00:16:47.011 { 00:16:47.011 "name": "BaseBdev2", 00:16:47.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.011 "is_configured": false, 00:16:47.011 "data_offset": 0, 00:16:47.011 "data_size": 0 00:16:47.011 } 00:16:47.011 ] 00:16:47.011 }' 00:16:47.011 11:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.011 11:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.943 11:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:47.943 [2024-07-13 11:28:22.520382] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:47.943 [2024-07-13 11:28:22.520530] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:47.943 11:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:48.200 [2024-07-13 11:28:22.788445] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.200 [2024-07-13 11:28:22.788610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:48.200 [2024-07-13 11:28:22.788704] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.200 [2024-07-13 11:28:22.788852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.200 11:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:48.458 [2024-07-13 11:28:23.041656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.458 BaseBdev1 00:16:48.458 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:48.458 11:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:48.458 11:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.458 11:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:48.458 11:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.458 11:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.458 11:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:48.715 [ 00:16:48.715 { 00:16:48.715 "name": "BaseBdev1", 00:16:48.715 "aliases": [ 00:16:48.715 "66101e3c-5f7b-4b3c-93ed-2fb977ef64a2" 
00:16:48.715 ], 00:16:48.715 "product_name": "Malloc disk", 00:16:48.715 "block_size": 512, 00:16:48.715 "num_blocks": 65536, 00:16:48.715 "uuid": "66101e3c-5f7b-4b3c-93ed-2fb977ef64a2", 00:16:48.715 "assigned_rate_limits": { 00:16:48.715 "rw_ios_per_sec": 0, 00:16:48.715 "rw_mbytes_per_sec": 0, 00:16:48.715 "r_mbytes_per_sec": 0, 00:16:48.715 "w_mbytes_per_sec": 0 00:16:48.715 }, 00:16:48.715 "claimed": true, 00:16:48.715 "claim_type": "exclusive_write", 00:16:48.715 "zoned": false, 00:16:48.715 "supported_io_types": { 00:16:48.715 "read": true, 00:16:48.715 "write": true, 00:16:48.715 "unmap": true, 00:16:48.715 "flush": true, 00:16:48.715 "reset": true, 00:16:48.715 "nvme_admin": false, 00:16:48.715 "nvme_io": false, 00:16:48.715 "nvme_io_md": false, 00:16:48.715 "write_zeroes": true, 00:16:48.715 "zcopy": true, 00:16:48.715 "get_zone_info": false, 00:16:48.715 "zone_management": false, 00:16:48.715 "zone_append": false, 00:16:48.715 "compare": false, 00:16:48.715 "compare_and_write": false, 00:16:48.715 "abort": true, 00:16:48.715 "seek_hole": false, 00:16:48.715 "seek_data": false, 00:16:48.715 "copy": true, 00:16:48.715 "nvme_iov_md": false 00:16:48.715 }, 00:16:48.715 "memory_domains": [ 00:16:48.715 { 00:16:48.715 "dma_device_id": "system", 00:16:48.715 "dma_device_type": 1 00:16:48.715 }, 00:16:48.715 { 00:16:48.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.715 "dma_device_type": 2 00:16:48.715 } 00:16:48.715 ], 00:16:48.715 "driver_specific": {} 00:16:48.715 } 00:16:48.715 ] 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.715 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.973 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.973 "name": "Existed_Raid", 00:16:48.973 "uuid": "0b9803aa-1720-4361-be08-ad218119eb42", 00:16:48.973 "strip_size_kb": 0, 00:16:48.973 "state": "configuring", 00:16:48.973 "raid_level": "raid1", 00:16:48.973 "superblock": true, 00:16:48.973 "num_base_bdevs": 2, 00:16:48.973 "num_base_bdevs_discovered": 
1, 00:16:48.973 "num_base_bdevs_operational": 2, 00:16:48.973 "base_bdevs_list": [ 00:16:48.973 { 00:16:48.973 "name": "BaseBdev1", 00:16:48.973 "uuid": "66101e3c-5f7b-4b3c-93ed-2fb977ef64a2", 00:16:48.973 "is_configured": true, 00:16:48.973 "data_offset": 2048, 00:16:48.973 "data_size": 63488 00:16:48.973 }, 00:16:48.973 { 00:16:48.973 "name": "BaseBdev2", 00:16:48.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.973 "is_configured": false, 00:16:48.973 "data_offset": 0, 00:16:48.973 "data_size": 0 00:16:48.973 } 00:16:48.973 ] 00:16:48.973 }' 00:16:48.973 11:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.973 11:28:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.907 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:49.907 [2024-07-13 11:28:24.489987] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.907 [2024-07-13 11:28:24.490139] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:16:49.907 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:50.165 [2024-07-13 11:28:24.686062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.165 [2024-07-13 11:28:24.688124] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.165 [2024-07-13 11:28:24.688284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.165 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:50.424 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.424 "name": "Existed_Raid", 00:16:50.424 "uuid": "de29b98d-6515-420c-acf3-74fb6f976dca", 00:16:50.424 "strip_size_kb": 0, 00:16:50.424 "state": "configuring", 00:16:50.424 "raid_level": "raid1", 00:16:50.424 "superblock": true, 00:16:50.424 "num_base_bdevs": 2, 00:16:50.424 "num_base_bdevs_discovered": 1, 00:16:50.424 "num_base_bdevs_operational": 2, 00:16:50.424 "base_bdevs_list": [ 00:16:50.424 { 00:16:50.424 "name": "BaseBdev1", 00:16:50.424 "uuid": "66101e3c-5f7b-4b3c-93ed-2fb977ef64a2", 00:16:50.424 "is_configured": true, 00:16:50.424 "data_offset": 2048, 00:16:50.424 "data_size": 63488 00:16:50.424 }, 00:16:50.424 { 00:16:50.424 "name": "BaseBdev2", 00:16:50.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.424 "is_configured": false, 00:16:50.424 "data_offset": 0, 00:16:50.424 "data_size": 0 00:16:50.424 } 00:16:50.424 ] 00:16:50.424 }' 00:16:50.424 11:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.424 11:28:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:50.991 11:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:51.250 [2024-07-13 11:28:25.869804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.250 [2024-07-13 11:28:25.870178] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:16:51.250 [2024-07-13 11:28:25.870302] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.250 [2024-07-13 11:28:25.870467] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:51.250 [2024-07-13 11:28:25.870929] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:16:51.250 BaseBdev2 00:16:51.250 [2024-07-13 11:28:25.871070] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:16:51.250 [2024-07-13 11:28:25.871317] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.250 11:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:51.250 11:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:51.250 11:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:51.250 11:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:51.250 11:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:51.250 11:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:51.250 11:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.509 11:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:51.768 [ 00:16:51.768 { 00:16:51.768 "name": "BaseBdev2", 00:16:51.768 "aliases": [ 00:16:51.768 
"57a36e4c-0fb1-47dd-8fba-927f42c0bdeb" 00:16:51.768 ], 00:16:51.768 "product_name": "Malloc disk", 00:16:51.768 "block_size": 512, 00:16:51.768 "num_blocks": 65536, 00:16:51.768 "uuid": "57a36e4c-0fb1-47dd-8fba-927f42c0bdeb", 00:16:51.768 "assigned_rate_limits": { 00:16:51.768 "rw_ios_per_sec": 0, 00:16:51.768 "rw_mbytes_per_sec": 0, 00:16:51.768 "r_mbytes_per_sec": 0, 00:16:51.768 "w_mbytes_per_sec": 0 00:16:51.768 }, 00:16:51.768 "claimed": true, 00:16:51.768 "claim_type": "exclusive_write", 00:16:51.768 "zoned": false, 00:16:51.768 "supported_io_types": { 00:16:51.768 "read": true, 00:16:51.768 "write": true, 00:16:51.768 "unmap": true, 00:16:51.768 "flush": true, 00:16:51.768 "reset": true, 00:16:51.768 "nvme_admin": false, 00:16:51.768 "nvme_io": false, 00:16:51.768 "nvme_io_md": false, 00:16:51.768 "write_zeroes": true, 00:16:51.768 "zcopy": true, 00:16:51.768 "get_zone_info": false, 00:16:51.768 "zone_management": false, 00:16:51.768 "zone_append": false, 00:16:51.768 "compare": false, 00:16:51.768 "compare_and_write": false, 00:16:51.768 "abort": true, 00:16:51.768 "seek_hole": false, 00:16:51.768 "seek_data": false, 00:16:51.768 "copy": true, 00:16:51.768 "nvme_iov_md": false 00:16:51.768 }, 00:16:51.768 "memory_domains": [ 00:16:51.768 { 00:16:51.768 "dma_device_id": "system", 00:16:51.768 "dma_device_type": 1 00:16:51.768 }, 00:16:51.768 { 00:16:51.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.768 "dma_device_type": 2 00:16:51.768 } 00:16:51.768 ], 00:16:51.768 "driver_specific": {} 00:16:51.768 } 00:16:51.768 ] 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.768 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.027 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.027 "name": "Existed_Raid", 00:16:52.027 "uuid": 
"de29b98d-6515-420c-acf3-74fb6f976dca", 00:16:52.027 "strip_size_kb": 0, 00:16:52.027 "state": "online", 00:16:52.027 "raid_level": "raid1", 00:16:52.027 "superblock": true, 00:16:52.027 "num_base_bdevs": 2, 00:16:52.027 "num_base_bdevs_discovered": 2, 00:16:52.027 "num_base_bdevs_operational": 2, 00:16:52.027 "base_bdevs_list": [ 00:16:52.027 { 00:16:52.027 "name": "BaseBdev1", 00:16:52.027 "uuid": "66101e3c-5f7b-4b3c-93ed-2fb977ef64a2", 00:16:52.027 "is_configured": true, 00:16:52.027 "data_offset": 2048, 00:16:52.027 "data_size": 63488 00:16:52.027 }, 00:16:52.027 { 00:16:52.027 "name": "BaseBdev2", 00:16:52.027 "uuid": "57a36e4c-0fb1-47dd-8fba-927f42c0bdeb", 00:16:52.027 "is_configured": true, 00:16:52.027 "data_offset": 2048, 00:16:52.027 "data_size": 63488 00:16:52.027 } 00:16:52.027 ] 00:16:52.027 }' 00:16:52.027 11:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.027 11:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.594 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:52.594 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:52.594 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:52.594 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:52.594 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:52.594 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:52.594 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:52.594 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:52.853 [2024-07-13 11:28:27.542384] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.853 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:52.853 "name": "Existed_Raid", 00:16:52.853 "aliases": [ 00:16:52.853 "de29b98d-6515-420c-acf3-74fb6f976dca" 00:16:52.853 ], 00:16:52.853 "product_name": "Raid Volume", 00:16:52.853 "block_size": 512, 00:16:52.853 "num_blocks": 63488, 00:16:52.853 "uuid": "de29b98d-6515-420c-acf3-74fb6f976dca", 00:16:52.853 "assigned_rate_limits": { 00:16:52.853 "rw_ios_per_sec": 0, 00:16:52.853 "rw_mbytes_per_sec": 0, 00:16:52.853 "r_mbytes_per_sec": 0, 00:16:52.853 "w_mbytes_per_sec": 0 00:16:52.853 }, 00:16:52.853 "claimed": false, 00:16:52.853 "zoned": false, 00:16:52.853 "supported_io_types": { 00:16:52.853 "read": true, 00:16:52.853 "write": true, 00:16:52.853 "unmap": false, 00:16:52.853 "flush": false, 00:16:52.853 "reset": true, 00:16:52.853 "nvme_admin": false, 00:16:52.853 "nvme_io": false, 00:16:52.853 "nvme_io_md": false, 00:16:52.853 "write_zeroes": true, 00:16:52.853 "zcopy": false, 00:16:52.853 "get_zone_info": false, 00:16:52.853 "zone_management": false, 00:16:52.853 "zone_append": false, 00:16:52.853 "compare": false, 00:16:52.853 "compare_and_write": false, 00:16:52.853 "abort": false, 00:16:52.853 "seek_hole": false, 00:16:52.853 "seek_data": false, 00:16:52.853 "copy": false, 00:16:52.853 "nvme_iov_md": false 00:16:52.853 }, 00:16:52.853 "memory_domains": [ 00:16:52.853 { 00:16:52.853 
"dma_device_id": "system", 00:16:52.853 "dma_device_type": 1 00:16:52.853 }, 00:16:52.853 { 00:16:52.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.853 "dma_device_type": 2 00:16:52.853 }, 00:16:52.853 { 00:16:52.853 "dma_device_id": "system", 00:16:52.853 "dma_device_type": 1 00:16:52.853 }, 00:16:52.853 { 00:16:52.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.853 "dma_device_type": 2 00:16:52.853 } 00:16:52.853 ], 00:16:52.853 "driver_specific": { 00:16:52.853 "raid": { 00:16:52.853 "uuid": "de29b98d-6515-420c-acf3-74fb6f976dca", 00:16:52.853 "strip_size_kb": 0, 00:16:52.853 "state": "online", 00:16:52.853 "raid_level": "raid1", 00:16:52.853 "superblock": true, 00:16:52.853 "num_base_bdevs": 2, 00:16:52.853 "num_base_bdevs_discovered": 2, 00:16:52.853 "num_base_bdevs_operational": 2, 00:16:52.853 "base_bdevs_list": [ 00:16:52.853 { 00:16:52.853 "name": "BaseBdev1", 00:16:52.853 "uuid": "66101e3c-5f7b-4b3c-93ed-2fb977ef64a2", 00:16:52.853 "is_configured": true, 00:16:52.853 "data_offset": 2048, 00:16:52.853 "data_size": 63488 00:16:52.853 }, 00:16:52.853 { 00:16:52.853 "name": "BaseBdev2", 00:16:52.853 "uuid": "57a36e4c-0fb1-47dd-8fba-927f42c0bdeb", 00:16:52.853 "is_configured": true, 00:16:52.853 "data_offset": 2048, 00:16:52.853 "data_size": 63488 00:16:52.853 } 00:16:52.853 ] 00:16:52.853 } 00:16:52.853 } 00:16:52.853 }' 00:16:52.853 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:53.112 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:53.112 BaseBdev2' 00:16:53.112 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.112 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:53.112 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.112 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.112 "name": "BaseBdev1", 00:16:53.112 "aliases": [ 00:16:53.112 "66101e3c-5f7b-4b3c-93ed-2fb977ef64a2" 00:16:53.112 ], 00:16:53.112 "product_name": "Malloc disk", 00:16:53.112 "block_size": 512, 00:16:53.112 "num_blocks": 65536, 00:16:53.112 "uuid": "66101e3c-5f7b-4b3c-93ed-2fb977ef64a2", 00:16:53.112 "assigned_rate_limits": { 00:16:53.112 "rw_ios_per_sec": 0, 00:16:53.112 "rw_mbytes_per_sec": 0, 00:16:53.112 "r_mbytes_per_sec": 0, 00:16:53.112 "w_mbytes_per_sec": 0 00:16:53.112 }, 00:16:53.112 "claimed": true, 00:16:53.112 "claim_type": "exclusive_write", 00:16:53.112 "zoned": false, 00:16:53.112 "supported_io_types": { 00:16:53.112 "read": true, 00:16:53.112 "write": true, 00:16:53.112 "unmap": true, 00:16:53.112 "flush": true, 00:16:53.112 "reset": true, 00:16:53.112 "nvme_admin": false, 00:16:53.112 "nvme_io": false, 00:16:53.112 "nvme_io_md": false, 00:16:53.112 "write_zeroes": true, 00:16:53.112 "zcopy": true, 00:16:53.112 "get_zone_info": false, 00:16:53.112 "zone_management": false, 00:16:53.112 "zone_append": false, 00:16:53.112 "compare": false, 00:16:53.112 "compare_and_write": false, 00:16:53.112 "abort": true, 00:16:53.112 "seek_hole": false, 00:16:53.112 "seek_data": false, 00:16:53.112 "copy": true, 00:16:53.112 "nvme_iov_md": false 00:16:53.112 }, 00:16:53.112 "memory_domains": [ 00:16:53.112 { 00:16:53.112 
"dma_device_id": "system", 00:16:53.112 "dma_device_type": 1 00:16:53.112 }, 00:16:53.112 { 00:16:53.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.112 "dma_device_type": 2 00:16:53.112 } 00:16:53.112 ], 00:16:53.112 "driver_specific": {} 00:16:53.112 }' 00:16:53.112 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.370 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.370 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:53.370 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.370 11:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:53.371 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:53.371 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.371 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:53.629 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:53.629 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.629 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:53.629 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:53.629 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:53.629 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:53.629 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:53.888 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:53.888 "name": "BaseBdev2", 00:16:53.888 "aliases": [ 00:16:53.888 "57a36e4c-0fb1-47dd-8fba-927f42c0bdeb" 00:16:53.888 ], 00:16:53.888 "product_name": "Malloc disk", 00:16:53.888 "block_size": 512, 00:16:53.888 "num_blocks": 65536, 00:16:53.888 "uuid": "57a36e4c-0fb1-47dd-8fba-927f42c0bdeb", 00:16:53.888 "assigned_rate_limits": { 00:16:53.888 "rw_ios_per_sec": 0, 00:16:53.888 "rw_mbytes_per_sec": 0, 00:16:53.888 "r_mbytes_per_sec": 0, 00:16:53.888 "w_mbytes_per_sec": 0 00:16:53.888 }, 00:16:53.888 "claimed": true, 00:16:53.888 "claim_type": "exclusive_write", 00:16:53.888 "zoned": false, 00:16:53.888 "supported_io_types": { 00:16:53.888 "read": true, 00:16:53.888 "write": true, 00:16:53.888 "unmap": true, 00:16:53.888 "flush": true, 00:16:53.888 "reset": true, 00:16:53.888 "nvme_admin": false, 00:16:53.888 "nvme_io": false, 00:16:53.888 "nvme_io_md": false, 00:16:53.888 "write_zeroes": true, 00:16:53.888 "zcopy": true, 00:16:53.888 "get_zone_info": false, 00:16:53.888 "zone_management": false, 00:16:53.888 "zone_append": false, 00:16:53.888 "compare": false, 00:16:53.888 "compare_and_write": false, 00:16:53.888 "abort": true, 00:16:53.888 "seek_hole": false, 00:16:53.888 "seek_data": false, 00:16:53.888 "copy": true, 00:16:53.888 "nvme_iov_md": false 00:16:53.888 }, 00:16:53.888 "memory_domains": [ 00:16:53.888 { 00:16:53.888 "dma_device_id": "system", 00:16:53.888 "dma_device_type": 1 00:16:53.888 }, 00:16:53.888 { 00:16:53.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:53.888 "dma_device_type": 2 00:16:53.888 } 00:16:53.888 ], 00:16:53.888 "driver_specific": {} 00:16:53.888 }' 00:16:53.888 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:53.888 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.146 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:54.146 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.146 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.146 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.146 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.146 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.405 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.405 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.405 11:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.405 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.405 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:54.664 [2024-07-13 11:28:29.194572] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:54.664 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.923 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.923 "name": "Existed_Raid", 00:16:54.923 "uuid": "de29b98d-6515-420c-acf3-74fb6f976dca", 00:16:54.923 "strip_size_kb": 0, 00:16:54.923 "state": "online", 00:16:54.923 "raid_level": "raid1", 00:16:54.923 "superblock": true, 00:16:54.923 "num_base_bdevs": 2, 00:16:54.923 "num_base_bdevs_discovered": 1, 00:16:54.923 "num_base_bdevs_operational": 1, 00:16:54.923 "base_bdevs_list": [ 00:16:54.923 { 00:16:54.923 "name": null, 00:16:54.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.923 "is_configured": false, 00:16:54.923 "data_offset": 2048, 00:16:54.923 "data_size": 63488 00:16:54.923 }, 00:16:54.923 { 00:16:54.923 "name": "BaseBdev2", 00:16:54.923 "uuid": "57a36e4c-0fb1-47dd-8fba-927f42c0bdeb", 00:16:54.923 "is_configured": true, 00:16:54.923 "data_offset": 2048, 00:16:54.923 "data_size": 63488 00:16:54.923 } 00:16:54.923 ] 00:16:54.923 }' 00:16:54.923 11:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.923 11:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.490 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:55.490 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:55.490 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.490 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:55.749 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:55.749 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.749 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:56.008 [2024-07-13 11:28:30.641926] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:56.008 [2024-07-13 11:28:30.642172] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.008 [2024-07-13 11:28:30.709873] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.008 [2024-07-13 11:28:30.710100] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.008 [2024-07-13 11:28:30.710194] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:16:56.008 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:56.008 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:56.008 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.008 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 123866 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 123866 ']' 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 123866 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123866 00:16:56.267 killing process with pid 123866 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123866' 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 123866 00:16:56.267 11:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 123866 00:16:56.267 [2024-07-13 11:28:30.922898] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.267 [2024-07-13 11:28:30.923020] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:57.204 ************************************ 00:16:57.204 END TEST raid_state_function_test_sb 00:16:57.204 ************************************ 00:16:57.204 11:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:57.204 00:16:57.204 real 0m11.820s 00:16:57.204 user 0m21.034s 00:16:57.204 sys 0m1.383s 00:16:57.204 11:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:57.204 11:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.463 11:28:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:57.463 11:28:31 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:57.463 11:28:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:57.463 11:28:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.463 11:28:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:57.463 ************************************ 00:16:57.463 START TEST raid_superblock_test 00:16:57.463 ************************************ 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:57.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=124277 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 124277 /var/tmp/spdk-raid.sock 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 124277 ']' 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.463 11:28:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.463 [2024-07-13 11:28:32.071362] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
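For reference — the analogous condensed sketch for raid_superblock_test, which layers passthru bdevs over malloc bdevs before building the raid; socket path, names and the fixed pt UUID are taken from the invocations traced in this run and this is only a sketch of the traced sequence:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1                                                      # malloc2 is created the same way
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001            # pt1 wraps malloc1 with a fixed UUID (pt2 over malloc2 likewise)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s                                   # -s: write a superblock
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid'                                        # raid volume UUID used by the later checks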
00:16:57.463 [2024-07-13 11:28:32.071797] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124277 ] 00:16:57.722 [2024-07-13 11:28:32.223234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.722 [2024-07-13 11:28:32.414817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.982 [2024-07-13 11:28:32.599915] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:58.548 11:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.548 11:28:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:58.548 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:58.548 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:58.548 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:58.548 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:58.548 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:58.549 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:58.549 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:58.549 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:58.549 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:58.807 malloc1 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.807 [2024-07-13 11:28:33.490739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.807 [2024-07-13 11:28:33.491103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.807 [2024-07-13 11:28:33.491170] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:58.807 [2024-07-13 11:28:33.491464] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.807 [2024-07-13 11:28:33.493713] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.807 [2024-07-13 11:28:33.493876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.807 pt1 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:58.807 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:59.065 malloc2 00:16:59.065 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:59.323 [2024-07-13 11:28:33.910413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:59.323 [2024-07-13 11:28:33.910643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.323 [2024-07-13 11:28:33.910709] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:16:59.323 [2024-07-13 11:28:33.911002] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.323 [2024-07-13 11:28:33.913188] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.323 [2024-07-13 11:28:33.913349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:59.323 pt2 00:16:59.323 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:59.323 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:59.323 11:28:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:59.580 [2024-07-13 11:28:34.150528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:59.580 [2024-07-13 11:28:34.152308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:59.580 [2024-07-13 11:28:34.152616] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:16:59.580 [2024-07-13 11:28:34.152736] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:59.580 [2024-07-13 11:28:34.152893] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:59.580 [2024-07-13 11:28:34.153386] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:16:59.580 [2024-07-13 11:28:34.153510] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:16:59.580 [2024-07-13 11:28:34.153743] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.580 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.838 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.838 "name": "raid_bdev1", 00:16:59.838 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:16:59.838 "strip_size_kb": 0, 00:16:59.838 "state": "online", 00:16:59.838 "raid_level": "raid1", 00:16:59.838 "superblock": true, 00:16:59.838 "num_base_bdevs": 2, 00:16:59.838 "num_base_bdevs_discovered": 2, 00:16:59.839 "num_base_bdevs_operational": 2, 00:16:59.839 "base_bdevs_list": [ 00:16:59.839 { 00:16:59.839 "name": "pt1", 00:16:59.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.839 "is_configured": true, 00:16:59.839 "data_offset": 2048, 00:16:59.839 "data_size": 63488 00:16:59.839 }, 00:16:59.839 { 00:16:59.839 "name": "pt2", 00:16:59.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.839 "is_configured": true, 00:16:59.839 "data_offset": 2048, 00:16:59.839 "data_size": 63488 00:16:59.839 } 00:16:59.839 ] 00:16:59.839 }' 00:16:59.839 11:28:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.839 11:28:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.406 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:00.406 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:00.406 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:00.406 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:00.406 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:00.406 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:00.406 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:00.406 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:00.665 [2024-07-13 11:28:35.202877] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.665 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:00.665 "name": "raid_bdev1", 00:17:00.665 "aliases": [ 00:17:00.665 "3328a8cf-845d-408a-a3d8-c25147d8a2e8" 00:17:00.665 ], 00:17:00.665 "product_name": "Raid Volume", 00:17:00.665 "block_size": 512, 00:17:00.665 "num_blocks": 63488, 00:17:00.665 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:17:00.665 "assigned_rate_limits": { 00:17:00.665 "rw_ios_per_sec": 0, 00:17:00.665 "rw_mbytes_per_sec": 0, 00:17:00.665 "r_mbytes_per_sec": 0, 00:17:00.665 "w_mbytes_per_sec": 0 00:17:00.665 }, 
00:17:00.665 "claimed": false, 00:17:00.665 "zoned": false, 00:17:00.665 "supported_io_types": { 00:17:00.665 "read": true, 00:17:00.665 "write": true, 00:17:00.665 "unmap": false, 00:17:00.665 "flush": false, 00:17:00.665 "reset": true, 00:17:00.665 "nvme_admin": false, 00:17:00.665 "nvme_io": false, 00:17:00.665 "nvme_io_md": false, 00:17:00.665 "write_zeroes": true, 00:17:00.665 "zcopy": false, 00:17:00.665 "get_zone_info": false, 00:17:00.665 "zone_management": false, 00:17:00.665 "zone_append": false, 00:17:00.665 "compare": false, 00:17:00.665 "compare_and_write": false, 00:17:00.665 "abort": false, 00:17:00.665 "seek_hole": false, 00:17:00.665 "seek_data": false, 00:17:00.665 "copy": false, 00:17:00.665 "nvme_iov_md": false 00:17:00.665 }, 00:17:00.665 "memory_domains": [ 00:17:00.665 { 00:17:00.665 "dma_device_id": "system", 00:17:00.665 "dma_device_type": 1 00:17:00.665 }, 00:17:00.665 { 00:17:00.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.665 "dma_device_type": 2 00:17:00.665 }, 00:17:00.665 { 00:17:00.665 "dma_device_id": "system", 00:17:00.665 "dma_device_type": 1 00:17:00.665 }, 00:17:00.665 { 00:17:00.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.665 "dma_device_type": 2 00:17:00.665 } 00:17:00.665 ], 00:17:00.665 "driver_specific": { 00:17:00.665 "raid": { 00:17:00.665 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:17:00.665 "strip_size_kb": 0, 00:17:00.665 "state": "online", 00:17:00.665 "raid_level": "raid1", 00:17:00.665 "superblock": true, 00:17:00.665 "num_base_bdevs": 2, 00:17:00.665 "num_base_bdevs_discovered": 2, 00:17:00.665 "num_base_bdevs_operational": 2, 00:17:00.665 "base_bdevs_list": [ 00:17:00.665 { 00:17:00.665 "name": "pt1", 00:17:00.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.665 "is_configured": true, 00:17:00.665 "data_offset": 2048, 00:17:00.665 "data_size": 63488 00:17:00.665 }, 00:17:00.665 { 00:17:00.665 "name": "pt2", 00:17:00.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.665 "is_configured": true, 00:17:00.665 "data_offset": 2048, 00:17:00.665 "data_size": 63488 00:17:00.665 } 00:17:00.665 ] 00:17:00.665 } 00:17:00.665 } 00:17:00.665 }' 00:17:00.665 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:00.665 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:00.665 pt2' 00:17:00.665 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:00.665 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:00.665 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.924 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.924 "name": "pt1", 00:17:00.924 "aliases": [ 00:17:00.924 "00000000-0000-0000-0000-000000000001" 00:17:00.924 ], 00:17:00.924 "product_name": "passthru", 00:17:00.924 "block_size": 512, 00:17:00.924 "num_blocks": 65536, 00:17:00.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:00.924 "assigned_rate_limits": { 00:17:00.924 "rw_ios_per_sec": 0, 00:17:00.924 "rw_mbytes_per_sec": 0, 00:17:00.924 "r_mbytes_per_sec": 0, 00:17:00.924 "w_mbytes_per_sec": 0 00:17:00.924 }, 00:17:00.924 "claimed": true, 00:17:00.924 "claim_type": "exclusive_write", 00:17:00.924 "zoned": false, 00:17:00.924 
"supported_io_types": { 00:17:00.924 "read": true, 00:17:00.924 "write": true, 00:17:00.924 "unmap": true, 00:17:00.924 "flush": true, 00:17:00.924 "reset": true, 00:17:00.924 "nvme_admin": false, 00:17:00.924 "nvme_io": false, 00:17:00.924 "nvme_io_md": false, 00:17:00.924 "write_zeroes": true, 00:17:00.924 "zcopy": true, 00:17:00.924 "get_zone_info": false, 00:17:00.924 "zone_management": false, 00:17:00.924 "zone_append": false, 00:17:00.924 "compare": false, 00:17:00.924 "compare_and_write": false, 00:17:00.924 "abort": true, 00:17:00.924 "seek_hole": false, 00:17:00.924 "seek_data": false, 00:17:00.924 "copy": true, 00:17:00.924 "nvme_iov_md": false 00:17:00.924 }, 00:17:00.924 "memory_domains": [ 00:17:00.924 { 00:17:00.924 "dma_device_id": "system", 00:17:00.924 "dma_device_type": 1 00:17:00.924 }, 00:17:00.924 { 00:17:00.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.924 "dma_device_type": 2 00:17:00.924 } 00:17:00.924 ], 00:17:00.924 "driver_specific": { 00:17:00.924 "passthru": { 00:17:00.924 "name": "pt1", 00:17:00.924 "base_bdev_name": "malloc1" 00:17:00.924 } 00:17:00.924 } 00:17:00.924 }' 00:17:00.924 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.924 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.924 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.924 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.924 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:01.183 11:28:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:01.442 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:01.442 "name": "pt2", 00:17:01.442 "aliases": [ 00:17:01.442 "00000000-0000-0000-0000-000000000002" 00:17:01.442 ], 00:17:01.442 "product_name": "passthru", 00:17:01.442 "block_size": 512, 00:17:01.442 "num_blocks": 65536, 00:17:01.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:01.442 "assigned_rate_limits": { 00:17:01.442 "rw_ios_per_sec": 0, 00:17:01.442 "rw_mbytes_per_sec": 0, 00:17:01.442 "r_mbytes_per_sec": 0, 00:17:01.442 "w_mbytes_per_sec": 0 00:17:01.442 }, 00:17:01.442 "claimed": true, 00:17:01.442 "claim_type": "exclusive_write", 00:17:01.442 "zoned": false, 00:17:01.442 "supported_io_types": { 00:17:01.442 "read": true, 00:17:01.442 "write": true, 00:17:01.442 "unmap": true, 00:17:01.442 "flush": true, 00:17:01.442 
"reset": true, 00:17:01.442 "nvme_admin": false, 00:17:01.442 "nvme_io": false, 00:17:01.442 "nvme_io_md": false, 00:17:01.442 "write_zeroes": true, 00:17:01.442 "zcopy": true, 00:17:01.442 "get_zone_info": false, 00:17:01.442 "zone_management": false, 00:17:01.442 "zone_append": false, 00:17:01.442 "compare": false, 00:17:01.442 "compare_and_write": false, 00:17:01.442 "abort": true, 00:17:01.442 "seek_hole": false, 00:17:01.442 "seek_data": false, 00:17:01.442 "copy": true, 00:17:01.442 "nvme_iov_md": false 00:17:01.442 }, 00:17:01.442 "memory_domains": [ 00:17:01.442 { 00:17:01.442 "dma_device_id": "system", 00:17:01.442 "dma_device_type": 1 00:17:01.442 }, 00:17:01.442 { 00:17:01.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.442 "dma_device_type": 2 00:17:01.442 } 00:17:01.442 ], 00:17:01.442 "driver_specific": { 00:17:01.442 "passthru": { 00:17:01.442 "name": "pt2", 00:17:01.442 "base_bdev_name": "malloc2" 00:17:01.442 } 00:17:01.442 } 00:17:01.442 }' 00:17:01.442 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.442 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:01.701 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:01.701 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.701 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:01.701 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:01.701 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.701 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:01.701 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:01.701 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.701 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:01.959 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:01.959 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:01.959 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:02.217 [2024-07-13 11:28:36.727161] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.217 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3328a8cf-845d-408a-a3d8-c25147d8a2e8 00:17:02.217 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3328a8cf-845d-408a-a3d8-c25147d8a2e8 ']' 00:17:02.217 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:02.475 [2024-07-13 11:28:36.987014] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.475 [2024-07-13 11:28:36.987151] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.475 [2024-07-13 11:28:36.987292] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.475 [2024-07-13 11:28:36.987447] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:02.475 [2024-07-13 11:28:36.987549] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:17:02.475 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.475 11:28:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:02.475 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:02.475 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:02.475 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.475 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:02.732 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:02.732 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:02.991 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:02.991 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:03.250 11:28:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:03.510 [2024-07-13 11:28:38.003204] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:03.510 [2024-07-13 11:28:38.007929] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:03.510 [2024-07-13 11:28:38.008338] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:03.510 [2024-07-13 11:28:38.008773] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:03.510 [2024-07-13 11:28:38.009073] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:03.510 [2024-07-13 11:28:38.009291] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:17:03.510 request: 00:17:03.510 { 00:17:03.510 "name": "raid_bdev1", 00:17:03.510 "raid_level": "raid1", 00:17:03.510 "base_bdevs": [ 00:17:03.510 "malloc1", 00:17:03.510 "malloc2" 00:17:03.510 ], 00:17:03.510 "superblock": false, 00:17:03.510 "method": "bdev_raid_create", 00:17:03.510 "req_id": 1 00:17:03.510 } 00:17:03.510 Got JSON-RPC error response 00:17:03.510 response: 00:17:03.510 { 00:17:03.510 "code": -17, 00:17:03.510 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:03.510 } 00:17:03.510 11:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:03.510 11:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:03.510 11:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:03.510 11:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:03.510 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:03.510 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.510 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:03.510 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:03.510 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:03.768 [2024-07-13 11:28:38.404464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:03.768 [2024-07-13 11:28:38.404659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.769 [2024-07-13 11:28:38.404721] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:03.769 [2024-07-13 11:28:38.404839] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.769 [2024-07-13 11:28:38.406898] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.769 [2024-07-13 11:28:38.407111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:03.769 [2024-07-13 11:28:38.407326] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:03.769 [2024-07-13 11:28:38.407489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:03.769 pt1 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.769 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.027 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.027 "name": "raid_bdev1", 00:17:04.027 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:17:04.027 "strip_size_kb": 0, 00:17:04.027 "state": "configuring", 00:17:04.027 "raid_level": "raid1", 00:17:04.027 "superblock": true, 00:17:04.027 "num_base_bdevs": 2, 00:17:04.027 "num_base_bdevs_discovered": 1, 00:17:04.027 "num_base_bdevs_operational": 2, 00:17:04.027 "base_bdevs_list": [ 00:17:04.027 { 00:17:04.027 "name": "pt1", 00:17:04.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.027 "is_configured": true, 00:17:04.027 "data_offset": 2048, 00:17:04.027 "data_size": 63488 00:17:04.027 }, 00:17:04.027 { 00:17:04.027 "name": null, 00:17:04.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.027 "is_configured": false, 00:17:04.027 "data_offset": 2048, 00:17:04.027 "data_size": 63488 00:17:04.027 } 00:17:04.027 ] 00:17:04.027 }' 00:17:04.027 11:28:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.027 11:28:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.592 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:04.592 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:04.592 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:04.592 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:04.851 [2024-07-13 11:28:39.452705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.851 [2024-07-13 11:28:39.453068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.851 [2024-07-13 11:28:39.453284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:04.851 [2024-07-13 11:28:39.453478] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.851 [2024-07-13 11:28:39.454198] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.851 [2024-07-13 11:28:39.454460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.851 [2024-07-13 11:28:39.454741] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:04.851 [2024-07-13 11:28:39.455030] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:04.851 [2024-07-13 11:28:39.455435] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:17:04.851 [2024-07-13 11:28:39.455622] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.851 [2024-07-13 11:28:39.455893] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:17:04.851 [2024-07-13 11:28:39.456374] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:17:04.851 [2024-07-13 11:28:39.456569] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:17:04.851 [2024-07-13 11:28:39.456932] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.851 pt2 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.851 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.110 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.110 "name": "raid_bdev1", 00:17:05.110 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:17:05.110 "strip_size_kb": 0, 00:17:05.110 "state": "online", 00:17:05.110 "raid_level": "raid1", 00:17:05.110 "superblock": true, 00:17:05.110 "num_base_bdevs": 2, 00:17:05.110 "num_base_bdevs_discovered": 2, 00:17:05.110 "num_base_bdevs_operational": 2, 00:17:05.110 "base_bdevs_list": [ 00:17:05.110 { 00:17:05.110 "name": "pt1", 00:17:05.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.110 "is_configured": true, 00:17:05.110 "data_offset": 2048, 00:17:05.110 "data_size": 63488 00:17:05.110 }, 00:17:05.110 { 
00:17:05.110 "name": "pt2", 00:17:05.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.110 "is_configured": true, 00:17:05.110 "data_offset": 2048, 00:17:05.110 "data_size": 63488 00:17:05.110 } 00:17:05.110 ] 00:17:05.110 }' 00:17:05.110 11:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.110 11:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.676 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:05.676 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:05.676 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:05.676 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:05.676 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:05.676 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:05.676 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:05.676 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:05.947 [2024-07-13 11:28:40.461305] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.947 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:05.947 "name": "raid_bdev1", 00:17:05.947 "aliases": [ 00:17:05.947 "3328a8cf-845d-408a-a3d8-c25147d8a2e8" 00:17:05.947 ], 00:17:05.947 "product_name": "Raid Volume", 00:17:05.947 "block_size": 512, 00:17:05.947 "num_blocks": 63488, 00:17:05.947 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:17:05.947 "assigned_rate_limits": { 00:17:05.947 "rw_ios_per_sec": 0, 00:17:05.947 "rw_mbytes_per_sec": 0, 00:17:05.947 "r_mbytes_per_sec": 0, 00:17:05.947 "w_mbytes_per_sec": 0 00:17:05.947 }, 00:17:05.947 "claimed": false, 00:17:05.947 "zoned": false, 00:17:05.947 "supported_io_types": { 00:17:05.947 "read": true, 00:17:05.947 "write": true, 00:17:05.947 "unmap": false, 00:17:05.947 "flush": false, 00:17:05.947 "reset": true, 00:17:05.947 "nvme_admin": false, 00:17:05.947 "nvme_io": false, 00:17:05.947 "nvme_io_md": false, 00:17:05.947 "write_zeroes": true, 00:17:05.947 "zcopy": false, 00:17:05.947 "get_zone_info": false, 00:17:05.947 "zone_management": false, 00:17:05.947 "zone_append": false, 00:17:05.947 "compare": false, 00:17:05.947 "compare_and_write": false, 00:17:05.947 "abort": false, 00:17:05.947 "seek_hole": false, 00:17:05.947 "seek_data": false, 00:17:05.947 "copy": false, 00:17:05.947 "nvme_iov_md": false 00:17:05.947 }, 00:17:05.947 "memory_domains": [ 00:17:05.947 { 00:17:05.947 "dma_device_id": "system", 00:17:05.947 "dma_device_type": 1 00:17:05.947 }, 00:17:05.947 { 00:17:05.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.947 "dma_device_type": 2 00:17:05.947 }, 00:17:05.947 { 00:17:05.947 "dma_device_id": "system", 00:17:05.947 "dma_device_type": 1 00:17:05.947 }, 00:17:05.947 { 00:17:05.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.947 "dma_device_type": 2 00:17:05.947 } 00:17:05.947 ], 00:17:05.947 "driver_specific": { 00:17:05.947 "raid": { 00:17:05.947 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:17:05.947 "strip_size_kb": 0, 00:17:05.947 "state": "online", 00:17:05.947 "raid_level": "raid1", 
00:17:05.947 "superblock": true, 00:17:05.947 "num_base_bdevs": 2, 00:17:05.947 "num_base_bdevs_discovered": 2, 00:17:05.947 "num_base_bdevs_operational": 2, 00:17:05.947 "base_bdevs_list": [ 00:17:05.947 { 00:17:05.947 "name": "pt1", 00:17:05.947 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.947 "is_configured": true, 00:17:05.947 "data_offset": 2048, 00:17:05.947 "data_size": 63488 00:17:05.947 }, 00:17:05.947 { 00:17:05.947 "name": "pt2", 00:17:05.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.947 "is_configured": true, 00:17:05.947 "data_offset": 2048, 00:17:05.947 "data_size": 63488 00:17:05.947 } 00:17:05.947 ] 00:17:05.947 } 00:17:05.947 } 00:17:05.947 }' 00:17:05.947 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:05.947 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:05.947 pt2' 00:17:05.947 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:05.947 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:05.947 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.231 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.231 "name": "pt1", 00:17:06.231 "aliases": [ 00:17:06.231 "00000000-0000-0000-0000-000000000001" 00:17:06.231 ], 00:17:06.231 "product_name": "passthru", 00:17:06.231 "block_size": 512, 00:17:06.231 "num_blocks": 65536, 00:17:06.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.231 "assigned_rate_limits": { 00:17:06.231 "rw_ios_per_sec": 0, 00:17:06.231 "rw_mbytes_per_sec": 0, 00:17:06.231 "r_mbytes_per_sec": 0, 00:17:06.231 "w_mbytes_per_sec": 0 00:17:06.231 }, 00:17:06.231 "claimed": true, 00:17:06.231 "claim_type": "exclusive_write", 00:17:06.231 "zoned": false, 00:17:06.231 "supported_io_types": { 00:17:06.231 "read": true, 00:17:06.231 "write": true, 00:17:06.231 "unmap": true, 00:17:06.231 "flush": true, 00:17:06.231 "reset": true, 00:17:06.231 "nvme_admin": false, 00:17:06.231 "nvme_io": false, 00:17:06.231 "nvme_io_md": false, 00:17:06.231 "write_zeroes": true, 00:17:06.231 "zcopy": true, 00:17:06.231 "get_zone_info": false, 00:17:06.231 "zone_management": false, 00:17:06.231 "zone_append": false, 00:17:06.231 "compare": false, 00:17:06.231 "compare_and_write": false, 00:17:06.231 "abort": true, 00:17:06.231 "seek_hole": false, 00:17:06.231 "seek_data": false, 00:17:06.231 "copy": true, 00:17:06.231 "nvme_iov_md": false 00:17:06.231 }, 00:17:06.231 "memory_domains": [ 00:17:06.231 { 00:17:06.231 "dma_device_id": "system", 00:17:06.231 "dma_device_type": 1 00:17:06.231 }, 00:17:06.231 { 00:17:06.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.231 "dma_device_type": 2 00:17:06.231 } 00:17:06.231 ], 00:17:06.231 "driver_specific": { 00:17:06.231 "passthru": { 00:17:06.231 "name": "pt1", 00:17:06.231 "base_bdev_name": "malloc1" 00:17:06.231 } 00:17:06.231 } 00:17:06.231 }' 00:17:06.231 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.231 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.231 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:06.231 11:28:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.231 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:06.493 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:06.493 11:28:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.493 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:06.493 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:06.493 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.493 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:06.493 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:06.493 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:06.493 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:06.493 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:06.751 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:06.751 "name": "pt2", 00:17:06.751 "aliases": [ 00:17:06.751 "00000000-0000-0000-0000-000000000002" 00:17:06.751 ], 00:17:06.751 "product_name": "passthru", 00:17:06.751 "block_size": 512, 00:17:06.751 "num_blocks": 65536, 00:17:06.751 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.751 "assigned_rate_limits": { 00:17:06.751 "rw_ios_per_sec": 0, 00:17:06.751 "rw_mbytes_per_sec": 0, 00:17:06.751 "r_mbytes_per_sec": 0, 00:17:06.751 "w_mbytes_per_sec": 0 00:17:06.751 }, 00:17:06.751 "claimed": true, 00:17:06.751 "claim_type": "exclusive_write", 00:17:06.751 "zoned": false, 00:17:06.751 "supported_io_types": { 00:17:06.751 "read": true, 00:17:06.751 "write": true, 00:17:06.751 "unmap": true, 00:17:06.751 "flush": true, 00:17:06.751 "reset": true, 00:17:06.751 "nvme_admin": false, 00:17:06.751 "nvme_io": false, 00:17:06.751 "nvme_io_md": false, 00:17:06.751 "write_zeroes": true, 00:17:06.751 "zcopy": true, 00:17:06.751 "get_zone_info": false, 00:17:06.751 "zone_management": false, 00:17:06.751 "zone_append": false, 00:17:06.751 "compare": false, 00:17:06.751 "compare_and_write": false, 00:17:06.751 "abort": true, 00:17:06.751 "seek_hole": false, 00:17:06.751 "seek_data": false, 00:17:06.751 "copy": true, 00:17:06.751 "nvme_iov_md": false 00:17:06.751 }, 00:17:06.751 "memory_domains": [ 00:17:06.751 { 00:17:06.751 "dma_device_id": "system", 00:17:06.751 "dma_device_type": 1 00:17:06.751 }, 00:17:06.751 { 00:17:06.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.751 "dma_device_type": 2 00:17:06.751 } 00:17:06.751 ], 00:17:06.751 "driver_specific": { 00:17:06.751 "passthru": { 00:17:06.751 "name": "pt2", 00:17:06.751 "base_bdev_name": "malloc2" 00:17:06.751 } 00:17:06.751 } 00:17:06.751 }' 00:17:06.751 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:06.751 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:07.010 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:07.010 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.010 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:07.010 
11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:07.010 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.010 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:07.010 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:07.010 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.268 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:07.268 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:07.268 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:07.268 11:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:07.526 [2024-07-13 11:28:42.073825] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3328a8cf-845d-408a-a3d8-c25147d8a2e8 '!=' 3328a8cf-845d-408a-a3d8-c25147d8a2e8 ']' 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:07.526 [2024-07-13 11:28:42.253674] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.526 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.784 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.784 "name": "raid_bdev1", 00:17:07.784 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:17:07.784 "strip_size_kb": 0, 00:17:07.784 "state": "online", 00:17:07.784 "raid_level": "raid1", 00:17:07.784 
"superblock": true, 00:17:07.784 "num_base_bdevs": 2, 00:17:07.784 "num_base_bdevs_discovered": 1, 00:17:07.784 "num_base_bdevs_operational": 1, 00:17:07.784 "base_bdevs_list": [ 00:17:07.784 { 00:17:07.784 "name": null, 00:17:07.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.784 "is_configured": false, 00:17:07.784 "data_offset": 2048, 00:17:07.784 "data_size": 63488 00:17:07.784 }, 00:17:07.784 { 00:17:07.784 "name": "pt2", 00:17:07.784 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.784 "is_configured": true, 00:17:07.784 "data_offset": 2048, 00:17:07.784 "data_size": 63488 00:17:07.784 } 00:17:07.784 ] 00:17:07.784 }' 00:17:07.784 11:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.784 11:28:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.731 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:08.731 [2024-07-13 11:28:43.334030] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.731 [2024-07-13 11:28:43.334198] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.731 [2024-07-13 11:28:43.334390] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.731 [2024-07-13 11:28:43.334553] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.731 [2024-07-13 11:28:43.334658] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:17:08.731 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.731 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:17:08.989 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.247 [2024-07-13 11:28:43.902206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.247 [2024-07-13 11:28:43.902408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.247 [2024-07-13 11:28:43.902487] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:09.247 [2024-07-13 11:28:43.902744] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.247 [2024-07-13 11:28:43.905117] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.247 [2024-07-13 11:28:43.905322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.247 [2024-07-13 11:28:43.905532] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:09.247 [2024-07-13 11:28:43.905701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.247 [2024-07-13 11:28:43.905953] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:17:09.247 [2024-07-13 11:28:43.906087] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:09.247 [2024-07-13 11:28:43.906220] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:09.247 [2024-07-13 11:28:43.906673] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:17:09.247 [2024-07-13 11:28:43.906775] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:17:09.247 [2024-07-13 11:28:43.907038] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.247 pt2 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.247 11:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.505 11:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.505 "name": "raid_bdev1", 00:17:09.505 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:17:09.505 "strip_size_kb": 0, 00:17:09.505 "state": "online", 00:17:09.505 "raid_level": "raid1", 00:17:09.505 "superblock": true, 00:17:09.505 "num_base_bdevs": 2, 00:17:09.505 "num_base_bdevs_discovered": 1, 00:17:09.505 "num_base_bdevs_operational": 1, 00:17:09.505 "base_bdevs_list": [ 00:17:09.505 { 00:17:09.505 "name": null, 00:17:09.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.505 "is_configured": false, 00:17:09.505 "data_offset": 
2048, 00:17:09.505 "data_size": 63488 00:17:09.505 }, 00:17:09.505 { 00:17:09.505 "name": "pt2", 00:17:09.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.505 "is_configured": true, 00:17:09.505 "data_offset": 2048, 00:17:09.505 "data_size": 63488 00:17:09.505 } 00:17:09.505 ] 00:17:09.505 }' 00:17:09.505 11:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.505 11:28:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:10.071 11:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:10.329 [2024-07-13 11:28:44.987751] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.329 [2024-07-13 11:28:44.987900] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:10.329 [2024-07-13 11:28:44.988087] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.329 [2024-07-13 11:28:44.988233] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.329 [2024-07-13 11:28:44.988336] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:17:10.329 11:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.329 11:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:10.587 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:10.587 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:10.587 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:10.587 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:10.846 [2024-07-13 11:28:45.431591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:10.846 [2024-07-13 11:28:45.431817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.846 [2024-07-13 11:28:45.431889] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:10.846 [2024-07-13 11:28:45.432154] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.846 [2024-07-13 11:28:45.434220] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.846 [2024-07-13 11:28:45.434401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:10.846 [2024-07-13 11:28:45.434615] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:10.846 [2024-07-13 11:28:45.434773] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:10.846 [2024-07-13 11:28:45.435074] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:10.846 [2024-07-13 11:28:45.435263] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:10.846 [2024-07-13 11:28:45.435308] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, 
state configuring 00:17:10.846 [2024-07-13 11:28:45.435566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.846 [2024-07-13 11:28:45.435782] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:17:10.846 [2024-07-13 11:28:45.435888] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:10.846 [2024-07-13 11:28:45.436017] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:10.846 [2024-07-13 11:28:45.436442] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:17:10.846 [2024-07-13 11:28:45.436558] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:17:10.846 [2024-07-13 11:28:45.436822] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.846 pt1 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.846 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.104 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.104 "name": "raid_bdev1", 00:17:11.104 "uuid": "3328a8cf-845d-408a-a3d8-c25147d8a2e8", 00:17:11.104 "strip_size_kb": 0, 00:17:11.104 "state": "online", 00:17:11.104 "raid_level": "raid1", 00:17:11.104 "superblock": true, 00:17:11.104 "num_base_bdevs": 2, 00:17:11.104 "num_base_bdevs_discovered": 1, 00:17:11.104 "num_base_bdevs_operational": 1, 00:17:11.104 "base_bdevs_list": [ 00:17:11.104 { 00:17:11.104 "name": null, 00:17:11.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.104 "is_configured": false, 00:17:11.104 "data_offset": 2048, 00:17:11.104 "data_size": 63488 00:17:11.104 }, 00:17:11.104 { 00:17:11.104 "name": "pt2", 00:17:11.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:11.104 "is_configured": true, 00:17:11.104 "data_offset": 2048, 00:17:11.104 "data_size": 63488 00:17:11.104 } 00:17:11.104 ] 00:17:11.104 }' 00:17:11.104 11:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.104 11:28:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:11.670 11:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:11.670 11:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:11.928 11:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:11.928 11:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:11.928 11:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:12.187 [2024-07-13 11:28:46.784317] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3328a8cf-845d-408a-a3d8-c25147d8a2e8 '!=' 3328a8cf-845d-408a-a3d8-c25147d8a2e8 ']' 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 124277 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 124277 ']' 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 124277 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124277 00:17:12.187 killing process with pid 124277 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124277' 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 124277 00:17:12.187 11:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 124277 00:17:12.187 [2024-07-13 11:28:46.816100] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:12.187 [2024-07-13 11:28:46.816207] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.187 [2024-07-13 11:28:46.816256] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:12.187 [2024-07-13 11:28:46.816266] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:17:12.446 [2024-07-13 11:28:46.943394] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.381 ************************************ 00:17:13.381 END TEST raid_superblock_test 00:17:13.381 ************************************ 00:17:13.381 11:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:13.381 00:17:13.381 real 0m15.845s 00:17:13.381 user 0m29.348s 00:17:13.381 sys 0m1.777s 00:17:13.381 11:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.381 11:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.381 11:28:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:13.381 
11:28:47 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:17:13.381 11:28:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:13.381 11:28:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:13.381 11:28:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.381 ************************************ 00:17:13.381 START TEST raid_read_error_test 00:17:13.381 ************************************ 00:17:13.381 11:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:17:13.381 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:17:13.381 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:13.381 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:13.381 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:13.381 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:13.381 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:13.381 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.77susfoL5E 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=124822 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 124822 /var/tmp/spdk-raid.sock 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:13.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
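The raid_read_error_test prologue above starts bdevperf as a long-lived RPC target on a private socket and only proceeds once the app is listening. A rough standalone equivalent of that launch is sketched below; the binary path, socket and workload flags are copied from this run, while the stderr redirection and the polling loop standing in for the waitforlisten helper are assumptions.

# Sketch of the bdevperf launch performed above (not the autotest helper itself).
SPDK=/home/vagrant/spdk_repo/spdk
LOG=$(mktemp -p /raidtest)                      # bdevperf_log, as in the trace
"$SPDK/build/examples/bdevperf" -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 2> "$LOG" &
raid_pid=$!
# crude stand-in for waitforlisten: poll until the app answers RPCs on its socket
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done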
00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 124822 ']' 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.382 11:28:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:13.382 [2024-07-13 11:28:47.987158] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:13.382 [2024-07-13 11:28:47.987580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124822 ] 00:17:13.640 [2024-07-13 11:28:48.134342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.640 [2024-07-13 11:28:48.313607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.899 [2024-07-13 11:28:48.478192] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:14.466 11:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.466 11:28:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:14.466 11:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:14.466 11:28:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:14.725 BaseBdev1_malloc 00:17:14.725 11:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:14.725 true 00:17:14.725 11:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:14.983 [2024-07-13 11:28:49.623949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:14.983 [2024-07-13 11:28:49.624195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.983 [2024-07-13 11:28:49.624369] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:14.983 [2024-07-13 11:28:49.624497] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.983 [2024-07-13 11:28:49.627081] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.983 [2024-07-13 11:28:49.627258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:14.983 BaseBdev1 00:17:14.983 11:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:14.983 11:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 
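Each base bdev in this test is a three-layer stack — malloc, error-injection, passthru — so that read failures can later be injected underneath the raid without touching the raid code itself. A condensed sketch of the RPC sequence traced above and continued below; the loop form is an assumption (the script issues the calls per bdev), the commands and names are as logged.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2; do
    $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"      # backing malloc bdev
    $RPC bdev_error_create "BaseBdev${i}_malloc"                 # wraps it as EE_BaseBdev${i}_malloc
    $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# assemble both passthru bdevs into a raid1 volume with an on-disk superblock (-s)
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

Keeping the error bdev below the passthru layer means the raid module only ever sees an ordinary base bdev start failing, which is the scenario raid_read_error_test exercises.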
00:17:15.242 BaseBdev2_malloc 00:17:15.242 11:28:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:15.501 true 00:17:15.501 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:15.760 [2024-07-13 11:28:50.270749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:15.760 [2024-07-13 11:28:50.270980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.760 [2024-07-13 11:28:50.271124] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:15.760 [2024-07-13 11:28:50.271242] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.760 [2024-07-13 11:28:50.273493] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.760 [2024-07-13 11:28:50.273657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:15.760 BaseBdev2 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:15.760 [2024-07-13 11:28:50.494825] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:15.760 [2024-07-13 11:28:50.496841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.760 [2024-07-13 11:28:50.497200] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:15.760 [2024-07-13 11:28:50.497316] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:15.760 [2024-07-13 11:28:50.497456] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:15.760 [2024-07-13 11:28:50.497940] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:15.760 [2024-07-13 11:28:50.498077] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:15.760 [2024-07-13 11:28:50.498301] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.760 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:17:16.019 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.019 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.019 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.019 "name": "raid_bdev1", 00:17:16.019 "uuid": "b72330ca-4281-4f84-8390-3df7eee84705", 00:17:16.019 "strip_size_kb": 0, 00:17:16.019 "state": "online", 00:17:16.019 "raid_level": "raid1", 00:17:16.019 "superblock": true, 00:17:16.019 "num_base_bdevs": 2, 00:17:16.019 "num_base_bdevs_discovered": 2, 00:17:16.019 "num_base_bdevs_operational": 2, 00:17:16.019 "base_bdevs_list": [ 00:17:16.019 { 00:17:16.019 "name": "BaseBdev1", 00:17:16.019 "uuid": "2d6ca329-1dde-53af-9aa0-dd1d9ac4da89", 00:17:16.019 "is_configured": true, 00:17:16.019 "data_offset": 2048, 00:17:16.019 "data_size": 63488 00:17:16.019 }, 00:17:16.019 { 00:17:16.019 "name": "BaseBdev2", 00:17:16.019 "uuid": "f70f64da-a3bb-587b-8cb9-b10c5fd5352e", 00:17:16.019 "is_configured": true, 00:17:16.019 "data_offset": 2048, 00:17:16.019 "data_size": 63488 00:17:16.019 } 00:17:16.019 ] 00:17:16.019 }' 00:17:16.019 11:28:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.019 11:28:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.587 11:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:16.587 11:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:16.846 [2024-07-13 11:28:51.400081] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:17.783 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:18.041 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
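With both members in place, the log shows the raid1 assembly, a state check over RPC, and the step that gives the test its name: read failures are injected into the first member's error bdev while the waiting bdevperf instance runs its workload. A sketch of those steps in the order the log records them, reusing the rpc helper from the sketch above (jq filter, bdev names and the perform_tests helper path are copied from the log):

# Assemble the two passthru bdevs into a raid1 volume with an on-disk superblock (-s).
rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

# Same check as verify_raid_bdev_state: dump the raid bdev and inspect state, level and member counts.
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

# Kick off the bdevperf workload in the background, then inject read failures
# into the first member's error bdev while I/O is in flight.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
sleep 1
rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure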
00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.042 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.311 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.311 "name": "raid_bdev1", 00:17:18.312 "uuid": "b72330ca-4281-4f84-8390-3df7eee84705", 00:17:18.312 "strip_size_kb": 0, 00:17:18.312 "state": "online", 00:17:18.312 "raid_level": "raid1", 00:17:18.312 "superblock": true, 00:17:18.312 "num_base_bdevs": 2, 00:17:18.312 "num_base_bdevs_discovered": 2, 00:17:18.312 "num_base_bdevs_operational": 2, 00:17:18.312 "base_bdevs_list": [ 00:17:18.312 { 00:17:18.312 "name": "BaseBdev1", 00:17:18.312 "uuid": "2d6ca329-1dde-53af-9aa0-dd1d9ac4da89", 00:17:18.312 "is_configured": true, 00:17:18.312 "data_offset": 2048, 00:17:18.312 "data_size": 63488 00:17:18.312 }, 00:17:18.312 { 00:17:18.312 "name": "BaseBdev2", 00:17:18.312 "uuid": "f70f64da-a3bb-587b-8cb9-b10c5fd5352e", 00:17:18.312 "is_configured": true, 00:17:18.312 "data_offset": 2048, 00:17:18.312 "data_size": 63488 00:17:18.312 } 00:17:18.312 ] 00:17:18.312 }' 00:17:18.312 11:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.312 11:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.881 11:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:19.139 [2024-07-13 11:28:53.702270] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.139 [2024-07-13 11:28:53.702626] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.139 [2024-07-13 11:28:53.705293] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.139 [2024-07-13 11:28:53.705473] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.139 [2024-07-13 11:28:53.705596] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.139 [2024-07-13 11:28:53.705785] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:19.139 0 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 124822 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 124822 ']' 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 124822 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124822 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
124822' 00:17:19.139 killing process with pid 124822 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 124822 00:17:19.139 11:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 124822 00:17:19.139 [2024-07-13 11:28:53.730732] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:19.139 [2024-07-13 11:28:53.819638] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.77susfoL5E 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:20.516 ************************************ 00:17:20.516 END TEST raid_read_error_test 00:17:20.516 ************************************ 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:20.516 00:17:20.516 real 0m6.973s 00:17:20.516 user 0m10.609s 00:17:20.516 sys 0m0.720s 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:20.516 11:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.516 11:28:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:20.516 11:28:54 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:17:20.516 11:28:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:20.516 11:28:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:20.516 11:28:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.516 ************************************ 00:17:20.516 START TEST raid_write_error_test 00:17:20.516 ************************************ 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 
00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.P5ua7jnkJC 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=125032 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 125032 /var/tmp/spdk-raid.sock 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 125032 ']' 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:20.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.516 11:28:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.516 [2024-07-13 11:28:55.030596] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
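The write-error variant launches its own bdevperf instance the same way; the command line above runs it with -z so it idles until a perform_tests RPC arrives, -f so it keeps going across injected failures, and a mktemp'd log file that the test greps at the end. A sketch of that launch and of the final pass check (the redirection into the temp log is an assumption, since xtrace does not show redirections; the awk field is the one the script stores as fail_per_s):

bdevperf_log=$(mktemp -p /raidtest)          # e.g. /raidtest/tmp.P5ua7jnkJC in this run

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
    -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &

# ...base bdev setup, 'write failure' injection and perform_tests as in the read variant...

# Pass criterion: injected base-bdev errors must not surface as raid-level failures,
# so the failure-rate column for raid_bdev1 has to read 0.00 for a redundant level such as raid1.
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s == "0.00" ]]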
00:17:20.516 [2024-07-13 11:28:55.030813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125032 ] 00:17:20.516 [2024-07-13 11:28:55.198090] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.776 [2024-07-13 11:28:55.395142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.035 [2024-07-13 11:28:55.581686] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.294 11:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.294 11:28:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:21.294 11:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:21.294 11:28:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:21.552 BaseBdev1_malloc 00:17:21.552 11:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:21.811 true 00:17:21.811 11:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:21.811 [2024-07-13 11:28:56.527514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:21.811 [2024-07-13 11:28:56.527618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.811 [2024-07-13 11:28:56.527654] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:21.811 [2024-07-13 11:28:56.527675] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.811 [2024-07-13 11:28:56.529550] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.811 [2024-07-13 11:28:56.529596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:21.811 BaseBdev1 00:17:21.811 11:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:21.811 11:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:22.069 BaseBdev2_malloc 00:17:22.069 11:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:22.327 true 00:17:22.327 11:28:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:22.586 [2024-07-13 11:28:57.120736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:22.586 [2024-07-13 11:28:57.120825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.586 [2024-07-13 11:28:57.120863] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:22.586 [2024-07-13 
11:28:57.120883] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.586 [2024-07-13 11:28:57.123093] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.586 [2024-07-13 11:28:57.123140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:22.586 BaseBdev2 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:17:22.586 [2024-07-13 11:28:57.304801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:22.586 [2024-07-13 11:28:57.306703] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:22.586 [2024-07-13 11:28:57.306935] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:22.586 [2024-07-13 11:28:57.306950] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:22.586 [2024-07-13 11:28:57.307058] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:22.586 [2024-07-13 11:28:57.307416] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:22.586 [2024-07-13 11:28:57.307430] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:22.586 [2024-07-13 11:28:57.307560] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.586 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.844 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.844 "name": "raid_bdev1", 00:17:22.844 "uuid": "5a9db651-6886-46c1-a1a1-e539bd055b51", 00:17:22.844 "strip_size_kb": 0, 00:17:22.844 "state": "online", 00:17:22.844 "raid_level": "raid1", 00:17:22.844 "superblock": true, 00:17:22.844 "num_base_bdevs": 2, 00:17:22.844 "num_base_bdevs_discovered": 2, 00:17:22.844 "num_base_bdevs_operational": 2, 00:17:22.844 "base_bdevs_list": [ 00:17:22.844 { 00:17:22.844 "name": 
"BaseBdev1", 00:17:22.844 "uuid": "a14afd84-7f3c-5ca8-90fe-7d5acdf93e55", 00:17:22.844 "is_configured": true, 00:17:22.844 "data_offset": 2048, 00:17:22.844 "data_size": 63488 00:17:22.844 }, 00:17:22.844 { 00:17:22.844 "name": "BaseBdev2", 00:17:22.844 "uuid": "ac0d4dd1-0ec8-557a-b8de-03849db43c98", 00:17:22.844 "is_configured": true, 00:17:22.844 "data_offset": 2048, 00:17:22.844 "data_size": 63488 00:17:22.844 } 00:17:22.844 ] 00:17:22.844 }' 00:17:22.844 11:28:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.844 11:28:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.780 11:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:23.781 11:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:23.781 [2024-07-13 11:28:58.282046] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:24.718 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:24.718 [2024-07-13 11:28:59.455320] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:24.718 [2024-07-13 11:28:59.455460] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:24.718 [2024-07-13 11:28:59.455704] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.976 
11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:24.976 "name": "raid_bdev1", 00:17:24.976 "uuid": "5a9db651-6886-46c1-a1a1-e539bd055b51", 00:17:24.976 "strip_size_kb": 0, 00:17:24.976 "state": "online", 00:17:24.976 "raid_level": "raid1", 00:17:24.976 "superblock": true, 00:17:24.976 "num_base_bdevs": 2, 00:17:24.976 "num_base_bdevs_discovered": 1, 00:17:24.976 "num_base_bdevs_operational": 1, 00:17:24.976 "base_bdevs_list": [ 00:17:24.976 { 00:17:24.976 "name": null, 00:17:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.976 "is_configured": false, 00:17:24.976 "data_offset": 2048, 00:17:24.976 "data_size": 63488 00:17:24.976 }, 00:17:24.976 { 00:17:24.976 "name": "BaseBdev2", 00:17:24.976 "uuid": "ac0d4dd1-0ec8-557a-b8de-03849db43c98", 00:17:24.976 "is_configured": true, 00:17:24.976 "data_offset": 2048, 00:17:24.976 "data_size": 63488 00:17:24.976 } 00:17:24.976 ] 00:17:24.976 }' 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:24.976 11:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:25.912 [2024-07-13 11:29:00.580701] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.912 [2024-07-13 11:29:00.580737] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.912 [2024-07-13 11:29:00.583169] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.912 [2024-07-13 11:29:00.583219] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.912 [2024-07-13 11:29:00.583271] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.912 [2024-07-13 11:29:00.583281] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:25.912 0 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 125032 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 125032 ']' 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 125032 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125032 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:25.912 killing process with pid 125032 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125032' 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 125032 00:17:25.912 11:29:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 125032 00:17:25.912 [2024-07-13 11:29:00.613856] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:26.171 [2024-07-13 
11:29:00.699825] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.107 11:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.P5ua7jnkJC 00:17:27.107 11:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:27.107 11:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:27.107 11:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:27.107 11:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:27.107 11:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:27.107 11:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:27.107 11:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:27.108 00:17:27.108 real 0m6.827s 00:17:27.108 user 0m10.308s 00:17:27.108 sys 0m0.708s 00:17:27.108 11:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:27.108 11:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.108 ************************************ 00:17:27.108 END TEST raid_write_error_test 00:17:27.108 ************************************ 00:17:27.108 11:29:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:27.108 11:29:01 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:17:27.108 11:29:01 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:27.108 11:29:01 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:17:27.108 11:29:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:27.108 11:29:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:27.108 11:29:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.108 ************************************ 00:17:27.108 START TEST raid_state_function_test 00:17:27.108 ************************************ 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=125213 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 125213' 00:17:27.108 Process raid pid: 125213 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 125213 /var/tmp/spdk-raid.sock 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 125213 ']' 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:27.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:27.108 11:29:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.366 [2024-07-13 11:29:01.896548] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
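raid_state_function_test drives the raid module without bdevperf: it starts the lightweight bdev_svc app shown above and then creates a raid0 volume whose three base bdevs do not exist yet, which should leave the volume in the "configuring" state until members are added, as the RPC output that follows confirms. A sketch of that opening step with the flags from the log, again reusing the rpc helper from the earlier sketch:

# Minimal bdev service app used by the state-function test.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &

# Creating the raid0 volume before any base bdev exists is accepted; the volume stays
# in the "configuring" state with num_base_bdevs_discovered == 0.
rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'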
00:17:27.366 [2024-07-13 11:29:01.896711] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.366 [2024-07-13 11:29:02.049803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.626 [2024-07-13 11:29:02.242605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.885 [2024-07-13 11:29:02.456676] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.143 11:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.143 11:29:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:17:28.143 11:29:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:28.401 [2024-07-13 11:29:03.076573] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:28.401 [2024-07-13 11:29:03.076673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:28.401 [2024-07-13 11:29:03.076687] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:28.401 [2024-07-13 11:29:03.076715] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:28.401 [2024-07-13 11:29:03.076724] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:28.401 [2024-07-13 11:29:03.076739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.401 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.658 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.658 "name": "Existed_Raid", 00:17:28.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.658 
"strip_size_kb": 64, 00:17:28.658 "state": "configuring", 00:17:28.658 "raid_level": "raid0", 00:17:28.658 "superblock": false, 00:17:28.658 "num_base_bdevs": 3, 00:17:28.658 "num_base_bdevs_discovered": 0, 00:17:28.658 "num_base_bdevs_operational": 3, 00:17:28.658 "base_bdevs_list": [ 00:17:28.658 { 00:17:28.658 "name": "BaseBdev1", 00:17:28.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.658 "is_configured": false, 00:17:28.658 "data_offset": 0, 00:17:28.658 "data_size": 0 00:17:28.659 }, 00:17:28.659 { 00:17:28.659 "name": "BaseBdev2", 00:17:28.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.659 "is_configured": false, 00:17:28.659 "data_offset": 0, 00:17:28.659 "data_size": 0 00:17:28.659 }, 00:17:28.659 { 00:17:28.659 "name": "BaseBdev3", 00:17:28.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.659 "is_configured": false, 00:17:28.659 "data_offset": 0, 00:17:28.659 "data_size": 0 00:17:28.659 } 00:17:28.659 ] 00:17:28.659 }' 00:17:28.659 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.659 11:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.223 11:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:29.481 [2024-07-13 11:29:04.096655] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:29.481 [2024-07-13 11:29:04.096693] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:29.481 11:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:29.738 [2024-07-13 11:29:04.372719] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:29.738 [2024-07-13 11:29:04.372775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:29.738 [2024-07-13 11:29:04.372803] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.738 [2024-07-13 11:29:04.372821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.738 [2024-07-13 11:29:04.372828] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:29.738 [2024-07-13 11:29:04.372851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:29.738 11:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:29.996 [2024-07-13 11:29:04.590172] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.996 BaseBdev1 00:17:29.996 11:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:29.996 11:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:29.996 11:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:29.996 11:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:29.996 11:29:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:29.996 11:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:29.996 11:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.254 11:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.511 [ 00:17:30.511 { 00:17:30.511 "name": "BaseBdev1", 00:17:30.511 "aliases": [ 00:17:30.511 "339aa667-4d00-49ea-91a8-f8e8c668351c" 00:17:30.511 ], 00:17:30.511 "product_name": "Malloc disk", 00:17:30.511 "block_size": 512, 00:17:30.511 "num_blocks": 65536, 00:17:30.511 "uuid": "339aa667-4d00-49ea-91a8-f8e8c668351c", 00:17:30.511 "assigned_rate_limits": { 00:17:30.511 "rw_ios_per_sec": 0, 00:17:30.511 "rw_mbytes_per_sec": 0, 00:17:30.511 "r_mbytes_per_sec": 0, 00:17:30.511 "w_mbytes_per_sec": 0 00:17:30.511 }, 00:17:30.511 "claimed": true, 00:17:30.511 "claim_type": "exclusive_write", 00:17:30.511 "zoned": false, 00:17:30.511 "supported_io_types": { 00:17:30.511 "read": true, 00:17:30.511 "write": true, 00:17:30.511 "unmap": true, 00:17:30.511 "flush": true, 00:17:30.511 "reset": true, 00:17:30.511 "nvme_admin": false, 00:17:30.511 "nvme_io": false, 00:17:30.511 "nvme_io_md": false, 00:17:30.511 "write_zeroes": true, 00:17:30.511 "zcopy": true, 00:17:30.511 "get_zone_info": false, 00:17:30.511 "zone_management": false, 00:17:30.511 "zone_append": false, 00:17:30.511 "compare": false, 00:17:30.511 "compare_and_write": false, 00:17:30.511 "abort": true, 00:17:30.511 "seek_hole": false, 00:17:30.511 "seek_data": false, 00:17:30.511 "copy": true, 00:17:30.511 "nvme_iov_md": false 00:17:30.511 }, 00:17:30.511 "memory_domains": [ 00:17:30.511 { 00:17:30.511 "dma_device_id": "system", 00:17:30.511 "dma_device_type": 1 00:17:30.511 }, 00:17:30.511 { 00:17:30.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.511 "dma_device_type": 2 00:17:30.511 } 00:17:30.511 ], 00:17:30.511 "driver_specific": {} 00:17:30.511 } 00:17:30.511 ] 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.511 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.769 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.769 "name": "Existed_Raid", 00:17:30.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.769 "strip_size_kb": 64, 00:17:30.769 "state": "configuring", 00:17:30.769 "raid_level": "raid0", 00:17:30.769 "superblock": false, 00:17:30.769 "num_base_bdevs": 3, 00:17:30.769 "num_base_bdevs_discovered": 1, 00:17:30.769 "num_base_bdevs_operational": 3, 00:17:30.769 "base_bdevs_list": [ 00:17:30.769 { 00:17:30.769 "name": "BaseBdev1", 00:17:30.769 "uuid": "339aa667-4d00-49ea-91a8-f8e8c668351c", 00:17:30.769 "is_configured": true, 00:17:30.769 "data_offset": 0, 00:17:30.769 "data_size": 65536 00:17:30.769 }, 00:17:30.769 { 00:17:30.769 "name": "BaseBdev2", 00:17:30.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.769 "is_configured": false, 00:17:30.769 "data_offset": 0, 00:17:30.769 "data_size": 0 00:17:30.769 }, 00:17:30.769 { 00:17:30.769 "name": "BaseBdev3", 00:17:30.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.769 "is_configured": false, 00:17:30.769 "data_offset": 0, 00:17:30.769 "data_size": 0 00:17:30.769 } 00:17:30.769 ] 00:17:30.769 }' 00:17:30.769 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.769 11:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.335 11:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:31.592 [2024-07-13 11:29:06.114527] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.592 [2024-07-13 11:29:06.114591] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:17:31.592 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:31.849 [2024-07-13 11:29:06.358617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.849 [2024-07-13 11:29:06.360519] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.849 [2024-07-13 11:29:06.360593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.849 [2024-07-13 11:29:06.360621] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:31.849 [2024-07-13 11:29:06.360656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:31.849 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:31.849 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:31.849 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:31.849 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:31.849 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:17:31.849 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:31.849 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:31.849 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:31.849 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:31.850 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:31.850 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:31.850 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:31.850 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.850 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.107 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.107 "name": "Existed_Raid", 00:17:32.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.107 "strip_size_kb": 64, 00:17:32.107 "state": "configuring", 00:17:32.107 "raid_level": "raid0", 00:17:32.107 "superblock": false, 00:17:32.107 "num_base_bdevs": 3, 00:17:32.107 "num_base_bdevs_discovered": 1, 00:17:32.107 "num_base_bdevs_operational": 3, 00:17:32.107 "base_bdevs_list": [ 00:17:32.107 { 00:17:32.107 "name": "BaseBdev1", 00:17:32.107 "uuid": "339aa667-4d00-49ea-91a8-f8e8c668351c", 00:17:32.107 "is_configured": true, 00:17:32.107 "data_offset": 0, 00:17:32.107 "data_size": 65536 00:17:32.107 }, 00:17:32.107 { 00:17:32.107 "name": "BaseBdev2", 00:17:32.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.107 "is_configured": false, 00:17:32.107 "data_offset": 0, 00:17:32.107 "data_size": 0 00:17:32.107 }, 00:17:32.107 { 00:17:32.107 "name": "BaseBdev3", 00:17:32.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.107 "is_configured": false, 00:17:32.107 "data_offset": 0, 00:17:32.107 "data_size": 0 00:17:32.107 } 00:17:32.107 ] 00:17:32.107 }' 00:17:32.107 11:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.107 11:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.671 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:32.930 [2024-07-13 11:29:07.643603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.930 BaseBdev2 00:17:32.930 11:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:32.930 11:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:32.930 11:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:32.930 11:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:32.930 11:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:32.930 11:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:32.930 
11:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.201 11:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:33.489 [ 00:17:33.489 { 00:17:33.489 "name": "BaseBdev2", 00:17:33.489 "aliases": [ 00:17:33.489 "8d7f9064-f7bc-4436-b7a9-4e6d2a150aa5" 00:17:33.489 ], 00:17:33.489 "product_name": "Malloc disk", 00:17:33.489 "block_size": 512, 00:17:33.489 "num_blocks": 65536, 00:17:33.489 "uuid": "8d7f9064-f7bc-4436-b7a9-4e6d2a150aa5", 00:17:33.489 "assigned_rate_limits": { 00:17:33.489 "rw_ios_per_sec": 0, 00:17:33.489 "rw_mbytes_per_sec": 0, 00:17:33.489 "r_mbytes_per_sec": 0, 00:17:33.489 "w_mbytes_per_sec": 0 00:17:33.489 }, 00:17:33.489 "claimed": true, 00:17:33.489 "claim_type": "exclusive_write", 00:17:33.489 "zoned": false, 00:17:33.489 "supported_io_types": { 00:17:33.489 "read": true, 00:17:33.489 "write": true, 00:17:33.489 "unmap": true, 00:17:33.489 "flush": true, 00:17:33.489 "reset": true, 00:17:33.489 "nvme_admin": false, 00:17:33.489 "nvme_io": false, 00:17:33.489 "nvme_io_md": false, 00:17:33.489 "write_zeroes": true, 00:17:33.489 "zcopy": true, 00:17:33.489 "get_zone_info": false, 00:17:33.489 "zone_management": false, 00:17:33.489 "zone_append": false, 00:17:33.489 "compare": false, 00:17:33.489 "compare_and_write": false, 00:17:33.489 "abort": true, 00:17:33.489 "seek_hole": false, 00:17:33.489 "seek_data": false, 00:17:33.489 "copy": true, 00:17:33.489 "nvme_iov_md": false 00:17:33.489 }, 00:17:33.489 "memory_domains": [ 00:17:33.489 { 00:17:33.489 "dma_device_id": "system", 00:17:33.489 "dma_device_type": 1 00:17:33.489 }, 00:17:33.489 { 00:17:33.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:33.489 "dma_device_type": 2 00:17:33.489 } 00:17:33.489 ], 00:17:33.489 "driver_specific": {} 00:17:33.489 } 00:17:33.489 ] 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.489 11:29:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.489 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.759 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:33.759 "name": "Existed_Raid", 00:17:33.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.759 "strip_size_kb": 64, 00:17:33.759 "state": "configuring", 00:17:33.759 "raid_level": "raid0", 00:17:33.759 "superblock": false, 00:17:33.759 "num_base_bdevs": 3, 00:17:33.759 "num_base_bdevs_discovered": 2, 00:17:33.759 "num_base_bdevs_operational": 3, 00:17:33.759 "base_bdevs_list": [ 00:17:33.759 { 00:17:33.759 "name": "BaseBdev1", 00:17:33.759 "uuid": "339aa667-4d00-49ea-91a8-f8e8c668351c", 00:17:33.759 "is_configured": true, 00:17:33.759 "data_offset": 0, 00:17:33.759 "data_size": 65536 00:17:33.759 }, 00:17:33.759 { 00:17:33.759 "name": "BaseBdev2", 00:17:33.759 "uuid": "8d7f9064-f7bc-4436-b7a9-4e6d2a150aa5", 00:17:33.759 "is_configured": true, 00:17:33.759 "data_offset": 0, 00:17:33.759 "data_size": 65536 00:17:33.759 }, 00:17:33.759 { 00:17:33.759 "name": "BaseBdev3", 00:17:33.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.759 "is_configured": false, 00:17:33.759 "data_offset": 0, 00:17:33.759 "data_size": 0 00:17:33.759 } 00:17:33.759 ] 00:17:33.759 }' 00:17:33.759 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:33.759 11:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.336 11:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:34.621 [2024-07-13 11:29:09.195747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:34.621 [2024-07-13 11:29:09.195788] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:17:34.621 [2024-07-13 11:29:09.195797] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:34.621 [2024-07-13 11:29:09.195918] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:34.621 [2024-07-13 11:29:09.196254] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:17:34.621 [2024-07-13 11:29:09.196277] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:17:34.621 [2024-07-13 11:29:09.196482] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.621 BaseBdev3 00:17:34.621 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:34.621 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:34.621 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:34.621 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:34.621 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:34.621 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:34.621 11:29:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.879 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:34.879 [ 00:17:34.879 { 00:17:34.879 "name": "BaseBdev3", 00:17:34.879 "aliases": [ 00:17:34.879 "61747b10-633a-4cb8-93b3-bb60635f8931" 00:17:34.879 ], 00:17:34.879 "product_name": "Malloc disk", 00:17:34.879 "block_size": 512, 00:17:34.879 "num_blocks": 65536, 00:17:34.879 "uuid": "61747b10-633a-4cb8-93b3-bb60635f8931", 00:17:34.879 "assigned_rate_limits": { 00:17:34.879 "rw_ios_per_sec": 0, 00:17:34.879 "rw_mbytes_per_sec": 0, 00:17:34.879 "r_mbytes_per_sec": 0, 00:17:34.879 "w_mbytes_per_sec": 0 00:17:34.879 }, 00:17:34.879 "claimed": true, 00:17:34.879 "claim_type": "exclusive_write", 00:17:34.879 "zoned": false, 00:17:34.879 "supported_io_types": { 00:17:34.879 "read": true, 00:17:34.879 "write": true, 00:17:34.879 "unmap": true, 00:17:34.879 "flush": true, 00:17:34.879 "reset": true, 00:17:34.879 "nvme_admin": false, 00:17:34.879 "nvme_io": false, 00:17:34.879 "nvme_io_md": false, 00:17:34.879 "write_zeroes": true, 00:17:34.879 "zcopy": true, 00:17:34.879 "get_zone_info": false, 00:17:34.879 "zone_management": false, 00:17:34.879 "zone_append": false, 00:17:34.880 "compare": false, 00:17:34.880 "compare_and_write": false, 00:17:34.880 "abort": true, 00:17:34.880 "seek_hole": false, 00:17:34.880 "seek_data": false, 00:17:34.880 "copy": true, 00:17:34.880 "nvme_iov_md": false 00:17:34.880 }, 00:17:34.880 "memory_domains": [ 00:17:34.880 { 00:17:34.880 "dma_device_id": "system", 00:17:34.880 "dma_device_type": 1 00:17:34.880 }, 00:17:34.880 { 00:17:34.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.880 "dma_device_type": 2 00:17:34.880 } 00:17:34.880 ], 00:17:34.880 "driver_specific": {} 00:17:34.880 } 00:17:34.880 ] 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.880 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.138 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.138 "name": "Existed_Raid", 00:17:35.138 "uuid": "1632bb3d-7835-4202-9b34-f49f0690ad88", 00:17:35.138 "strip_size_kb": 64, 00:17:35.138 "state": "online", 00:17:35.138 "raid_level": "raid0", 00:17:35.138 "superblock": false, 00:17:35.138 "num_base_bdevs": 3, 00:17:35.138 "num_base_bdevs_discovered": 3, 00:17:35.138 "num_base_bdevs_operational": 3, 00:17:35.138 "base_bdevs_list": [ 00:17:35.138 { 00:17:35.138 "name": "BaseBdev1", 00:17:35.138 "uuid": "339aa667-4d00-49ea-91a8-f8e8c668351c", 00:17:35.138 "is_configured": true, 00:17:35.138 "data_offset": 0, 00:17:35.138 "data_size": 65536 00:17:35.138 }, 00:17:35.138 { 00:17:35.138 "name": "BaseBdev2", 00:17:35.138 "uuid": "8d7f9064-f7bc-4436-b7a9-4e6d2a150aa5", 00:17:35.138 "is_configured": true, 00:17:35.138 "data_offset": 0, 00:17:35.138 "data_size": 65536 00:17:35.138 }, 00:17:35.138 { 00:17:35.138 "name": "BaseBdev3", 00:17:35.138 "uuid": "61747b10-633a-4cb8-93b3-bb60635f8931", 00:17:35.138 "is_configured": true, 00:17:35.138 "data_offset": 0, 00:17:35.138 "data_size": 65536 00:17:35.138 } 00:17:35.138 ] 00:17:35.138 }' 00:17:35.138 11:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.138 11:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.705 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:35.705 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:35.705 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:35.705 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:35.705 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:35.705 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:35.705 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:35.705 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:35.965 [2024-07-13 11:29:10.600378] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.965 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:35.965 "name": "Existed_Raid", 00:17:35.965 "aliases": [ 00:17:35.965 "1632bb3d-7835-4202-9b34-f49f0690ad88" 00:17:35.965 ], 00:17:35.965 "product_name": "Raid Volume", 00:17:35.965 "block_size": 512, 00:17:35.965 "num_blocks": 196608, 00:17:35.965 "uuid": "1632bb3d-7835-4202-9b34-f49f0690ad88", 00:17:35.965 "assigned_rate_limits": { 00:17:35.965 "rw_ios_per_sec": 0, 00:17:35.965 "rw_mbytes_per_sec": 0, 00:17:35.965 "r_mbytes_per_sec": 0, 00:17:35.965 "w_mbytes_per_sec": 0 00:17:35.965 }, 00:17:35.965 "claimed": false, 00:17:35.965 "zoned": false, 00:17:35.965 "supported_io_types": { 00:17:35.965 "read": true, 00:17:35.965 "write": true, 00:17:35.965 "unmap": true, 00:17:35.965 "flush": true, 00:17:35.965 "reset": true, 
00:17:35.965 "nvme_admin": false, 00:17:35.965 "nvme_io": false, 00:17:35.965 "nvme_io_md": false, 00:17:35.965 "write_zeroes": true, 00:17:35.965 "zcopy": false, 00:17:35.965 "get_zone_info": false, 00:17:35.965 "zone_management": false, 00:17:35.965 "zone_append": false, 00:17:35.965 "compare": false, 00:17:35.965 "compare_and_write": false, 00:17:35.965 "abort": false, 00:17:35.965 "seek_hole": false, 00:17:35.965 "seek_data": false, 00:17:35.965 "copy": false, 00:17:35.965 "nvme_iov_md": false 00:17:35.965 }, 00:17:35.965 "memory_domains": [ 00:17:35.965 { 00:17:35.965 "dma_device_id": "system", 00:17:35.965 "dma_device_type": 1 00:17:35.965 }, 00:17:35.965 { 00:17:35.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.965 "dma_device_type": 2 00:17:35.965 }, 00:17:35.965 { 00:17:35.965 "dma_device_id": "system", 00:17:35.965 "dma_device_type": 1 00:17:35.965 }, 00:17:35.965 { 00:17:35.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.965 "dma_device_type": 2 00:17:35.965 }, 00:17:35.965 { 00:17:35.965 "dma_device_id": "system", 00:17:35.965 "dma_device_type": 1 00:17:35.965 }, 00:17:35.965 { 00:17:35.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.965 "dma_device_type": 2 00:17:35.965 } 00:17:35.965 ], 00:17:35.965 "driver_specific": { 00:17:35.965 "raid": { 00:17:35.965 "uuid": "1632bb3d-7835-4202-9b34-f49f0690ad88", 00:17:35.965 "strip_size_kb": 64, 00:17:35.965 "state": "online", 00:17:35.965 "raid_level": "raid0", 00:17:35.965 "superblock": false, 00:17:35.965 "num_base_bdevs": 3, 00:17:35.965 "num_base_bdevs_discovered": 3, 00:17:35.965 "num_base_bdevs_operational": 3, 00:17:35.965 "base_bdevs_list": [ 00:17:35.965 { 00:17:35.965 "name": "BaseBdev1", 00:17:35.965 "uuid": "339aa667-4d00-49ea-91a8-f8e8c668351c", 00:17:35.965 "is_configured": true, 00:17:35.965 "data_offset": 0, 00:17:35.965 "data_size": 65536 00:17:35.965 }, 00:17:35.965 { 00:17:35.965 "name": "BaseBdev2", 00:17:35.965 "uuid": "8d7f9064-f7bc-4436-b7a9-4e6d2a150aa5", 00:17:35.965 "is_configured": true, 00:17:35.965 "data_offset": 0, 00:17:35.965 "data_size": 65536 00:17:35.965 }, 00:17:35.965 { 00:17:35.965 "name": "BaseBdev3", 00:17:35.965 "uuid": "61747b10-633a-4cb8-93b3-bb60635f8931", 00:17:35.965 "is_configured": true, 00:17:35.965 "data_offset": 0, 00:17:35.965 "data_size": 65536 00:17:35.965 } 00:17:35.965 ] 00:17:35.965 } 00:17:35.965 } 00:17:35.965 }' 00:17:35.965 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:35.965 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:35.965 BaseBdev2 00:17:35.965 BaseBdev3' 00:17:35.965 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:35.965 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:35.965 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:36.224 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:36.224 "name": "BaseBdev1", 00:17:36.224 "aliases": [ 00:17:36.224 "339aa667-4d00-49ea-91a8-f8e8c668351c" 00:17:36.224 ], 00:17:36.224 "product_name": "Malloc disk", 00:17:36.224 "block_size": 512, 00:17:36.224 "num_blocks": 65536, 00:17:36.224 "uuid": "339aa667-4d00-49ea-91a8-f8e8c668351c", 00:17:36.224 
"assigned_rate_limits": { 00:17:36.225 "rw_ios_per_sec": 0, 00:17:36.225 "rw_mbytes_per_sec": 0, 00:17:36.225 "r_mbytes_per_sec": 0, 00:17:36.225 "w_mbytes_per_sec": 0 00:17:36.225 }, 00:17:36.225 "claimed": true, 00:17:36.225 "claim_type": "exclusive_write", 00:17:36.225 "zoned": false, 00:17:36.225 "supported_io_types": { 00:17:36.225 "read": true, 00:17:36.225 "write": true, 00:17:36.225 "unmap": true, 00:17:36.225 "flush": true, 00:17:36.225 "reset": true, 00:17:36.225 "nvme_admin": false, 00:17:36.225 "nvme_io": false, 00:17:36.225 "nvme_io_md": false, 00:17:36.225 "write_zeroes": true, 00:17:36.225 "zcopy": true, 00:17:36.225 "get_zone_info": false, 00:17:36.225 "zone_management": false, 00:17:36.225 "zone_append": false, 00:17:36.225 "compare": false, 00:17:36.225 "compare_and_write": false, 00:17:36.225 "abort": true, 00:17:36.225 "seek_hole": false, 00:17:36.225 "seek_data": false, 00:17:36.225 "copy": true, 00:17:36.225 "nvme_iov_md": false 00:17:36.225 }, 00:17:36.225 "memory_domains": [ 00:17:36.225 { 00:17:36.225 "dma_device_id": "system", 00:17:36.225 "dma_device_type": 1 00:17:36.225 }, 00:17:36.225 { 00:17:36.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.225 "dma_device_type": 2 00:17:36.225 } 00:17:36.225 ], 00:17:36.225 "driver_specific": {} 00:17:36.225 }' 00:17:36.225 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.225 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.225 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:36.225 11:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.483 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.483 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:36.483 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.483 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.483 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:36.483 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.742 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.742 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:36.743 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:36.743 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:36.743 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:36.743 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:36.743 "name": "BaseBdev2", 00:17:36.743 "aliases": [ 00:17:36.743 "8d7f9064-f7bc-4436-b7a9-4e6d2a150aa5" 00:17:36.743 ], 00:17:36.743 "product_name": "Malloc disk", 00:17:36.743 "block_size": 512, 00:17:36.743 "num_blocks": 65536, 00:17:36.743 "uuid": "8d7f9064-f7bc-4436-b7a9-4e6d2a150aa5", 00:17:36.743 "assigned_rate_limits": { 00:17:36.743 "rw_ios_per_sec": 0, 00:17:36.743 "rw_mbytes_per_sec": 0, 00:17:36.743 "r_mbytes_per_sec": 0, 00:17:36.743 "w_mbytes_per_sec": 0 00:17:36.743 }, 00:17:36.743 
"claimed": true, 00:17:36.743 "claim_type": "exclusive_write", 00:17:36.743 "zoned": false, 00:17:36.743 "supported_io_types": { 00:17:36.743 "read": true, 00:17:36.743 "write": true, 00:17:36.743 "unmap": true, 00:17:36.743 "flush": true, 00:17:36.743 "reset": true, 00:17:36.743 "nvme_admin": false, 00:17:36.743 "nvme_io": false, 00:17:36.743 "nvme_io_md": false, 00:17:36.743 "write_zeroes": true, 00:17:36.743 "zcopy": true, 00:17:36.743 "get_zone_info": false, 00:17:36.743 "zone_management": false, 00:17:36.743 "zone_append": false, 00:17:36.743 "compare": false, 00:17:36.743 "compare_and_write": false, 00:17:36.743 "abort": true, 00:17:36.743 "seek_hole": false, 00:17:36.743 "seek_data": false, 00:17:36.743 "copy": true, 00:17:36.743 "nvme_iov_md": false 00:17:36.743 }, 00:17:36.743 "memory_domains": [ 00:17:36.743 { 00:17:36.743 "dma_device_id": "system", 00:17:36.743 "dma_device_type": 1 00:17:36.743 }, 00:17:36.743 { 00:17:36.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.743 "dma_device_type": 2 00:17:36.743 } 00:17:36.743 ], 00:17:36.743 "driver_specific": {} 00:17:36.743 }' 00:17:36.743 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.001 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.001 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:37.001 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.002 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.002 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:37.002 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.261 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.261 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:37.261 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.261 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.261 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:37.261 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:37.261 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:37.261 11:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.520 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.520 "name": "BaseBdev3", 00:17:37.520 "aliases": [ 00:17:37.520 "61747b10-633a-4cb8-93b3-bb60635f8931" 00:17:37.520 ], 00:17:37.520 "product_name": "Malloc disk", 00:17:37.520 "block_size": 512, 00:17:37.520 "num_blocks": 65536, 00:17:37.520 "uuid": "61747b10-633a-4cb8-93b3-bb60635f8931", 00:17:37.520 "assigned_rate_limits": { 00:17:37.520 "rw_ios_per_sec": 0, 00:17:37.520 "rw_mbytes_per_sec": 0, 00:17:37.520 "r_mbytes_per_sec": 0, 00:17:37.520 "w_mbytes_per_sec": 0 00:17:37.520 }, 00:17:37.520 "claimed": true, 00:17:37.520 "claim_type": "exclusive_write", 00:17:37.520 "zoned": false, 00:17:37.520 "supported_io_types": { 00:17:37.520 "read": true, 00:17:37.520 "write": true, 00:17:37.520 
"unmap": true, 00:17:37.520 "flush": true, 00:17:37.520 "reset": true, 00:17:37.520 "nvme_admin": false, 00:17:37.520 "nvme_io": false, 00:17:37.520 "nvme_io_md": false, 00:17:37.520 "write_zeroes": true, 00:17:37.520 "zcopy": true, 00:17:37.520 "get_zone_info": false, 00:17:37.520 "zone_management": false, 00:17:37.520 "zone_append": false, 00:17:37.520 "compare": false, 00:17:37.520 "compare_and_write": false, 00:17:37.520 "abort": true, 00:17:37.520 "seek_hole": false, 00:17:37.520 "seek_data": false, 00:17:37.520 "copy": true, 00:17:37.520 "nvme_iov_md": false 00:17:37.520 }, 00:17:37.520 "memory_domains": [ 00:17:37.520 { 00:17:37.520 "dma_device_id": "system", 00:17:37.520 "dma_device_type": 1 00:17:37.520 }, 00:17:37.520 { 00:17:37.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.520 "dma_device_type": 2 00:17:37.520 } 00:17:37.520 ], 00:17:37.520 "driver_specific": {} 00:17:37.520 }' 00:17:37.520 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.520 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.520 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:37.520 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.779 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.779 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:37.779 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.779 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.779 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:37.779 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.038 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.038 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:38.038 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:38.295 [2024-07-13 11:29:12.817275] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:38.295 [2024-07-13 11:29:12.817308] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.295 [2024-07-13 11:29:12.817380] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.295 11:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.553 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.553 "name": "Existed_Raid", 00:17:38.553 "uuid": "1632bb3d-7835-4202-9b34-f49f0690ad88", 00:17:38.553 "strip_size_kb": 64, 00:17:38.553 "state": "offline", 00:17:38.553 "raid_level": "raid0", 00:17:38.553 "superblock": false, 00:17:38.553 "num_base_bdevs": 3, 00:17:38.553 "num_base_bdevs_discovered": 2, 00:17:38.553 "num_base_bdevs_operational": 2, 00:17:38.553 "base_bdevs_list": [ 00:17:38.553 { 00:17:38.553 "name": null, 00:17:38.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.553 "is_configured": false, 00:17:38.553 "data_offset": 0, 00:17:38.553 "data_size": 65536 00:17:38.553 }, 00:17:38.553 { 00:17:38.553 "name": "BaseBdev2", 00:17:38.553 "uuid": "8d7f9064-f7bc-4436-b7a9-4e6d2a150aa5", 00:17:38.553 "is_configured": true, 00:17:38.553 "data_offset": 0, 00:17:38.553 "data_size": 65536 00:17:38.553 }, 00:17:38.553 { 00:17:38.553 "name": "BaseBdev3", 00:17:38.553 "uuid": "61747b10-633a-4cb8-93b3-bb60635f8931", 00:17:38.553 "is_configured": true, 00:17:38.553 "data_offset": 0, 00:17:38.553 "data_size": 65536 00:17:38.553 } 00:17:38.553 ] 00:17:38.553 }' 00:17:38.553 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.553 11:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.118 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:39.118 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:39.118 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.118 11:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:39.376 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:39.376 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.376 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:39.633 [2024-07-13 11:29:14.281272] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:17:39.633 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:39.633 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:39.633 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:39.633 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:39.891 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:39.891 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:39.891 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:40.149 [2024-07-13 11:29:14.852880] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:40.149 [2024-07-13 11:29:14.852945] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:17:40.407 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:40.407 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:40.407 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.407 11:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:40.666 BaseBdev2 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:40.666 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.925 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:41.183 [ 00:17:41.183 { 00:17:41.183 "name": "BaseBdev2", 
00:17:41.183 "aliases": [ 00:17:41.183 "b6a4061f-4bf5-4929-a14a-eafa39b9a848" 00:17:41.183 ], 00:17:41.183 "product_name": "Malloc disk", 00:17:41.183 "block_size": 512, 00:17:41.183 "num_blocks": 65536, 00:17:41.183 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:41.183 "assigned_rate_limits": { 00:17:41.183 "rw_ios_per_sec": 0, 00:17:41.183 "rw_mbytes_per_sec": 0, 00:17:41.183 "r_mbytes_per_sec": 0, 00:17:41.183 "w_mbytes_per_sec": 0 00:17:41.183 }, 00:17:41.183 "claimed": false, 00:17:41.183 "zoned": false, 00:17:41.183 "supported_io_types": { 00:17:41.183 "read": true, 00:17:41.183 "write": true, 00:17:41.183 "unmap": true, 00:17:41.183 "flush": true, 00:17:41.183 "reset": true, 00:17:41.183 "nvme_admin": false, 00:17:41.183 "nvme_io": false, 00:17:41.183 "nvme_io_md": false, 00:17:41.183 "write_zeroes": true, 00:17:41.183 "zcopy": true, 00:17:41.183 "get_zone_info": false, 00:17:41.183 "zone_management": false, 00:17:41.183 "zone_append": false, 00:17:41.183 "compare": false, 00:17:41.183 "compare_and_write": false, 00:17:41.183 "abort": true, 00:17:41.183 "seek_hole": false, 00:17:41.183 "seek_data": false, 00:17:41.183 "copy": true, 00:17:41.183 "nvme_iov_md": false 00:17:41.183 }, 00:17:41.183 "memory_domains": [ 00:17:41.183 { 00:17:41.183 "dma_device_id": "system", 00:17:41.183 "dma_device_type": 1 00:17:41.183 }, 00:17:41.183 { 00:17:41.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.183 "dma_device_type": 2 00:17:41.183 } 00:17:41.183 ], 00:17:41.183 "driver_specific": {} 00:17:41.183 } 00:17:41.183 ] 00:17:41.183 11:29:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:41.183 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:41.183 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:41.183 11:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:41.442 BaseBdev3 00:17:41.442 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:41.442 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:41.442 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:41.442 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:41.442 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:41.442 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:41.442 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:41.700 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:41.700 [ 00:17:41.700 { 00:17:41.700 "name": "BaseBdev3", 00:17:41.700 "aliases": [ 00:17:41.700 "432a3e71-4b1e-4213-b961-e345c11089d8" 00:17:41.700 ], 00:17:41.700 "product_name": "Malloc disk", 00:17:41.700 "block_size": 512, 00:17:41.700 "num_blocks": 65536, 00:17:41.700 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:41.700 "assigned_rate_limits": { 00:17:41.700 "rw_ios_per_sec": 0, 
00:17:41.700 "rw_mbytes_per_sec": 0, 00:17:41.700 "r_mbytes_per_sec": 0, 00:17:41.700 "w_mbytes_per_sec": 0 00:17:41.700 }, 00:17:41.700 "claimed": false, 00:17:41.700 "zoned": false, 00:17:41.700 "supported_io_types": { 00:17:41.700 "read": true, 00:17:41.700 "write": true, 00:17:41.700 "unmap": true, 00:17:41.700 "flush": true, 00:17:41.700 "reset": true, 00:17:41.700 "nvme_admin": false, 00:17:41.700 "nvme_io": false, 00:17:41.700 "nvme_io_md": false, 00:17:41.700 "write_zeroes": true, 00:17:41.700 "zcopy": true, 00:17:41.700 "get_zone_info": false, 00:17:41.700 "zone_management": false, 00:17:41.700 "zone_append": false, 00:17:41.700 "compare": false, 00:17:41.700 "compare_and_write": false, 00:17:41.700 "abort": true, 00:17:41.700 "seek_hole": false, 00:17:41.700 "seek_data": false, 00:17:41.700 "copy": true, 00:17:41.700 "nvme_iov_md": false 00:17:41.700 }, 00:17:41.700 "memory_domains": [ 00:17:41.700 { 00:17:41.700 "dma_device_id": "system", 00:17:41.700 "dma_device_type": 1 00:17:41.700 }, 00:17:41.700 { 00:17:41.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:41.700 "dma_device_type": 2 00:17:41.700 } 00:17:41.700 ], 00:17:41.700 "driver_specific": {} 00:17:41.700 } 00:17:41.700 ] 00:17:41.700 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:41.700 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:41.700 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:41.701 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:41.960 [2024-07-13 11:29:16.652325] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:41.960 [2024-07-13 11:29:16.652395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:41.960 [2024-07-13 11:29:16.652458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:41.960 [2024-07-13 11:29:16.654597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.960 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.218 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:42.218 "name": "Existed_Raid", 00:17:42.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.218 "strip_size_kb": 64, 00:17:42.218 "state": "configuring", 00:17:42.218 "raid_level": "raid0", 00:17:42.218 "superblock": false, 00:17:42.218 "num_base_bdevs": 3, 00:17:42.218 "num_base_bdevs_discovered": 2, 00:17:42.218 "num_base_bdevs_operational": 3, 00:17:42.218 "base_bdevs_list": [ 00:17:42.218 { 00:17:42.218 "name": "BaseBdev1", 00:17:42.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.218 "is_configured": false, 00:17:42.218 "data_offset": 0, 00:17:42.218 "data_size": 0 00:17:42.218 }, 00:17:42.218 { 00:17:42.218 "name": "BaseBdev2", 00:17:42.218 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:42.218 "is_configured": true, 00:17:42.218 "data_offset": 0, 00:17:42.218 "data_size": 65536 00:17:42.218 }, 00:17:42.218 { 00:17:42.218 "name": "BaseBdev3", 00:17:42.218 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:42.218 "is_configured": true, 00:17:42.218 "data_offset": 0, 00:17:42.218 "data_size": 65536 00:17:42.218 } 00:17:42.218 ] 00:17:42.218 }' 00:17:42.218 11:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:42.218 11:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.785 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:43.044 [2024-07-13 11:29:17.691575] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.044 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.302 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.302 "name": "Existed_Raid", 
00:17:43.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.302 "strip_size_kb": 64, 00:17:43.302 "state": "configuring", 00:17:43.302 "raid_level": "raid0", 00:17:43.302 "superblock": false, 00:17:43.302 "num_base_bdevs": 3, 00:17:43.302 "num_base_bdevs_discovered": 1, 00:17:43.302 "num_base_bdevs_operational": 3, 00:17:43.302 "base_bdevs_list": [ 00:17:43.302 { 00:17:43.302 "name": "BaseBdev1", 00:17:43.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.302 "is_configured": false, 00:17:43.302 "data_offset": 0, 00:17:43.302 "data_size": 0 00:17:43.302 }, 00:17:43.302 { 00:17:43.302 "name": null, 00:17:43.302 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:43.302 "is_configured": false, 00:17:43.302 "data_offset": 0, 00:17:43.302 "data_size": 65536 00:17:43.302 }, 00:17:43.303 { 00:17:43.303 "name": "BaseBdev3", 00:17:43.303 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:43.303 "is_configured": true, 00:17:43.303 "data_offset": 0, 00:17:43.303 "data_size": 65536 00:17:43.303 } 00:17:43.303 ] 00:17:43.303 }' 00:17:43.303 11:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.303 11:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.870 11:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.870 11:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:44.129 11:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:44.129 11:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:44.388 [2024-07-13 11:29:19.110486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.388 BaseBdev1 00:17:44.388 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:44.388 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:44.388 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:44.388 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:44.388 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:44.388 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:44.388 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.645 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:44.903 [ 00:17:44.903 { 00:17:44.903 "name": "BaseBdev1", 00:17:44.903 "aliases": [ 00:17:44.903 "28163a0a-e235-4642-aad5-565795d00ea5" 00:17:44.903 ], 00:17:44.903 "product_name": "Malloc disk", 00:17:44.903 "block_size": 512, 00:17:44.903 "num_blocks": 65536, 00:17:44.903 "uuid": "28163a0a-e235-4642-aad5-565795d00ea5", 00:17:44.903 "assigned_rate_limits": { 00:17:44.903 "rw_ios_per_sec": 0, 00:17:44.903 "rw_mbytes_per_sec": 0, 00:17:44.903 
"r_mbytes_per_sec": 0, 00:17:44.903 "w_mbytes_per_sec": 0 00:17:44.903 }, 00:17:44.903 "claimed": true, 00:17:44.903 "claim_type": "exclusive_write", 00:17:44.903 "zoned": false, 00:17:44.903 "supported_io_types": { 00:17:44.903 "read": true, 00:17:44.903 "write": true, 00:17:44.903 "unmap": true, 00:17:44.903 "flush": true, 00:17:44.903 "reset": true, 00:17:44.903 "nvme_admin": false, 00:17:44.903 "nvme_io": false, 00:17:44.903 "nvme_io_md": false, 00:17:44.903 "write_zeroes": true, 00:17:44.903 "zcopy": true, 00:17:44.903 "get_zone_info": false, 00:17:44.903 "zone_management": false, 00:17:44.903 "zone_append": false, 00:17:44.903 "compare": false, 00:17:44.903 "compare_and_write": false, 00:17:44.903 "abort": true, 00:17:44.903 "seek_hole": false, 00:17:44.903 "seek_data": false, 00:17:44.903 "copy": true, 00:17:44.903 "nvme_iov_md": false 00:17:44.903 }, 00:17:44.903 "memory_domains": [ 00:17:44.903 { 00:17:44.903 "dma_device_id": "system", 00:17:44.903 "dma_device_type": 1 00:17:44.903 }, 00:17:44.903 { 00:17:44.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.903 "dma_device_type": 2 00:17:44.903 } 00:17:44.903 ], 00:17:44.903 "driver_specific": {} 00:17:44.903 } 00:17:44.903 ] 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.903 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.161 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.161 "name": "Existed_Raid", 00:17:45.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.161 "strip_size_kb": 64, 00:17:45.161 "state": "configuring", 00:17:45.161 "raid_level": "raid0", 00:17:45.161 "superblock": false, 00:17:45.161 "num_base_bdevs": 3, 00:17:45.161 "num_base_bdevs_discovered": 2, 00:17:45.161 "num_base_bdevs_operational": 3, 00:17:45.161 "base_bdevs_list": [ 00:17:45.161 { 00:17:45.161 "name": "BaseBdev1", 00:17:45.161 "uuid": "28163a0a-e235-4642-aad5-565795d00ea5", 00:17:45.161 "is_configured": true, 00:17:45.161 "data_offset": 0, 00:17:45.161 "data_size": 65536 00:17:45.161 }, 00:17:45.161 { 00:17:45.161 "name": 
null, 00:17:45.161 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:45.161 "is_configured": false, 00:17:45.161 "data_offset": 0, 00:17:45.161 "data_size": 65536 00:17:45.161 }, 00:17:45.161 { 00:17:45.161 "name": "BaseBdev3", 00:17:45.161 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:45.161 "is_configured": true, 00:17:45.161 "data_offset": 0, 00:17:45.161 "data_size": 65536 00:17:45.161 } 00:17:45.161 ] 00:17:45.161 }' 00:17:45.161 11:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.161 11:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.727 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.727 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:45.986 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:45.986 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:46.245 [2024-07-13 11:29:20.826877] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.245 11:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.504 11:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.504 "name": "Existed_Raid", 00:17:46.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.504 "strip_size_kb": 64, 00:17:46.504 "state": "configuring", 00:17:46.504 "raid_level": "raid0", 00:17:46.504 "superblock": false, 00:17:46.504 "num_base_bdevs": 3, 00:17:46.504 "num_base_bdevs_discovered": 1, 00:17:46.504 "num_base_bdevs_operational": 3, 00:17:46.504 "base_bdevs_list": [ 00:17:46.504 { 00:17:46.504 "name": "BaseBdev1", 00:17:46.504 "uuid": "28163a0a-e235-4642-aad5-565795d00ea5", 00:17:46.504 "is_configured": true, 00:17:46.504 "data_offset": 0, 00:17:46.504 "data_size": 65536 
00:17:46.504 }, 00:17:46.504 { 00:17:46.504 "name": null, 00:17:46.504 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:46.504 "is_configured": false, 00:17:46.504 "data_offset": 0, 00:17:46.504 "data_size": 65536 00:17:46.504 }, 00:17:46.504 { 00:17:46.504 "name": null, 00:17:46.504 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:46.504 "is_configured": false, 00:17:46.504 "data_offset": 0, 00:17:46.504 "data_size": 65536 00:17:46.504 } 00:17:46.504 ] 00:17:46.504 }' 00:17:46.504 11:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.504 11:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.070 11:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.070 11:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:47.328 11:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:47.328 11:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:47.328 [2024-07-13 11:29:22.071217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.587 "name": "Existed_Raid", 00:17:47.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.587 "strip_size_kb": 64, 00:17:47.587 "state": "configuring", 00:17:47.587 "raid_level": "raid0", 00:17:47.587 "superblock": false, 00:17:47.587 "num_base_bdevs": 3, 00:17:47.587 "num_base_bdevs_discovered": 2, 00:17:47.587 "num_base_bdevs_operational": 3, 00:17:47.587 "base_bdevs_list": [ 00:17:47.587 { 00:17:47.587 "name": "BaseBdev1", 00:17:47.587 "uuid": "28163a0a-e235-4642-aad5-565795d00ea5", 00:17:47.587 
"is_configured": true, 00:17:47.587 "data_offset": 0, 00:17:47.587 "data_size": 65536 00:17:47.587 }, 00:17:47.587 { 00:17:47.587 "name": null, 00:17:47.587 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:47.587 "is_configured": false, 00:17:47.587 "data_offset": 0, 00:17:47.587 "data_size": 65536 00:17:47.587 }, 00:17:47.587 { 00:17:47.587 "name": "BaseBdev3", 00:17:47.587 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:47.587 "is_configured": true, 00:17:47.587 "data_offset": 0, 00:17:47.587 "data_size": 65536 00:17:47.587 } 00:17:47.587 ] 00:17:47.587 }' 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.587 11:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:48.522 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.522 11:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:48.522 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:48.522 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:48.779 [2024-07-13 11:29:23.375672] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.779 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.036 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.036 "name": "Existed_Raid", 00:17:49.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.036 "strip_size_kb": 64, 00:17:49.036 "state": "configuring", 00:17:49.036 "raid_level": "raid0", 00:17:49.036 "superblock": false, 00:17:49.036 "num_base_bdevs": 3, 00:17:49.036 "num_base_bdevs_discovered": 1, 00:17:49.036 "num_base_bdevs_operational": 3, 00:17:49.036 "base_bdevs_list": [ 00:17:49.036 { 00:17:49.036 "name": null, 00:17:49.036 "uuid": 
"28163a0a-e235-4642-aad5-565795d00ea5", 00:17:49.036 "is_configured": false, 00:17:49.036 "data_offset": 0, 00:17:49.036 "data_size": 65536 00:17:49.036 }, 00:17:49.036 { 00:17:49.036 "name": null, 00:17:49.036 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:49.036 "is_configured": false, 00:17:49.036 "data_offset": 0, 00:17:49.036 "data_size": 65536 00:17:49.036 }, 00:17:49.036 { 00:17:49.036 "name": "BaseBdev3", 00:17:49.036 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:49.036 "is_configured": true, 00:17:49.036 "data_offset": 0, 00:17:49.036 "data_size": 65536 00:17:49.036 } 00:17:49.036 ] 00:17:49.036 }' 00:17:49.036 11:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.036 11:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.968 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.968 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:49.968 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:49.968 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:50.225 [2024-07-13 11:29:24.790260] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.225 11:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.483 11:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.483 "name": "Existed_Raid", 00:17:50.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.483 "strip_size_kb": 64, 00:17:50.483 "state": "configuring", 00:17:50.483 "raid_level": "raid0", 00:17:50.483 "superblock": false, 00:17:50.483 "num_base_bdevs": 3, 00:17:50.483 "num_base_bdevs_discovered": 2, 00:17:50.483 "num_base_bdevs_operational": 3, 00:17:50.483 
"base_bdevs_list": [ 00:17:50.483 { 00:17:50.483 "name": null, 00:17:50.483 "uuid": "28163a0a-e235-4642-aad5-565795d00ea5", 00:17:50.483 "is_configured": false, 00:17:50.483 "data_offset": 0, 00:17:50.483 "data_size": 65536 00:17:50.483 }, 00:17:50.483 { 00:17:50.483 "name": "BaseBdev2", 00:17:50.483 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:50.483 "is_configured": true, 00:17:50.483 "data_offset": 0, 00:17:50.483 "data_size": 65536 00:17:50.483 }, 00:17:50.483 { 00:17:50.483 "name": "BaseBdev3", 00:17:50.483 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:50.483 "is_configured": true, 00:17:50.483 "data_offset": 0, 00:17:50.483 "data_size": 65536 00:17:50.483 } 00:17:50.483 ] 00:17:50.483 }' 00:17:50.483 11:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.483 11:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.049 11:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.049 11:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:51.307 11:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:51.307 11:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.307 11:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:51.565 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 28163a0a-e235-4642-aad5-565795d00ea5 00:17:51.823 [2024-07-13 11:29:26.326455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:51.823 [2024-07-13 11:29:26.326499] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:17:51.823 [2024-07-13 11:29:26.326509] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:51.823 [2024-07-13 11:29:26.326617] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:51.823 [2024-07-13 11:29:26.327041] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:17:51.823 [2024-07-13 11:29:26.327069] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:17:51.823 [2024-07-13 11:29:26.327372] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.823 NewBaseBdev 00:17:51.823 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:51.823 11:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:51.823 11:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:51.823 11:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:17:51.823 11:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:51.823 11:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:51.823 11:29:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:51.823 11:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:52.081 [ 00:17:52.081 { 00:17:52.081 "name": "NewBaseBdev", 00:17:52.081 "aliases": [ 00:17:52.081 "28163a0a-e235-4642-aad5-565795d00ea5" 00:17:52.081 ], 00:17:52.081 "product_name": "Malloc disk", 00:17:52.081 "block_size": 512, 00:17:52.081 "num_blocks": 65536, 00:17:52.081 "uuid": "28163a0a-e235-4642-aad5-565795d00ea5", 00:17:52.081 "assigned_rate_limits": { 00:17:52.081 "rw_ios_per_sec": 0, 00:17:52.081 "rw_mbytes_per_sec": 0, 00:17:52.081 "r_mbytes_per_sec": 0, 00:17:52.081 "w_mbytes_per_sec": 0 00:17:52.081 }, 00:17:52.081 "claimed": true, 00:17:52.081 "claim_type": "exclusive_write", 00:17:52.081 "zoned": false, 00:17:52.081 "supported_io_types": { 00:17:52.081 "read": true, 00:17:52.081 "write": true, 00:17:52.081 "unmap": true, 00:17:52.081 "flush": true, 00:17:52.081 "reset": true, 00:17:52.081 "nvme_admin": false, 00:17:52.081 "nvme_io": false, 00:17:52.081 "nvme_io_md": false, 00:17:52.081 "write_zeroes": true, 00:17:52.081 "zcopy": true, 00:17:52.081 "get_zone_info": false, 00:17:52.081 "zone_management": false, 00:17:52.081 "zone_append": false, 00:17:52.081 "compare": false, 00:17:52.081 "compare_and_write": false, 00:17:52.081 "abort": true, 00:17:52.081 "seek_hole": false, 00:17:52.081 "seek_data": false, 00:17:52.081 "copy": true, 00:17:52.081 "nvme_iov_md": false 00:17:52.081 }, 00:17:52.081 "memory_domains": [ 00:17:52.081 { 00:17:52.081 "dma_device_id": "system", 00:17:52.081 "dma_device_type": 1 00:17:52.081 }, 00:17:52.081 { 00:17:52.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.081 "dma_device_type": 2 00:17:52.081 } 00:17:52.081 ], 00:17:52.081 "driver_specific": {} 00:17:52.081 } 00:17:52.081 ] 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.081 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:52.339 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.339 "name": "Existed_Raid", 00:17:52.339 "uuid": "6bf86e9a-1cfb-41bd-8486-e3aa404bc8db", 00:17:52.339 "strip_size_kb": 64, 00:17:52.339 "state": "online", 00:17:52.339 "raid_level": "raid0", 00:17:52.339 "superblock": false, 00:17:52.339 "num_base_bdevs": 3, 00:17:52.339 "num_base_bdevs_discovered": 3, 00:17:52.339 "num_base_bdevs_operational": 3, 00:17:52.339 "base_bdevs_list": [ 00:17:52.339 { 00:17:52.339 "name": "NewBaseBdev", 00:17:52.339 "uuid": "28163a0a-e235-4642-aad5-565795d00ea5", 00:17:52.339 "is_configured": true, 00:17:52.339 "data_offset": 0, 00:17:52.339 "data_size": 65536 00:17:52.339 }, 00:17:52.339 { 00:17:52.339 "name": "BaseBdev2", 00:17:52.339 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:52.339 "is_configured": true, 00:17:52.339 "data_offset": 0, 00:17:52.339 "data_size": 65536 00:17:52.339 }, 00:17:52.339 { 00:17:52.339 "name": "BaseBdev3", 00:17:52.339 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:52.339 "is_configured": true, 00:17:52.339 "data_offset": 0, 00:17:52.339 "data_size": 65536 00:17:52.339 } 00:17:52.339 ] 00:17:52.339 }' 00:17:52.339 11:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.339 11:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:52.904 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:52.904 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:52.904 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:52.904 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:52.904 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:52.904 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:52.904 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:52.904 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:53.162 [2024-07-13 11:29:27.827097] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.163 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:53.163 "name": "Existed_Raid", 00:17:53.163 "aliases": [ 00:17:53.163 "6bf86e9a-1cfb-41bd-8486-e3aa404bc8db" 00:17:53.163 ], 00:17:53.163 "product_name": "Raid Volume", 00:17:53.163 "block_size": 512, 00:17:53.163 "num_blocks": 196608, 00:17:53.163 "uuid": "6bf86e9a-1cfb-41bd-8486-e3aa404bc8db", 00:17:53.163 "assigned_rate_limits": { 00:17:53.163 "rw_ios_per_sec": 0, 00:17:53.163 "rw_mbytes_per_sec": 0, 00:17:53.163 "r_mbytes_per_sec": 0, 00:17:53.163 "w_mbytes_per_sec": 0 00:17:53.163 }, 00:17:53.163 "claimed": false, 00:17:53.163 "zoned": false, 00:17:53.163 "supported_io_types": { 00:17:53.163 "read": true, 00:17:53.163 "write": true, 00:17:53.163 "unmap": true, 00:17:53.163 "flush": true, 00:17:53.163 "reset": true, 00:17:53.163 "nvme_admin": false, 00:17:53.163 "nvme_io": false, 00:17:53.163 "nvme_io_md": false, 00:17:53.163 "write_zeroes": true, 00:17:53.163 "zcopy": false, 00:17:53.163 "get_zone_info": false, 
00:17:53.163 "zone_management": false, 00:17:53.163 "zone_append": false, 00:17:53.163 "compare": false, 00:17:53.163 "compare_and_write": false, 00:17:53.163 "abort": false, 00:17:53.163 "seek_hole": false, 00:17:53.163 "seek_data": false, 00:17:53.163 "copy": false, 00:17:53.163 "nvme_iov_md": false 00:17:53.163 }, 00:17:53.163 "memory_domains": [ 00:17:53.163 { 00:17:53.163 "dma_device_id": "system", 00:17:53.163 "dma_device_type": 1 00:17:53.163 }, 00:17:53.163 { 00:17:53.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.163 "dma_device_type": 2 00:17:53.163 }, 00:17:53.163 { 00:17:53.163 "dma_device_id": "system", 00:17:53.163 "dma_device_type": 1 00:17:53.163 }, 00:17:53.163 { 00:17:53.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.163 "dma_device_type": 2 00:17:53.163 }, 00:17:53.163 { 00:17:53.163 "dma_device_id": "system", 00:17:53.163 "dma_device_type": 1 00:17:53.163 }, 00:17:53.163 { 00:17:53.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.163 "dma_device_type": 2 00:17:53.163 } 00:17:53.163 ], 00:17:53.163 "driver_specific": { 00:17:53.163 "raid": { 00:17:53.163 "uuid": "6bf86e9a-1cfb-41bd-8486-e3aa404bc8db", 00:17:53.163 "strip_size_kb": 64, 00:17:53.163 "state": "online", 00:17:53.163 "raid_level": "raid0", 00:17:53.163 "superblock": false, 00:17:53.163 "num_base_bdevs": 3, 00:17:53.163 "num_base_bdevs_discovered": 3, 00:17:53.163 "num_base_bdevs_operational": 3, 00:17:53.163 "base_bdevs_list": [ 00:17:53.163 { 00:17:53.163 "name": "NewBaseBdev", 00:17:53.163 "uuid": "28163a0a-e235-4642-aad5-565795d00ea5", 00:17:53.163 "is_configured": true, 00:17:53.163 "data_offset": 0, 00:17:53.163 "data_size": 65536 00:17:53.163 }, 00:17:53.163 { 00:17:53.163 "name": "BaseBdev2", 00:17:53.163 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:53.163 "is_configured": true, 00:17:53.163 "data_offset": 0, 00:17:53.163 "data_size": 65536 00:17:53.163 }, 00:17:53.163 { 00:17:53.163 "name": "BaseBdev3", 00:17:53.163 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:53.163 "is_configured": true, 00:17:53.163 "data_offset": 0, 00:17:53.163 "data_size": 65536 00:17:53.163 } 00:17:53.163 ] 00:17:53.163 } 00:17:53.163 } 00:17:53.163 }' 00:17:53.163 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:53.163 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:53.163 BaseBdev2 00:17:53.163 BaseBdev3' 00:17:53.163 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:53.163 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:53.163 11:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:53.421 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:53.421 "name": "NewBaseBdev", 00:17:53.421 "aliases": [ 00:17:53.421 "28163a0a-e235-4642-aad5-565795d00ea5" 00:17:53.421 ], 00:17:53.421 "product_name": "Malloc disk", 00:17:53.421 "block_size": 512, 00:17:53.421 "num_blocks": 65536, 00:17:53.421 "uuid": "28163a0a-e235-4642-aad5-565795d00ea5", 00:17:53.421 "assigned_rate_limits": { 00:17:53.421 "rw_ios_per_sec": 0, 00:17:53.421 "rw_mbytes_per_sec": 0, 00:17:53.421 "r_mbytes_per_sec": 0, 00:17:53.421 "w_mbytes_per_sec": 0 00:17:53.421 }, 00:17:53.421 "claimed": 
true, 00:17:53.421 "claim_type": "exclusive_write", 00:17:53.421 "zoned": false, 00:17:53.421 "supported_io_types": { 00:17:53.421 "read": true, 00:17:53.421 "write": true, 00:17:53.421 "unmap": true, 00:17:53.421 "flush": true, 00:17:53.421 "reset": true, 00:17:53.421 "nvme_admin": false, 00:17:53.421 "nvme_io": false, 00:17:53.421 "nvme_io_md": false, 00:17:53.421 "write_zeroes": true, 00:17:53.421 "zcopy": true, 00:17:53.421 "get_zone_info": false, 00:17:53.421 "zone_management": false, 00:17:53.421 "zone_append": false, 00:17:53.421 "compare": false, 00:17:53.421 "compare_and_write": false, 00:17:53.421 "abort": true, 00:17:53.421 "seek_hole": false, 00:17:53.421 "seek_data": false, 00:17:53.421 "copy": true, 00:17:53.421 "nvme_iov_md": false 00:17:53.421 }, 00:17:53.421 "memory_domains": [ 00:17:53.421 { 00:17:53.421 "dma_device_id": "system", 00:17:53.421 "dma_device_type": 1 00:17:53.421 }, 00:17:53.421 { 00:17:53.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.421 "dma_device_type": 2 00:17:53.421 } 00:17:53.421 ], 00:17:53.421 "driver_specific": {} 00:17:53.421 }' 00:17:53.421 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.421 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:53.679 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:53.679 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.679 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:53.679 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:53.679 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.679 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:53.937 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:53.937 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.937 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:53.937 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:53.937 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:53.937 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:53.937 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.194 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.194 "name": "BaseBdev2", 00:17:54.194 "aliases": [ 00:17:54.194 "b6a4061f-4bf5-4929-a14a-eafa39b9a848" 00:17:54.194 ], 00:17:54.194 "product_name": "Malloc disk", 00:17:54.194 "block_size": 512, 00:17:54.194 "num_blocks": 65536, 00:17:54.194 "uuid": "b6a4061f-4bf5-4929-a14a-eafa39b9a848", 00:17:54.194 "assigned_rate_limits": { 00:17:54.194 "rw_ios_per_sec": 0, 00:17:54.194 "rw_mbytes_per_sec": 0, 00:17:54.194 "r_mbytes_per_sec": 0, 00:17:54.194 "w_mbytes_per_sec": 0 00:17:54.194 }, 00:17:54.194 "claimed": true, 00:17:54.194 "claim_type": "exclusive_write", 00:17:54.194 "zoned": false, 00:17:54.194 "supported_io_types": { 00:17:54.194 "read": true, 00:17:54.194 "write": true, 00:17:54.194 "unmap": true, 
00:17:54.194 "flush": true, 00:17:54.194 "reset": true, 00:17:54.194 "nvme_admin": false, 00:17:54.194 "nvme_io": false, 00:17:54.194 "nvme_io_md": false, 00:17:54.194 "write_zeroes": true, 00:17:54.194 "zcopy": true, 00:17:54.194 "get_zone_info": false, 00:17:54.194 "zone_management": false, 00:17:54.195 "zone_append": false, 00:17:54.195 "compare": false, 00:17:54.195 "compare_and_write": false, 00:17:54.195 "abort": true, 00:17:54.195 "seek_hole": false, 00:17:54.195 "seek_data": false, 00:17:54.195 "copy": true, 00:17:54.195 "nvme_iov_md": false 00:17:54.195 }, 00:17:54.195 "memory_domains": [ 00:17:54.195 { 00:17:54.195 "dma_device_id": "system", 00:17:54.195 "dma_device_type": 1 00:17:54.195 }, 00:17:54.195 { 00:17:54.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.195 "dma_device_type": 2 00:17:54.195 } 00:17:54.195 ], 00:17:54.195 "driver_specific": {} 00:17:54.195 }' 00:17:54.195 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.195 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.195 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:54.195 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.452 11:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.453 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:54.453 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.453 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.453 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:54.453 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.453 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.710 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:54.710 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:54.710 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:54.710 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.968 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.968 "name": "BaseBdev3", 00:17:54.968 "aliases": [ 00:17:54.968 "432a3e71-4b1e-4213-b961-e345c11089d8" 00:17:54.968 ], 00:17:54.968 "product_name": "Malloc disk", 00:17:54.968 "block_size": 512, 00:17:54.968 "num_blocks": 65536, 00:17:54.968 "uuid": "432a3e71-4b1e-4213-b961-e345c11089d8", 00:17:54.968 "assigned_rate_limits": { 00:17:54.968 "rw_ios_per_sec": 0, 00:17:54.968 "rw_mbytes_per_sec": 0, 00:17:54.968 "r_mbytes_per_sec": 0, 00:17:54.968 "w_mbytes_per_sec": 0 00:17:54.968 }, 00:17:54.968 "claimed": true, 00:17:54.968 "claim_type": "exclusive_write", 00:17:54.968 "zoned": false, 00:17:54.968 "supported_io_types": { 00:17:54.968 "read": true, 00:17:54.968 "write": true, 00:17:54.968 "unmap": true, 00:17:54.968 "flush": true, 00:17:54.968 "reset": true, 00:17:54.968 "nvme_admin": false, 00:17:54.968 "nvme_io": false, 00:17:54.968 "nvme_io_md": false, 00:17:54.968 "write_zeroes": true, 
00:17:54.968 "zcopy": true, 00:17:54.968 "get_zone_info": false, 00:17:54.968 "zone_management": false, 00:17:54.968 "zone_append": false, 00:17:54.968 "compare": false, 00:17:54.968 "compare_and_write": false, 00:17:54.968 "abort": true, 00:17:54.968 "seek_hole": false, 00:17:54.968 "seek_data": false, 00:17:54.968 "copy": true, 00:17:54.968 "nvme_iov_md": false 00:17:54.968 }, 00:17:54.968 "memory_domains": [ 00:17:54.968 { 00:17:54.968 "dma_device_id": "system", 00:17:54.968 "dma_device_type": 1 00:17:54.968 }, 00:17:54.968 { 00:17:54.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.968 "dma_device_type": 2 00:17:54.968 } 00:17:54.968 ], 00:17:54.968 "driver_specific": {} 00:17:54.968 }' 00:17:54.968 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.968 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.968 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:54.968 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.968 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:55.226 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:55.226 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.226 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:55.226 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:55.226 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.226 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:55.226 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:55.226 11:29:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:55.484 [2024-07-13 11:29:30.191472] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:55.484 [2024-07-13 11:29:30.191502] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.484 [2024-07-13 11:29:30.191577] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.484 [2024-07-13 11:29:30.191643] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.484 [2024-07-13 11:29:30.191654] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 125213 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 125213 ']' 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 125213 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125213 00:17:55.484 killing process with pid 125213 00:17:55.484 11:29:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125213' 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 125213 00:17:55.484 11:29:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 125213 00:17:55.484 [2024-07-13 11:29:30.225174] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:55.742 [2024-07-13 11:29:30.425332] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:57.116 ************************************ 00:17:57.116 END TEST raid_state_function_test 00:17:57.116 ************************************ 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:57.116 00:17:57.116 real 0m29.600s 00:17:57.116 user 0m55.673s 00:17:57.116 sys 0m3.203s 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.116 11:29:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:57.116 11:29:31 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:57.116 11:29:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:57.116 11:29:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:57.116 11:29:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:57.116 ************************************ 00:17:57.116 START TEST raid_state_function_test_sb 00:17:57.116 ************************************ 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=126250 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 126250' 00:17:57.116 Process raid pid: 126250 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 126250 /var/tmp/spdk-raid.sock 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 126250 ']' 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:57.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:57.116 11:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.117 [2024-07-13 11:29:31.569517] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
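The trace above shows the superblock variant of the test starting its own bdev_svc app (raid_pid 126250) and driving it over the dedicated RPC socket /var/tmp/spdk-raid.sock. As a rough, hand-driven sketch of the same RPC flow (assuming a bdev_svc instance is already listening on that socket; this is not the test's exact ordering, which deliberately issues bdev_raid_create before the base bdevs exist in order to exercise the "configuring" state):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # create three 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each)
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"
    done

    # assemble them into a raid0 volume with a 64 KiB strip and an on-disk superblock (-s)
    $RPC bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # with all three base bdevs present the volume should report state "online"
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
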
00:17:57.117 [2024-07-13 11:29:31.569724] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.117 [2024-07-13 11:29:31.745691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.375 [2024-07-13 11:29:31.948003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.375 [2024-07-13 11:29:32.115003] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:57.941 [2024-07-13 11:29:32.676075] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:57.941 [2024-07-13 11:29:32.676155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:57.941 [2024-07-13 11:29:32.676171] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:57.941 [2024-07-13 11:29:32.676200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:57.941 [2024-07-13 11:29:32.676210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:57.941 [2024-07-13 11:29:32.676227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:57.941 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.942 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.942 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.942 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.200 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.200 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.200 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.200 "name": "Existed_Raid", 00:17:58.200 "uuid": 
"5a0fa1e7-3a1d-4ff7-a606-b7a298b44ad5", 00:17:58.200 "strip_size_kb": 64, 00:17:58.200 "state": "configuring", 00:17:58.200 "raid_level": "raid0", 00:17:58.200 "superblock": true, 00:17:58.200 "num_base_bdevs": 3, 00:17:58.200 "num_base_bdevs_discovered": 0, 00:17:58.200 "num_base_bdevs_operational": 3, 00:17:58.200 "base_bdevs_list": [ 00:17:58.200 { 00:17:58.200 "name": "BaseBdev1", 00:17:58.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.200 "is_configured": false, 00:17:58.200 "data_offset": 0, 00:17:58.200 "data_size": 0 00:17:58.200 }, 00:17:58.200 { 00:17:58.201 "name": "BaseBdev2", 00:17:58.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.201 "is_configured": false, 00:17:58.201 "data_offset": 0, 00:17:58.201 "data_size": 0 00:17:58.201 }, 00:17:58.201 { 00:17:58.201 "name": "BaseBdev3", 00:17:58.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.201 "is_configured": false, 00:17:58.201 "data_offset": 0, 00:17:58.201 "data_size": 0 00:17:58.201 } 00:17:58.201 ] 00:17:58.201 }' 00:17:58.201 11:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.201 11:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.133 11:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:59.133 [2024-07-13 11:29:33.856200] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:59.133 [2024-07-13 11:29:33.856243] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:59.133 11:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:59.393 [2024-07-13 11:29:34.080248] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:59.393 [2024-07-13 11:29:34.080306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:59.393 [2024-07-13 11:29:34.080320] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:59.393 [2024-07-13 11:29:34.080339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:59.393 [2024-07-13 11:29:34.080347] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:59.393 [2024-07-13 11:29:34.080371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:59.393 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:59.667 [2024-07-13 11:29:34.369465] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:59.667 BaseBdev1 00:17:59.667 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:59.667 11:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:59.667 11:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:59.667 11:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:17:59.667 11:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:59.667 11:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:59.667 11:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.963 11:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:00.222 [ 00:18:00.222 { 00:18:00.222 "name": "BaseBdev1", 00:18:00.222 "aliases": [ 00:18:00.222 "78f57c7c-5f2b-4411-ac61-4dada197ac2a" 00:18:00.222 ], 00:18:00.222 "product_name": "Malloc disk", 00:18:00.222 "block_size": 512, 00:18:00.222 "num_blocks": 65536, 00:18:00.222 "uuid": "78f57c7c-5f2b-4411-ac61-4dada197ac2a", 00:18:00.222 "assigned_rate_limits": { 00:18:00.222 "rw_ios_per_sec": 0, 00:18:00.222 "rw_mbytes_per_sec": 0, 00:18:00.222 "r_mbytes_per_sec": 0, 00:18:00.222 "w_mbytes_per_sec": 0 00:18:00.222 }, 00:18:00.222 "claimed": true, 00:18:00.222 "claim_type": "exclusive_write", 00:18:00.222 "zoned": false, 00:18:00.222 "supported_io_types": { 00:18:00.222 "read": true, 00:18:00.222 "write": true, 00:18:00.222 "unmap": true, 00:18:00.222 "flush": true, 00:18:00.222 "reset": true, 00:18:00.222 "nvme_admin": false, 00:18:00.222 "nvme_io": false, 00:18:00.222 "nvme_io_md": false, 00:18:00.222 "write_zeroes": true, 00:18:00.222 "zcopy": true, 00:18:00.222 "get_zone_info": false, 00:18:00.222 "zone_management": false, 00:18:00.222 "zone_append": false, 00:18:00.222 "compare": false, 00:18:00.222 "compare_and_write": false, 00:18:00.222 "abort": true, 00:18:00.222 "seek_hole": false, 00:18:00.222 "seek_data": false, 00:18:00.222 "copy": true, 00:18:00.222 "nvme_iov_md": false 00:18:00.222 }, 00:18:00.222 "memory_domains": [ 00:18:00.222 { 00:18:00.222 "dma_device_id": "system", 00:18:00.222 "dma_device_type": 1 00:18:00.222 }, 00:18:00.222 { 00:18:00.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.222 "dma_device_type": 2 00:18:00.222 } 00:18:00.222 ], 00:18:00.222 "driver_specific": {} 00:18:00.222 } 00:18:00.222 ] 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.222 11:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.480 11:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.480 "name": "Existed_Raid", 00:18:00.480 "uuid": "1207525d-9528-4847-8cbb-95553faf9176", 00:18:00.480 "strip_size_kb": 64, 00:18:00.480 "state": "configuring", 00:18:00.480 "raid_level": "raid0", 00:18:00.480 "superblock": true, 00:18:00.481 "num_base_bdevs": 3, 00:18:00.481 "num_base_bdevs_discovered": 1, 00:18:00.481 "num_base_bdevs_operational": 3, 00:18:00.481 "base_bdevs_list": [ 00:18:00.481 { 00:18:00.481 "name": "BaseBdev1", 00:18:00.481 "uuid": "78f57c7c-5f2b-4411-ac61-4dada197ac2a", 00:18:00.481 "is_configured": true, 00:18:00.481 "data_offset": 2048, 00:18:00.481 "data_size": 63488 00:18:00.481 }, 00:18:00.481 { 00:18:00.481 "name": "BaseBdev2", 00:18:00.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.481 "is_configured": false, 00:18:00.481 "data_offset": 0, 00:18:00.481 "data_size": 0 00:18:00.481 }, 00:18:00.481 { 00:18:00.481 "name": "BaseBdev3", 00:18:00.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.481 "is_configured": false, 00:18:00.481 "data_offset": 0, 00:18:00.481 "data_size": 0 00:18:00.481 } 00:18:00.481 ] 00:18:00.481 }' 00:18:00.481 11:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.481 11:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.067 11:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:01.329 [2024-07-13 11:29:35.961802] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:01.329 [2024-07-13 11:29:35.961849] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:18:01.329 11:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:01.587 [2024-07-13 11:29:36.229910] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.587 [2024-07-13 11:29:36.231889] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:01.587 [2024-07-13 11:29:36.231962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:01.587 [2024-07-13 11:29:36.231977] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:01.587 [2024-07-13 11:29:36.232018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:01.587 11:29:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.587 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:01.845 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:01.845 "name": "Existed_Raid", 00:18:01.845 "uuid": "f8d32e4d-40b7-43d0-bc76-0d0df0462412", 00:18:01.845 "strip_size_kb": 64, 00:18:01.845 "state": "configuring", 00:18:01.845 "raid_level": "raid0", 00:18:01.845 "superblock": true, 00:18:01.845 "num_base_bdevs": 3, 00:18:01.845 "num_base_bdevs_discovered": 1, 00:18:01.845 "num_base_bdevs_operational": 3, 00:18:01.845 "base_bdevs_list": [ 00:18:01.845 { 00:18:01.845 "name": "BaseBdev1", 00:18:01.845 "uuid": "78f57c7c-5f2b-4411-ac61-4dada197ac2a", 00:18:01.845 "is_configured": true, 00:18:01.845 "data_offset": 2048, 00:18:01.845 "data_size": 63488 00:18:01.845 }, 00:18:01.845 { 00:18:01.845 "name": "BaseBdev2", 00:18:01.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.845 "is_configured": false, 00:18:01.845 "data_offset": 0, 00:18:01.845 "data_size": 0 00:18:01.845 }, 00:18:01.845 { 00:18:01.845 "name": "BaseBdev3", 00:18:01.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.845 "is_configured": false, 00:18:01.845 "data_offset": 0, 00:18:01.845 "data_size": 0 00:18:01.845 } 00:18:01.845 ] 00:18:01.845 }' 00:18:01.845 11:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:01.845 11:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.412 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:02.670 [2024-07-13 11:29:37.357955] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:02.670 BaseBdev2 00:18:02.670 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:02.670 11:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:02.670 11:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:02.670 11:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local i 00:18:02.671 11:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:02.671 11:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:02.671 11:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:02.929 11:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:03.188 [ 00:18:03.188 { 00:18:03.188 "name": "BaseBdev2", 00:18:03.188 "aliases": [ 00:18:03.188 "8f312e77-673e-4745-bb10-907eab7ee1a7" 00:18:03.188 ], 00:18:03.188 "product_name": "Malloc disk", 00:18:03.188 "block_size": 512, 00:18:03.188 "num_blocks": 65536, 00:18:03.188 "uuid": "8f312e77-673e-4745-bb10-907eab7ee1a7", 00:18:03.188 "assigned_rate_limits": { 00:18:03.188 "rw_ios_per_sec": 0, 00:18:03.188 "rw_mbytes_per_sec": 0, 00:18:03.188 "r_mbytes_per_sec": 0, 00:18:03.188 "w_mbytes_per_sec": 0 00:18:03.188 }, 00:18:03.188 "claimed": true, 00:18:03.188 "claim_type": "exclusive_write", 00:18:03.188 "zoned": false, 00:18:03.188 "supported_io_types": { 00:18:03.188 "read": true, 00:18:03.188 "write": true, 00:18:03.188 "unmap": true, 00:18:03.188 "flush": true, 00:18:03.188 "reset": true, 00:18:03.188 "nvme_admin": false, 00:18:03.188 "nvme_io": false, 00:18:03.188 "nvme_io_md": false, 00:18:03.188 "write_zeroes": true, 00:18:03.188 "zcopy": true, 00:18:03.188 "get_zone_info": false, 00:18:03.188 "zone_management": false, 00:18:03.188 "zone_append": false, 00:18:03.188 "compare": false, 00:18:03.188 "compare_and_write": false, 00:18:03.188 "abort": true, 00:18:03.188 "seek_hole": false, 00:18:03.188 "seek_data": false, 00:18:03.188 "copy": true, 00:18:03.188 "nvme_iov_md": false 00:18:03.188 }, 00:18:03.188 "memory_domains": [ 00:18:03.188 { 00:18:03.188 "dma_device_id": "system", 00:18:03.188 "dma_device_type": 1 00:18:03.188 }, 00:18:03.188 { 00:18:03.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:03.188 "dma_device_type": 2 00:18:03.188 } 00:18:03.188 ], 00:18:03.188 "driver_specific": {} 00:18:03.188 } 00:18:03.188 ] 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.188 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.446 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.446 "name": "Existed_Raid", 00:18:03.446 "uuid": "f8d32e4d-40b7-43d0-bc76-0d0df0462412", 00:18:03.446 "strip_size_kb": 64, 00:18:03.446 "state": "configuring", 00:18:03.446 "raid_level": "raid0", 00:18:03.446 "superblock": true, 00:18:03.446 "num_base_bdevs": 3, 00:18:03.446 "num_base_bdevs_discovered": 2, 00:18:03.446 "num_base_bdevs_operational": 3, 00:18:03.446 "base_bdevs_list": [ 00:18:03.446 { 00:18:03.446 "name": "BaseBdev1", 00:18:03.446 "uuid": "78f57c7c-5f2b-4411-ac61-4dada197ac2a", 00:18:03.446 "is_configured": true, 00:18:03.446 "data_offset": 2048, 00:18:03.446 "data_size": 63488 00:18:03.446 }, 00:18:03.446 { 00:18:03.446 "name": "BaseBdev2", 00:18:03.446 "uuid": "8f312e77-673e-4745-bb10-907eab7ee1a7", 00:18:03.446 "is_configured": true, 00:18:03.446 "data_offset": 2048, 00:18:03.446 "data_size": 63488 00:18:03.446 }, 00:18:03.446 { 00:18:03.446 "name": "BaseBdev3", 00:18:03.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.446 "is_configured": false, 00:18:03.446 "data_offset": 0, 00:18:03.446 "data_size": 0 00:18:03.446 } 00:18:03.446 ] 00:18:03.446 }' 00:18:03.446 11:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.446 11:29:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.013 11:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:04.271 [2024-07-13 11:29:38.921749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:04.271 [2024-07-13 11:29:38.922031] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:18:04.271 [2024-07-13 11:29:38.922049] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:04.271 [2024-07-13 11:29:38.922225] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:18:04.271 BaseBdev3 00:18:04.271 [2024-07-13 11:29:38.922592] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:18:04.271 [2024-07-13 11:29:38.922620] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:18:04.271 [2024-07-13 11:29:38.922799] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.271 11:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:04.271 11:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:04.271 11:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:04.271 11:29:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:18:04.271 11:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:04.271 11:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:04.271 11:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:04.530 11:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:04.788 [ 00:18:04.788 { 00:18:04.788 "name": "BaseBdev3", 00:18:04.788 "aliases": [ 00:18:04.788 "45897383-7c7e-4469-a1a1-ba6e1c65fcee" 00:18:04.788 ], 00:18:04.788 "product_name": "Malloc disk", 00:18:04.788 "block_size": 512, 00:18:04.788 "num_blocks": 65536, 00:18:04.788 "uuid": "45897383-7c7e-4469-a1a1-ba6e1c65fcee", 00:18:04.788 "assigned_rate_limits": { 00:18:04.788 "rw_ios_per_sec": 0, 00:18:04.788 "rw_mbytes_per_sec": 0, 00:18:04.788 "r_mbytes_per_sec": 0, 00:18:04.788 "w_mbytes_per_sec": 0 00:18:04.788 }, 00:18:04.788 "claimed": true, 00:18:04.788 "claim_type": "exclusive_write", 00:18:04.788 "zoned": false, 00:18:04.788 "supported_io_types": { 00:18:04.788 "read": true, 00:18:04.789 "write": true, 00:18:04.789 "unmap": true, 00:18:04.789 "flush": true, 00:18:04.789 "reset": true, 00:18:04.789 "nvme_admin": false, 00:18:04.789 "nvme_io": false, 00:18:04.789 "nvme_io_md": false, 00:18:04.789 "write_zeroes": true, 00:18:04.789 "zcopy": true, 00:18:04.789 "get_zone_info": false, 00:18:04.789 "zone_management": false, 00:18:04.789 "zone_append": false, 00:18:04.789 "compare": false, 00:18:04.789 "compare_and_write": false, 00:18:04.789 "abort": true, 00:18:04.789 "seek_hole": false, 00:18:04.789 "seek_data": false, 00:18:04.789 "copy": true, 00:18:04.789 "nvme_iov_md": false 00:18:04.789 }, 00:18:04.789 "memory_domains": [ 00:18:04.789 { 00:18:04.789 "dma_device_id": "system", 00:18:04.789 "dma_device_type": 1 00:18:04.789 }, 00:18:04.789 { 00:18:04.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:04.789 "dma_device_type": 2 00:18:04.789 } 00:18:04.789 ], 00:18:04.789 "driver_specific": {} 00:18:04.789 } 00:18:04.789 ] 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.789 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.047 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.047 "name": "Existed_Raid", 00:18:05.047 "uuid": "f8d32e4d-40b7-43d0-bc76-0d0df0462412", 00:18:05.047 "strip_size_kb": 64, 00:18:05.047 "state": "online", 00:18:05.047 "raid_level": "raid0", 00:18:05.047 "superblock": true, 00:18:05.047 "num_base_bdevs": 3, 00:18:05.047 "num_base_bdevs_discovered": 3, 00:18:05.047 "num_base_bdevs_operational": 3, 00:18:05.047 "base_bdevs_list": [ 00:18:05.047 { 00:18:05.047 "name": "BaseBdev1", 00:18:05.047 "uuid": "78f57c7c-5f2b-4411-ac61-4dada197ac2a", 00:18:05.047 "is_configured": true, 00:18:05.047 "data_offset": 2048, 00:18:05.047 "data_size": 63488 00:18:05.047 }, 00:18:05.047 { 00:18:05.047 "name": "BaseBdev2", 00:18:05.047 "uuid": "8f312e77-673e-4745-bb10-907eab7ee1a7", 00:18:05.047 "is_configured": true, 00:18:05.047 "data_offset": 2048, 00:18:05.047 "data_size": 63488 00:18:05.047 }, 00:18:05.047 { 00:18:05.047 "name": "BaseBdev3", 00:18:05.047 "uuid": "45897383-7c7e-4469-a1a1-ba6e1c65fcee", 00:18:05.047 "is_configured": true, 00:18:05.047 "data_offset": 2048, 00:18:05.047 "data_size": 63488 00:18:05.047 } 00:18:05.047 ] 00:18:05.047 }' 00:18:05.047 11:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.047 11:29:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.614 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:05.614 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:05.614 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:05.614 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:05.614 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:05.614 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:05.614 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:05.614 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:05.873 [2024-07-13 11:29:40.590345] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.873 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:05.873 "name": "Existed_Raid", 00:18:05.873 "aliases": [ 00:18:05.873 "f8d32e4d-40b7-43d0-bc76-0d0df0462412" 00:18:05.873 ], 00:18:05.873 "product_name": "Raid Volume", 00:18:05.873 "block_size": 512, 00:18:05.873 "num_blocks": 190464, 00:18:05.873 "uuid": "f8d32e4d-40b7-43d0-bc76-0d0df0462412", 00:18:05.873 
"assigned_rate_limits": { 00:18:05.873 "rw_ios_per_sec": 0, 00:18:05.873 "rw_mbytes_per_sec": 0, 00:18:05.873 "r_mbytes_per_sec": 0, 00:18:05.873 "w_mbytes_per_sec": 0 00:18:05.873 }, 00:18:05.873 "claimed": false, 00:18:05.873 "zoned": false, 00:18:05.873 "supported_io_types": { 00:18:05.873 "read": true, 00:18:05.873 "write": true, 00:18:05.873 "unmap": true, 00:18:05.873 "flush": true, 00:18:05.873 "reset": true, 00:18:05.873 "nvme_admin": false, 00:18:05.873 "nvme_io": false, 00:18:05.873 "nvme_io_md": false, 00:18:05.873 "write_zeroes": true, 00:18:05.873 "zcopy": false, 00:18:05.873 "get_zone_info": false, 00:18:05.873 "zone_management": false, 00:18:05.873 "zone_append": false, 00:18:05.873 "compare": false, 00:18:05.873 "compare_and_write": false, 00:18:05.873 "abort": false, 00:18:05.873 "seek_hole": false, 00:18:05.873 "seek_data": false, 00:18:05.873 "copy": false, 00:18:05.873 "nvme_iov_md": false 00:18:05.873 }, 00:18:05.873 "memory_domains": [ 00:18:05.873 { 00:18:05.873 "dma_device_id": "system", 00:18:05.873 "dma_device_type": 1 00:18:05.873 }, 00:18:05.873 { 00:18:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.873 "dma_device_type": 2 00:18:05.873 }, 00:18:05.873 { 00:18:05.873 "dma_device_id": "system", 00:18:05.873 "dma_device_type": 1 00:18:05.873 }, 00:18:05.873 { 00:18:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.873 "dma_device_type": 2 00:18:05.873 }, 00:18:05.873 { 00:18:05.873 "dma_device_id": "system", 00:18:05.873 "dma_device_type": 1 00:18:05.873 }, 00:18:05.873 { 00:18:05.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.873 "dma_device_type": 2 00:18:05.873 } 00:18:05.873 ], 00:18:05.873 "driver_specific": { 00:18:05.873 "raid": { 00:18:05.873 "uuid": "f8d32e4d-40b7-43d0-bc76-0d0df0462412", 00:18:05.873 "strip_size_kb": 64, 00:18:05.873 "state": "online", 00:18:05.873 "raid_level": "raid0", 00:18:05.873 "superblock": true, 00:18:05.873 "num_base_bdevs": 3, 00:18:05.873 "num_base_bdevs_discovered": 3, 00:18:05.873 "num_base_bdevs_operational": 3, 00:18:05.873 "base_bdevs_list": [ 00:18:05.873 { 00:18:05.873 "name": "BaseBdev1", 00:18:05.873 "uuid": "78f57c7c-5f2b-4411-ac61-4dada197ac2a", 00:18:05.873 "is_configured": true, 00:18:05.873 "data_offset": 2048, 00:18:05.873 "data_size": 63488 00:18:05.873 }, 00:18:05.873 { 00:18:05.873 "name": "BaseBdev2", 00:18:05.873 "uuid": "8f312e77-673e-4745-bb10-907eab7ee1a7", 00:18:05.873 "is_configured": true, 00:18:05.873 "data_offset": 2048, 00:18:05.873 "data_size": 63488 00:18:05.873 }, 00:18:05.873 { 00:18:05.873 "name": "BaseBdev3", 00:18:05.873 "uuid": "45897383-7c7e-4469-a1a1-ba6e1c65fcee", 00:18:05.873 "is_configured": true, 00:18:05.873 "data_offset": 2048, 00:18:05.873 "data_size": 63488 00:18:05.873 } 00:18:05.873 ] 00:18:05.873 } 00:18:05.873 } 00:18:05.873 }' 00:18:05.873 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:06.132 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:06.132 BaseBdev2 00:18:06.132 BaseBdev3' 00:18:06.132 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.132 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:06.132 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- 
# jq '.[]' 00:18:06.132 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.132 "name": "BaseBdev1", 00:18:06.132 "aliases": [ 00:18:06.132 "78f57c7c-5f2b-4411-ac61-4dada197ac2a" 00:18:06.132 ], 00:18:06.132 "product_name": "Malloc disk", 00:18:06.132 "block_size": 512, 00:18:06.132 "num_blocks": 65536, 00:18:06.132 "uuid": "78f57c7c-5f2b-4411-ac61-4dada197ac2a", 00:18:06.132 "assigned_rate_limits": { 00:18:06.132 "rw_ios_per_sec": 0, 00:18:06.132 "rw_mbytes_per_sec": 0, 00:18:06.132 "r_mbytes_per_sec": 0, 00:18:06.132 "w_mbytes_per_sec": 0 00:18:06.132 }, 00:18:06.132 "claimed": true, 00:18:06.132 "claim_type": "exclusive_write", 00:18:06.132 "zoned": false, 00:18:06.132 "supported_io_types": { 00:18:06.132 "read": true, 00:18:06.132 "write": true, 00:18:06.132 "unmap": true, 00:18:06.132 "flush": true, 00:18:06.132 "reset": true, 00:18:06.132 "nvme_admin": false, 00:18:06.132 "nvme_io": false, 00:18:06.132 "nvme_io_md": false, 00:18:06.132 "write_zeroes": true, 00:18:06.132 "zcopy": true, 00:18:06.132 "get_zone_info": false, 00:18:06.132 "zone_management": false, 00:18:06.132 "zone_append": false, 00:18:06.132 "compare": false, 00:18:06.132 "compare_and_write": false, 00:18:06.132 "abort": true, 00:18:06.132 "seek_hole": false, 00:18:06.132 "seek_data": false, 00:18:06.132 "copy": true, 00:18:06.132 "nvme_iov_md": false 00:18:06.132 }, 00:18:06.132 "memory_domains": [ 00:18:06.132 { 00:18:06.132 "dma_device_id": "system", 00:18:06.132 "dma_device_type": 1 00:18:06.132 }, 00:18:06.132 { 00:18:06.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.132 "dma_device_type": 2 00:18:06.132 } 00:18:06.132 ], 00:18:06.132 "driver_specific": {} 00:18:06.132 }' 00:18:06.132 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.391 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.391 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.391 11:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.391 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.391 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.391 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.391 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.649 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.649 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.649 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.649 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.649 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.649 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:06.649 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.908 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.908 "name": "BaseBdev2", 
00:18:06.908 "aliases": [ 00:18:06.908 "8f312e77-673e-4745-bb10-907eab7ee1a7" 00:18:06.908 ], 00:18:06.908 "product_name": "Malloc disk", 00:18:06.908 "block_size": 512, 00:18:06.908 "num_blocks": 65536, 00:18:06.908 "uuid": "8f312e77-673e-4745-bb10-907eab7ee1a7", 00:18:06.908 "assigned_rate_limits": { 00:18:06.908 "rw_ios_per_sec": 0, 00:18:06.908 "rw_mbytes_per_sec": 0, 00:18:06.908 "r_mbytes_per_sec": 0, 00:18:06.908 "w_mbytes_per_sec": 0 00:18:06.908 }, 00:18:06.908 "claimed": true, 00:18:06.908 "claim_type": "exclusive_write", 00:18:06.908 "zoned": false, 00:18:06.908 "supported_io_types": { 00:18:06.908 "read": true, 00:18:06.908 "write": true, 00:18:06.908 "unmap": true, 00:18:06.908 "flush": true, 00:18:06.908 "reset": true, 00:18:06.908 "nvme_admin": false, 00:18:06.908 "nvme_io": false, 00:18:06.908 "nvme_io_md": false, 00:18:06.908 "write_zeroes": true, 00:18:06.908 "zcopy": true, 00:18:06.908 "get_zone_info": false, 00:18:06.908 "zone_management": false, 00:18:06.908 "zone_append": false, 00:18:06.908 "compare": false, 00:18:06.908 "compare_and_write": false, 00:18:06.908 "abort": true, 00:18:06.908 "seek_hole": false, 00:18:06.908 "seek_data": false, 00:18:06.908 "copy": true, 00:18:06.908 "nvme_iov_md": false 00:18:06.908 }, 00:18:06.908 "memory_domains": [ 00:18:06.908 { 00:18:06.908 "dma_device_id": "system", 00:18:06.908 "dma_device_type": 1 00:18:06.908 }, 00:18:06.908 { 00:18:06.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.908 "dma_device_type": 2 00:18:06.908 } 00:18:06.908 ], 00:18:06.908 "driver_specific": {} 00:18:06.908 }' 00:18:06.908 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.908 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.166 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:07.166 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.166 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.166 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:07.166 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.166 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.166 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.166 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.425 11:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.425 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.425 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:07.425 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:07.425 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:07.684 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:07.684 "name": "BaseBdev3", 00:18:07.684 "aliases": [ 00:18:07.684 "45897383-7c7e-4469-a1a1-ba6e1c65fcee" 00:18:07.684 ], 00:18:07.684 "product_name": "Malloc disk", 00:18:07.684 
"block_size": 512, 00:18:07.684 "num_blocks": 65536, 00:18:07.684 "uuid": "45897383-7c7e-4469-a1a1-ba6e1c65fcee", 00:18:07.684 "assigned_rate_limits": { 00:18:07.684 "rw_ios_per_sec": 0, 00:18:07.684 "rw_mbytes_per_sec": 0, 00:18:07.684 "r_mbytes_per_sec": 0, 00:18:07.684 "w_mbytes_per_sec": 0 00:18:07.684 }, 00:18:07.684 "claimed": true, 00:18:07.684 "claim_type": "exclusive_write", 00:18:07.684 "zoned": false, 00:18:07.684 "supported_io_types": { 00:18:07.684 "read": true, 00:18:07.684 "write": true, 00:18:07.684 "unmap": true, 00:18:07.684 "flush": true, 00:18:07.684 "reset": true, 00:18:07.684 "nvme_admin": false, 00:18:07.684 "nvme_io": false, 00:18:07.684 "nvme_io_md": false, 00:18:07.684 "write_zeroes": true, 00:18:07.684 "zcopy": true, 00:18:07.684 "get_zone_info": false, 00:18:07.684 "zone_management": false, 00:18:07.684 "zone_append": false, 00:18:07.684 "compare": false, 00:18:07.684 "compare_and_write": false, 00:18:07.684 "abort": true, 00:18:07.684 "seek_hole": false, 00:18:07.684 "seek_data": false, 00:18:07.684 "copy": true, 00:18:07.684 "nvme_iov_md": false 00:18:07.684 }, 00:18:07.684 "memory_domains": [ 00:18:07.684 { 00:18:07.684 "dma_device_id": "system", 00:18:07.684 "dma_device_type": 1 00:18:07.684 }, 00:18:07.684 { 00:18:07.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.684 "dma_device_type": 2 00:18:07.684 } 00:18:07.684 ], 00:18:07.684 "driver_specific": {} 00:18:07.684 }' 00:18:07.684 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.684 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.684 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:07.684 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.942 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.942 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:07.942 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.942 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.942 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.942 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.942 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:08.201 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:08.201 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:08.201 [2024-07-13 11:29:42.906642] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:08.201 [2024-07-13 11:29:42.906672] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.201 [2024-07-13 11:29:42.906741] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.458 11:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.716 11:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.716 "name": "Existed_Raid", 00:18:08.716 "uuid": "f8d32e4d-40b7-43d0-bc76-0d0df0462412", 00:18:08.716 "strip_size_kb": 64, 00:18:08.716 "state": "offline", 00:18:08.716 "raid_level": "raid0", 00:18:08.716 "superblock": true, 00:18:08.716 "num_base_bdevs": 3, 00:18:08.716 "num_base_bdevs_discovered": 2, 00:18:08.716 "num_base_bdevs_operational": 2, 00:18:08.716 "base_bdevs_list": [ 00:18:08.716 { 00:18:08.716 "name": null, 00:18:08.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.716 "is_configured": false, 00:18:08.716 "data_offset": 2048, 00:18:08.717 "data_size": 63488 00:18:08.717 }, 00:18:08.717 { 00:18:08.717 "name": "BaseBdev2", 00:18:08.717 "uuid": "8f312e77-673e-4745-bb10-907eab7ee1a7", 00:18:08.717 "is_configured": true, 00:18:08.717 "data_offset": 2048, 00:18:08.717 "data_size": 63488 00:18:08.717 }, 00:18:08.717 { 00:18:08.717 "name": "BaseBdev3", 00:18:08.717 "uuid": "45897383-7c7e-4469-a1a1-ba6e1c65fcee", 00:18:08.717 "is_configured": true, 00:18:08.717 "data_offset": 2048, 00:18:08.717 "data_size": 63488 00:18:08.717 } 00:18:08.717 ] 00:18:08.717 }' 00:18:08.717 11:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.717 11:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.282 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:09.282 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:09.282 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:09.282 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:09.540 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:09.540 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:09.540 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:09.797 [2024-07-13 11:29:44.535329] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:10.054 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:10.054 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:10.054 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.054 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:10.311 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:10.311 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:10.311 11:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:10.311 [2024-07-13 11:29:45.042329] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:10.311 [2024-07-13 11:29:45.042391] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:10.568 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:10.825 BaseBdev2 00:18:10.825 11:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:10.825 11:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:10.825 11:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:10.825 11:29:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:18:10.825 11:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:10.825 11:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:10.825 11:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.084 11:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:11.342 [ 00:18:11.342 { 00:18:11.342 "name": "BaseBdev2", 00:18:11.342 "aliases": [ 00:18:11.342 "bd0116f5-fbcf-451e-8515-f0898d86efc3" 00:18:11.342 ], 00:18:11.342 "product_name": "Malloc disk", 00:18:11.342 "block_size": 512, 00:18:11.342 "num_blocks": 65536, 00:18:11.342 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:11.342 "assigned_rate_limits": { 00:18:11.342 "rw_ios_per_sec": 0, 00:18:11.342 "rw_mbytes_per_sec": 0, 00:18:11.342 "r_mbytes_per_sec": 0, 00:18:11.342 "w_mbytes_per_sec": 0 00:18:11.342 }, 00:18:11.342 "claimed": false, 00:18:11.342 "zoned": false, 00:18:11.342 "supported_io_types": { 00:18:11.342 "read": true, 00:18:11.342 "write": true, 00:18:11.342 "unmap": true, 00:18:11.342 "flush": true, 00:18:11.342 "reset": true, 00:18:11.342 "nvme_admin": false, 00:18:11.342 "nvme_io": false, 00:18:11.342 "nvme_io_md": false, 00:18:11.342 "write_zeroes": true, 00:18:11.342 "zcopy": true, 00:18:11.342 "get_zone_info": false, 00:18:11.342 "zone_management": false, 00:18:11.342 "zone_append": false, 00:18:11.342 "compare": false, 00:18:11.342 "compare_and_write": false, 00:18:11.342 "abort": true, 00:18:11.342 "seek_hole": false, 00:18:11.342 "seek_data": false, 00:18:11.342 "copy": true, 00:18:11.342 "nvme_iov_md": false 00:18:11.342 }, 00:18:11.342 "memory_domains": [ 00:18:11.342 { 00:18:11.342 "dma_device_id": "system", 00:18:11.342 "dma_device_type": 1 00:18:11.342 }, 00:18:11.342 { 00:18:11.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:11.342 "dma_device_type": 2 00:18:11.342 } 00:18:11.342 ], 00:18:11.342 "driver_specific": {} 00:18:11.342 } 00:18:11.342 ] 00:18:11.342 11:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:11.342 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:11.342 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:11.342 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:11.599 BaseBdev3 00:18:11.599 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:11.599 11:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:11.599 11:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:11.599 11:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:11.599 11:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:11.599 11:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:11.599 11:29:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:11.856 11:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:12.113 [ 00:18:12.113 { 00:18:12.113 "name": "BaseBdev3", 00:18:12.113 "aliases": [ 00:18:12.113 "82b3e8bd-da42-42ec-ac19-4b9db94b1514" 00:18:12.113 ], 00:18:12.113 "product_name": "Malloc disk", 00:18:12.113 "block_size": 512, 00:18:12.113 "num_blocks": 65536, 00:18:12.113 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:12.113 "assigned_rate_limits": { 00:18:12.113 "rw_ios_per_sec": 0, 00:18:12.113 "rw_mbytes_per_sec": 0, 00:18:12.113 "r_mbytes_per_sec": 0, 00:18:12.113 "w_mbytes_per_sec": 0 00:18:12.113 }, 00:18:12.113 "claimed": false, 00:18:12.113 "zoned": false, 00:18:12.113 "supported_io_types": { 00:18:12.113 "read": true, 00:18:12.113 "write": true, 00:18:12.113 "unmap": true, 00:18:12.113 "flush": true, 00:18:12.113 "reset": true, 00:18:12.113 "nvme_admin": false, 00:18:12.113 "nvme_io": false, 00:18:12.113 "nvme_io_md": false, 00:18:12.113 "write_zeroes": true, 00:18:12.113 "zcopy": true, 00:18:12.113 "get_zone_info": false, 00:18:12.113 "zone_management": false, 00:18:12.113 "zone_append": false, 00:18:12.113 "compare": false, 00:18:12.113 "compare_and_write": false, 00:18:12.113 "abort": true, 00:18:12.113 "seek_hole": false, 00:18:12.113 "seek_data": false, 00:18:12.114 "copy": true, 00:18:12.114 "nvme_iov_md": false 00:18:12.114 }, 00:18:12.114 "memory_domains": [ 00:18:12.114 { 00:18:12.114 "dma_device_id": "system", 00:18:12.114 "dma_device_type": 1 00:18:12.114 }, 00:18:12.114 { 00:18:12.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.114 "dma_device_type": 2 00:18:12.114 } 00:18:12.114 ], 00:18:12.114 "driver_specific": {} 00:18:12.114 } 00:18:12.114 ] 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:12.114 [2024-07-13 11:29:46.789564] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:12.114 [2024-07-13 11:29:46.789646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:12.114 [2024-07-13 11:29:46.789693] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:12.114 [2024-07-13 11:29:46.791595] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.114 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.371 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.371 "name": "Existed_Raid", 00:18:12.371 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:12.371 "strip_size_kb": 64, 00:18:12.371 "state": "configuring", 00:18:12.371 "raid_level": "raid0", 00:18:12.371 "superblock": true, 00:18:12.371 "num_base_bdevs": 3, 00:18:12.371 "num_base_bdevs_discovered": 2, 00:18:12.371 "num_base_bdevs_operational": 3, 00:18:12.371 "base_bdevs_list": [ 00:18:12.371 { 00:18:12.371 "name": "BaseBdev1", 00:18:12.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.371 "is_configured": false, 00:18:12.371 "data_offset": 0, 00:18:12.371 "data_size": 0 00:18:12.371 }, 00:18:12.371 { 00:18:12.371 "name": "BaseBdev2", 00:18:12.371 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:12.371 "is_configured": true, 00:18:12.371 "data_offset": 2048, 00:18:12.371 "data_size": 63488 00:18:12.371 }, 00:18:12.371 { 00:18:12.371 "name": "BaseBdev3", 00:18:12.371 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:12.371 "is_configured": true, 00:18:12.371 "data_offset": 2048, 00:18:12.371 "data_size": 63488 00:18:12.371 } 00:18:12.371 ] 00:18:12.371 }' 00:18:12.372 11:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.372 11:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:13.306 [2024-07-13 11:29:47.885677] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:13.306 11:29:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.306 11:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.564 11:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.565 "name": "Existed_Raid", 00:18:13.565 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:13.565 "strip_size_kb": 64, 00:18:13.565 "state": "configuring", 00:18:13.565 "raid_level": "raid0", 00:18:13.565 "superblock": true, 00:18:13.565 "num_base_bdevs": 3, 00:18:13.565 "num_base_bdevs_discovered": 1, 00:18:13.565 "num_base_bdevs_operational": 3, 00:18:13.565 "base_bdevs_list": [ 00:18:13.565 { 00:18:13.565 "name": "BaseBdev1", 00:18:13.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.565 "is_configured": false, 00:18:13.565 "data_offset": 0, 00:18:13.565 "data_size": 0 00:18:13.565 }, 00:18:13.565 { 00:18:13.565 "name": null, 00:18:13.565 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:13.565 "is_configured": false, 00:18:13.565 "data_offset": 2048, 00:18:13.565 "data_size": 63488 00:18:13.565 }, 00:18:13.565 { 00:18:13.565 "name": "BaseBdev3", 00:18:13.565 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:13.565 "is_configured": true, 00:18:13.565 "data_offset": 2048, 00:18:13.565 "data_size": 63488 00:18:13.565 } 00:18:13.565 ] 00:18:13.565 }' 00:18:13.565 11:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.565 11:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:14.131 11:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.131 11:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:14.389 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:14.389 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:14.647 [2024-07-13 11:29:49.227455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.647 BaseBdev1 00:18:14.647 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:14.647 11:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:14.647 11:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:14.647 11:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:14.647 11:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:14.647 11:29:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:14.647 11:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:14.905 11:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:15.162 [ 00:18:15.162 { 00:18:15.162 "name": "BaseBdev1", 00:18:15.162 "aliases": [ 00:18:15.162 "bb22e678-f211-47ca-b522-9d40868654d0" 00:18:15.162 ], 00:18:15.162 "product_name": "Malloc disk", 00:18:15.162 "block_size": 512, 00:18:15.162 "num_blocks": 65536, 00:18:15.162 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:15.162 "assigned_rate_limits": { 00:18:15.162 "rw_ios_per_sec": 0, 00:18:15.162 "rw_mbytes_per_sec": 0, 00:18:15.162 "r_mbytes_per_sec": 0, 00:18:15.162 "w_mbytes_per_sec": 0 00:18:15.162 }, 00:18:15.162 "claimed": true, 00:18:15.162 "claim_type": "exclusive_write", 00:18:15.162 "zoned": false, 00:18:15.162 "supported_io_types": { 00:18:15.162 "read": true, 00:18:15.162 "write": true, 00:18:15.162 "unmap": true, 00:18:15.162 "flush": true, 00:18:15.162 "reset": true, 00:18:15.162 "nvme_admin": false, 00:18:15.162 "nvme_io": false, 00:18:15.162 "nvme_io_md": false, 00:18:15.162 "write_zeroes": true, 00:18:15.162 "zcopy": true, 00:18:15.162 "get_zone_info": false, 00:18:15.162 "zone_management": false, 00:18:15.162 "zone_append": false, 00:18:15.162 "compare": false, 00:18:15.162 "compare_and_write": false, 00:18:15.162 "abort": true, 00:18:15.162 "seek_hole": false, 00:18:15.162 "seek_data": false, 00:18:15.162 "copy": true, 00:18:15.162 "nvme_iov_md": false 00:18:15.162 }, 00:18:15.162 "memory_domains": [ 00:18:15.162 { 00:18:15.162 "dma_device_id": "system", 00:18:15.162 "dma_device_type": 1 00:18:15.162 }, 00:18:15.162 { 00:18:15.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.162 "dma_device_type": 2 00:18:15.162 } 00:18:15.162 ], 00:18:15.162 "driver_specific": {} 00:18:15.162 } 00:18:15.162 ] 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.162 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.421 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.421 "name": "Existed_Raid", 00:18:15.421 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:15.421 "strip_size_kb": 64, 00:18:15.421 "state": "configuring", 00:18:15.421 "raid_level": "raid0", 00:18:15.421 "superblock": true, 00:18:15.421 "num_base_bdevs": 3, 00:18:15.421 "num_base_bdevs_discovered": 2, 00:18:15.421 "num_base_bdevs_operational": 3, 00:18:15.421 "base_bdevs_list": [ 00:18:15.421 { 00:18:15.421 "name": "BaseBdev1", 00:18:15.421 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:15.421 "is_configured": true, 00:18:15.421 "data_offset": 2048, 00:18:15.421 "data_size": 63488 00:18:15.421 }, 00:18:15.421 { 00:18:15.421 "name": null, 00:18:15.421 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:15.421 "is_configured": false, 00:18:15.421 "data_offset": 2048, 00:18:15.421 "data_size": 63488 00:18:15.421 }, 00:18:15.421 { 00:18:15.421 "name": "BaseBdev3", 00:18:15.421 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:15.421 "is_configured": true, 00:18:15.421 "data_offset": 2048, 00:18:15.421 "data_size": 63488 00:18:15.421 } 00:18:15.421 ] 00:18:15.421 }' 00:18:15.421 11:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.421 11:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.986 11:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.986 11:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:16.245 11:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:16.245 11:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:16.503 [2024-07-13 11:29:51.079825] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.503 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.763 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.763 "name": "Existed_Raid", 00:18:16.763 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:16.763 "strip_size_kb": 64, 00:18:16.763 "state": "configuring", 00:18:16.763 "raid_level": "raid0", 00:18:16.763 "superblock": true, 00:18:16.763 "num_base_bdevs": 3, 00:18:16.763 "num_base_bdevs_discovered": 1, 00:18:16.763 "num_base_bdevs_operational": 3, 00:18:16.763 "base_bdevs_list": [ 00:18:16.763 { 00:18:16.763 "name": "BaseBdev1", 00:18:16.763 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:16.763 "is_configured": true, 00:18:16.763 "data_offset": 2048, 00:18:16.763 "data_size": 63488 00:18:16.763 }, 00:18:16.763 { 00:18:16.763 "name": null, 00:18:16.763 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:16.763 "is_configured": false, 00:18:16.763 "data_offset": 2048, 00:18:16.763 "data_size": 63488 00:18:16.763 }, 00:18:16.763 { 00:18:16.763 "name": null, 00:18:16.763 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:16.763 "is_configured": false, 00:18:16.763 "data_offset": 2048, 00:18:16.763 "data_size": 63488 00:18:16.763 } 00:18:16.763 ] 00:18:16.763 }' 00:18:16.763 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.763 11:29:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:17.329 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.329 11:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:17.587 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:17.587 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:17.845 [2024-07-13 11:29:52.440057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.845 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:17.845 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:17.845 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:17.845 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:17.845 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:17.845 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:17.845 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.845 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.846 11:29:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.846 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.846 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.846 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.104 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.104 "name": "Existed_Raid", 00:18:18.104 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:18.104 "strip_size_kb": 64, 00:18:18.104 "state": "configuring", 00:18:18.104 "raid_level": "raid0", 00:18:18.104 "superblock": true, 00:18:18.104 "num_base_bdevs": 3, 00:18:18.104 "num_base_bdevs_discovered": 2, 00:18:18.104 "num_base_bdevs_operational": 3, 00:18:18.104 "base_bdevs_list": [ 00:18:18.104 { 00:18:18.104 "name": "BaseBdev1", 00:18:18.104 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:18.104 "is_configured": true, 00:18:18.104 "data_offset": 2048, 00:18:18.104 "data_size": 63488 00:18:18.104 }, 00:18:18.104 { 00:18:18.104 "name": null, 00:18:18.104 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:18.104 "is_configured": false, 00:18:18.104 "data_offset": 2048, 00:18:18.104 "data_size": 63488 00:18:18.104 }, 00:18:18.104 { 00:18:18.104 "name": "BaseBdev3", 00:18:18.104 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:18.104 "is_configured": true, 00:18:18.104 "data_offset": 2048, 00:18:18.104 "data_size": 63488 00:18:18.104 } 00:18:18.104 ] 00:18:18.104 }' 00:18:18.104 11:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.104 11:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:18.669 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.669 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:18.927 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:18.927 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:19.186 [2024-07-13 11:29:53.680326] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:19.186 11:29:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.186 11:29:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.443 11:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:19.443 "name": "Existed_Raid", 00:18:19.443 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:19.443 "strip_size_kb": 64, 00:18:19.443 "state": "configuring", 00:18:19.443 "raid_level": "raid0", 00:18:19.443 "superblock": true, 00:18:19.443 "num_base_bdevs": 3, 00:18:19.443 "num_base_bdevs_discovered": 1, 00:18:19.443 "num_base_bdevs_operational": 3, 00:18:19.443 "base_bdevs_list": [ 00:18:19.443 { 00:18:19.443 "name": null, 00:18:19.443 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:19.443 "is_configured": false, 00:18:19.443 "data_offset": 2048, 00:18:19.443 "data_size": 63488 00:18:19.443 }, 00:18:19.443 { 00:18:19.443 "name": null, 00:18:19.443 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:19.443 "is_configured": false, 00:18:19.443 "data_offset": 2048, 00:18:19.443 "data_size": 63488 00:18:19.443 }, 00:18:19.443 { 00:18:19.443 "name": "BaseBdev3", 00:18:19.444 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:19.444 "is_configured": true, 00:18:19.444 "data_offset": 2048, 00:18:19.444 "data_size": 63488 00:18:19.444 } 00:18:19.444 ] 00:18:19.444 }' 00:18:19.444 11:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:19.444 11:29:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:20.009 11:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.009 11:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:20.267 11:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:20.267 11:29:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:20.526 [2024-07-13 11:29:55.099094] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.526 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.785 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.785 "name": "Existed_Raid", 00:18:20.785 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:20.785 "strip_size_kb": 64, 00:18:20.785 "state": "configuring", 00:18:20.785 "raid_level": "raid0", 00:18:20.785 "superblock": true, 00:18:20.785 "num_base_bdevs": 3, 00:18:20.785 "num_base_bdevs_discovered": 2, 00:18:20.785 "num_base_bdevs_operational": 3, 00:18:20.785 "base_bdevs_list": [ 00:18:20.785 { 00:18:20.785 "name": null, 00:18:20.785 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:20.785 "is_configured": false, 00:18:20.785 "data_offset": 2048, 00:18:20.785 "data_size": 63488 00:18:20.785 }, 00:18:20.785 { 00:18:20.785 "name": "BaseBdev2", 00:18:20.785 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:20.785 "is_configured": true, 00:18:20.785 "data_offset": 2048, 00:18:20.785 "data_size": 63488 00:18:20.785 }, 00:18:20.785 { 00:18:20.785 "name": "BaseBdev3", 00:18:20.785 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:20.785 "is_configured": true, 00:18:20.785 "data_offset": 2048, 00:18:20.785 "data_size": 63488 00:18:20.785 } 00:18:20.785 ] 00:18:20.785 }' 00:18:20.785 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.785 11:29:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.353 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.353 11:29:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:21.612 11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:21.612 11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:21.612 11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.870 11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bb22e678-f211-47ca-b522-9d40868654d0 00:18:22.129 [2024-07-13 11:29:56.662075] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:22.129 [2024-07-13 11:29:56.662280] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:18:22.129 [2024-07-13 11:29:56.662295] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:18:22.129 [2024-07-13 11:29:56.662399] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:22.129 NewBaseBdev 00:18:22.129 [2024-07-13 11:29:56.662698] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:18:22.129 [2024-07-13 11:29:56.662731] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:18:22.129 [2024-07-13 11:29:56.662893] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.129 11:29:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:22.129 11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:22.129 11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:22.129 11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:22.129 11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:22.129 11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:22.129 11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.129 11:29:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:22.388 [ 00:18:22.388 { 00:18:22.388 "name": "NewBaseBdev", 00:18:22.389 "aliases": [ 00:18:22.389 "bb22e678-f211-47ca-b522-9d40868654d0" 00:18:22.389 ], 00:18:22.389 "product_name": "Malloc disk", 00:18:22.389 "block_size": 512, 00:18:22.389 "num_blocks": 65536, 00:18:22.389 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:22.389 "assigned_rate_limits": { 00:18:22.389 "rw_ios_per_sec": 0, 00:18:22.389 "rw_mbytes_per_sec": 0, 00:18:22.389 "r_mbytes_per_sec": 0, 00:18:22.389 "w_mbytes_per_sec": 0 00:18:22.389 }, 00:18:22.389 "claimed": true, 00:18:22.389 "claim_type": "exclusive_write", 00:18:22.389 "zoned": false, 00:18:22.389 "supported_io_types": { 00:18:22.389 "read": true, 00:18:22.389 "write": true, 00:18:22.389 "unmap": true, 00:18:22.389 "flush": true, 00:18:22.389 "reset": true, 00:18:22.389 "nvme_admin": false, 00:18:22.389 "nvme_io": false, 00:18:22.389 "nvme_io_md": false, 00:18:22.389 "write_zeroes": true, 00:18:22.389 "zcopy": true, 00:18:22.389 "get_zone_info": false, 00:18:22.389 "zone_management": false, 00:18:22.389 "zone_append": false, 00:18:22.389 "compare": false, 00:18:22.389 "compare_and_write": false, 00:18:22.389 "abort": true, 00:18:22.389 "seek_hole": false, 00:18:22.389 "seek_data": false, 00:18:22.389 "copy": true, 00:18:22.389 "nvme_iov_md": false 00:18:22.389 }, 00:18:22.389 "memory_domains": [ 00:18:22.389 { 00:18:22.389 "dma_device_id": "system", 00:18:22.389 "dma_device_type": 1 00:18:22.389 }, 00:18:22.389 { 00:18:22.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.389 "dma_device_type": 2 00:18:22.389 } 00:18:22.389 ], 00:18:22.389 "driver_specific": {} 00:18:22.389 } 00:18:22.389 ] 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.389 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.647 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:22.647 "name": "Existed_Raid", 00:18:22.647 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:22.647 "strip_size_kb": 64, 00:18:22.647 "state": "online", 00:18:22.647 "raid_level": "raid0", 00:18:22.647 "superblock": true, 00:18:22.647 "num_base_bdevs": 3, 00:18:22.647 "num_base_bdevs_discovered": 3, 00:18:22.647 "num_base_bdevs_operational": 3, 00:18:22.647 "base_bdevs_list": [ 00:18:22.647 { 00:18:22.647 "name": "NewBaseBdev", 00:18:22.647 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:22.647 "is_configured": true, 00:18:22.647 "data_offset": 2048, 00:18:22.647 "data_size": 63488 00:18:22.647 }, 00:18:22.647 { 00:18:22.647 "name": "BaseBdev2", 00:18:22.647 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:22.647 "is_configured": true, 00:18:22.647 "data_offset": 2048, 00:18:22.647 "data_size": 63488 00:18:22.647 }, 00:18:22.647 { 00:18:22.647 "name": "BaseBdev3", 00:18:22.647 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:22.647 "is_configured": true, 00:18:22.647 "data_offset": 2048, 00:18:22.647 "data_size": 63488 00:18:22.647 } 00:18:22.647 ] 00:18:22.647 }' 00:18:22.647 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:22.647 11:29:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.215 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:23.215 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:23.215 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:23.215 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:23.215 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:23.215 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:23.215 11:29:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:23.215 11:29:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:23.473 [2024-07-13 11:29:58.038558] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:23.473 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:23.473 "name": "Existed_Raid", 00:18:23.473 "aliases": [ 00:18:23.473 "ea57d3dc-536e-412b-b7d5-a90d22c8a044" 00:18:23.473 ], 00:18:23.473 "product_name": "Raid Volume", 00:18:23.473 "block_size": 512, 00:18:23.473 "num_blocks": 190464, 00:18:23.473 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:23.473 "assigned_rate_limits": { 00:18:23.473 "rw_ios_per_sec": 0, 00:18:23.473 "rw_mbytes_per_sec": 0, 00:18:23.473 "r_mbytes_per_sec": 0, 00:18:23.473 "w_mbytes_per_sec": 0 00:18:23.473 }, 00:18:23.473 "claimed": false, 00:18:23.473 "zoned": false, 00:18:23.473 "supported_io_types": { 00:18:23.473 "read": true, 00:18:23.473 "write": true, 00:18:23.473 "unmap": true, 00:18:23.473 "flush": true, 00:18:23.473 "reset": true, 00:18:23.473 "nvme_admin": false, 00:18:23.473 "nvme_io": false, 00:18:23.473 "nvme_io_md": false, 00:18:23.473 "write_zeroes": true, 00:18:23.473 "zcopy": false, 00:18:23.473 "get_zone_info": false, 00:18:23.473 "zone_management": false, 00:18:23.473 "zone_append": false, 00:18:23.473 "compare": false, 00:18:23.473 "compare_and_write": false, 00:18:23.473 "abort": false, 00:18:23.473 "seek_hole": false, 00:18:23.473 "seek_data": false, 00:18:23.473 "copy": false, 00:18:23.473 "nvme_iov_md": false 00:18:23.473 }, 00:18:23.474 "memory_domains": [ 00:18:23.474 { 00:18:23.474 "dma_device_id": "system", 00:18:23.474 "dma_device_type": 1 00:18:23.474 }, 00:18:23.474 { 00:18:23.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.474 "dma_device_type": 2 00:18:23.474 }, 00:18:23.474 { 00:18:23.474 "dma_device_id": "system", 00:18:23.474 "dma_device_type": 1 00:18:23.474 }, 00:18:23.474 { 00:18:23.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.474 "dma_device_type": 2 00:18:23.474 }, 00:18:23.474 { 00:18:23.474 "dma_device_id": "system", 00:18:23.474 "dma_device_type": 1 00:18:23.474 }, 00:18:23.474 { 00:18:23.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.474 "dma_device_type": 2 00:18:23.474 } 00:18:23.474 ], 00:18:23.474 "driver_specific": { 00:18:23.474 "raid": { 00:18:23.474 "uuid": "ea57d3dc-536e-412b-b7d5-a90d22c8a044", 00:18:23.474 "strip_size_kb": 64, 00:18:23.474 "state": "online", 00:18:23.474 "raid_level": "raid0", 00:18:23.474 "superblock": true, 00:18:23.474 "num_base_bdevs": 3, 00:18:23.474 "num_base_bdevs_discovered": 3, 00:18:23.474 "num_base_bdevs_operational": 3, 00:18:23.474 "base_bdevs_list": [ 00:18:23.474 { 00:18:23.474 "name": "NewBaseBdev", 00:18:23.474 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:23.474 "is_configured": true, 00:18:23.474 "data_offset": 2048, 00:18:23.474 "data_size": 63488 00:18:23.474 }, 00:18:23.474 { 00:18:23.474 "name": "BaseBdev2", 00:18:23.474 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:23.474 "is_configured": true, 00:18:23.474 "data_offset": 2048, 00:18:23.474 "data_size": 63488 00:18:23.474 }, 00:18:23.474 { 00:18:23.474 "name": "BaseBdev3", 00:18:23.474 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:23.474 "is_configured": true, 00:18:23.474 "data_offset": 2048, 00:18:23.474 "data_size": 
63488 00:18:23.474 } 00:18:23.474 ] 00:18:23.474 } 00:18:23.474 } 00:18:23.474 }' 00:18:23.474 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:23.474 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:23.474 BaseBdev2 00:18:23.474 BaseBdev3' 00:18:23.474 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:23.474 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:23.474 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:23.733 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:23.733 "name": "NewBaseBdev", 00:18:23.733 "aliases": [ 00:18:23.733 "bb22e678-f211-47ca-b522-9d40868654d0" 00:18:23.733 ], 00:18:23.733 "product_name": "Malloc disk", 00:18:23.733 "block_size": 512, 00:18:23.733 "num_blocks": 65536, 00:18:23.733 "uuid": "bb22e678-f211-47ca-b522-9d40868654d0", 00:18:23.733 "assigned_rate_limits": { 00:18:23.733 "rw_ios_per_sec": 0, 00:18:23.733 "rw_mbytes_per_sec": 0, 00:18:23.733 "r_mbytes_per_sec": 0, 00:18:23.733 "w_mbytes_per_sec": 0 00:18:23.733 }, 00:18:23.733 "claimed": true, 00:18:23.733 "claim_type": "exclusive_write", 00:18:23.733 "zoned": false, 00:18:23.733 "supported_io_types": { 00:18:23.733 "read": true, 00:18:23.733 "write": true, 00:18:23.733 "unmap": true, 00:18:23.733 "flush": true, 00:18:23.733 "reset": true, 00:18:23.733 "nvme_admin": false, 00:18:23.733 "nvme_io": false, 00:18:23.733 "nvme_io_md": false, 00:18:23.733 "write_zeroes": true, 00:18:23.733 "zcopy": true, 00:18:23.733 "get_zone_info": false, 00:18:23.733 "zone_management": false, 00:18:23.733 "zone_append": false, 00:18:23.733 "compare": false, 00:18:23.733 "compare_and_write": false, 00:18:23.733 "abort": true, 00:18:23.733 "seek_hole": false, 00:18:23.733 "seek_data": false, 00:18:23.733 "copy": true, 00:18:23.733 "nvme_iov_md": false 00:18:23.733 }, 00:18:23.733 "memory_domains": [ 00:18:23.733 { 00:18:23.733 "dma_device_id": "system", 00:18:23.733 "dma_device_type": 1 00:18:23.733 }, 00:18:23.733 { 00:18:23.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:23.733 "dma_device_type": 2 00:18:23.733 } 00:18:23.733 ], 00:18:23.733 "driver_specific": {} 00:18:23.733 }' 00:18:23.733 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.733 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:23.733 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:23.733 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.733 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:23.991 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:23.991 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:23.991 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:23.991 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:23.991 11:29:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:23.991 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.250 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:24.250 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:24.250 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:24.250 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:24.250 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:24.250 "name": "BaseBdev2", 00:18:24.250 "aliases": [ 00:18:24.250 "bd0116f5-fbcf-451e-8515-f0898d86efc3" 00:18:24.250 ], 00:18:24.250 "product_name": "Malloc disk", 00:18:24.250 "block_size": 512, 00:18:24.250 "num_blocks": 65536, 00:18:24.250 "uuid": "bd0116f5-fbcf-451e-8515-f0898d86efc3", 00:18:24.250 "assigned_rate_limits": { 00:18:24.250 "rw_ios_per_sec": 0, 00:18:24.250 "rw_mbytes_per_sec": 0, 00:18:24.250 "r_mbytes_per_sec": 0, 00:18:24.250 "w_mbytes_per_sec": 0 00:18:24.250 }, 00:18:24.250 "claimed": true, 00:18:24.250 "claim_type": "exclusive_write", 00:18:24.250 "zoned": false, 00:18:24.250 "supported_io_types": { 00:18:24.250 "read": true, 00:18:24.250 "write": true, 00:18:24.250 "unmap": true, 00:18:24.250 "flush": true, 00:18:24.250 "reset": true, 00:18:24.250 "nvme_admin": false, 00:18:24.250 "nvme_io": false, 00:18:24.250 "nvme_io_md": false, 00:18:24.250 "write_zeroes": true, 00:18:24.250 "zcopy": true, 00:18:24.250 "get_zone_info": false, 00:18:24.250 "zone_management": false, 00:18:24.250 "zone_append": false, 00:18:24.250 "compare": false, 00:18:24.250 "compare_and_write": false, 00:18:24.250 "abort": true, 00:18:24.250 "seek_hole": false, 00:18:24.250 "seek_data": false, 00:18:24.250 "copy": true, 00:18:24.250 "nvme_iov_md": false 00:18:24.250 }, 00:18:24.250 "memory_domains": [ 00:18:24.250 { 00:18:24.250 "dma_device_id": "system", 00:18:24.250 "dma_device_type": 1 00:18:24.250 }, 00:18:24.250 { 00:18:24.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.250 "dma_device_type": 2 00:18:24.250 } 00:18:24.250 ], 00:18:24.250 "driver_specific": {} 00:18:24.250 }' 00:18:24.250 11:29:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:24.509 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:24.509 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:24.509 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:24.509 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:24.509 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:24.509 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:24.767 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:24.767 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:24.767 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.767 11:29:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:24.767 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:24.767 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:24.767 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:24.767 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:25.028 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:25.028 "name": "BaseBdev3", 00:18:25.028 "aliases": [ 00:18:25.028 "82b3e8bd-da42-42ec-ac19-4b9db94b1514" 00:18:25.028 ], 00:18:25.028 "product_name": "Malloc disk", 00:18:25.028 "block_size": 512, 00:18:25.028 "num_blocks": 65536, 00:18:25.028 "uuid": "82b3e8bd-da42-42ec-ac19-4b9db94b1514", 00:18:25.028 "assigned_rate_limits": { 00:18:25.028 "rw_ios_per_sec": 0, 00:18:25.028 "rw_mbytes_per_sec": 0, 00:18:25.028 "r_mbytes_per_sec": 0, 00:18:25.028 "w_mbytes_per_sec": 0 00:18:25.028 }, 00:18:25.028 "claimed": true, 00:18:25.028 "claim_type": "exclusive_write", 00:18:25.028 "zoned": false, 00:18:25.028 "supported_io_types": { 00:18:25.028 "read": true, 00:18:25.028 "write": true, 00:18:25.028 "unmap": true, 00:18:25.028 "flush": true, 00:18:25.028 "reset": true, 00:18:25.028 "nvme_admin": false, 00:18:25.028 "nvme_io": false, 00:18:25.028 "nvme_io_md": false, 00:18:25.028 "write_zeroes": true, 00:18:25.028 "zcopy": true, 00:18:25.028 "get_zone_info": false, 00:18:25.028 "zone_management": false, 00:18:25.028 "zone_append": false, 00:18:25.028 "compare": false, 00:18:25.028 "compare_and_write": false, 00:18:25.028 "abort": true, 00:18:25.028 "seek_hole": false, 00:18:25.028 "seek_data": false, 00:18:25.028 "copy": true, 00:18:25.028 "nvme_iov_md": false 00:18:25.028 }, 00:18:25.028 "memory_domains": [ 00:18:25.028 { 00:18:25.028 "dma_device_id": "system", 00:18:25.028 "dma_device_type": 1 00:18:25.028 }, 00:18:25.028 { 00:18:25.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.028 "dma_device_type": 2 00:18:25.028 } 00:18:25.028 ], 00:18:25.028 "driver_specific": {} 00:18:25.028 }' 00:18:25.028 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.028 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:25.350 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:25.350 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.350 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:25.350 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:25.350 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.350 11:29:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:25.350 11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:25.350 11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.631 11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:25.631 11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
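The trace above runs the same property check over each base bdev of the online raid (NewBaseBdev, BaseBdev2, BaseBdev3): bdev_get_bdevs output is piped through jq and the test asserts that block_size is 512 while md_size, md_interleave and dif_type are all null. A minimal stand-alone sketch of that loop follows; the socket path, rpc.py location and bdev names are copied from the trace, and the snippet is only an illustration of the pattern, not the actual bdev_raid.sh helper.

#!/usr/bin/env bash
# Sketch of the per-bdev property verification seen in the trace above
# (paths and names taken from the trace; assumes the bdev_svc app is running).
sock=/var/tmp/spdk-raid.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for name in NewBaseBdev BaseBdev2 BaseBdev3; do
    info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size    <<< "$info") == 512  ]]   # data block size must match the raid
    [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata area
    [[ $(jq .md_interleave <<< "$info") == null ]]   # no interleaved metadata
    [[ $(jq .dif_type      <<< "$info") == null ]]   # no DIF protection configured
done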
00:18:25.631 11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:25.631 [2024-07-13 11:30:00.358642] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:25.631 [2024-07-13 11:30:00.358671] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.631 [2024-07-13 11:30:00.358741] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.631 [2024-07-13 11:30:00.358802] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.631 [2024-07-13 11:30:00.358814] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:18:25.631 11:30:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 126250 00:18:25.631 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 126250 ']' 00:18:25.631 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 126250 00:18:25.631 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:18:25.631 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.631 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126250 00:18:25.890 killing process with pid 126250 00:18:25.890 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.890 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.890 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126250' 00:18:25.890 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 126250 00:18:25.890 11:30:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 126250 00:18:25.890 [2024-07-13 11:30:00.393046] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.890 [2024-07-13 11:30:00.636254] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.824 ************************************ 00:18:26.824 END TEST raid_state_function_test_sb 00:18:26.824 ************************************ 00:18:26.824 11:30:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:26.824 00:18:26.824 real 0m30.048s 00:18:26.824 user 0m56.681s 00:18:26.824 sys 0m3.182s 00:18:26.824 11:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:26.824 11:30:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.082 11:30:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:27.082 11:30:01 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:18:27.082 11:30:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:27.082 11:30:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:27.082 11:30:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:27.082 ************************************ 00:18:27.082 START TEST raid_superblock_test 00:18:27.082 
************************************ 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=127283 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 127283 /var/tmp/spdk-raid.sock 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 127283 ']' 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:27.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.082 11:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.082 [2024-07-13 11:30:01.681564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
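At this point raid_superblock_test has launched a dedicated bdev_svc app (pid 127283, whose startup banner appears here) listening on /var/tmp/spdk-raid.sock, and every step that follows is driven over that JSON-RPC socket. Condensed below is the sequence of calls the trace performs to assemble the superblock-enabled raid0 volume from three malloc-backed passthru bdevs; the commands, UUIDs and strip size are copied from the trace, and the loop is an illustrative condensation rather than the test script itself.

# Illustrative condensation of the volume setup driven over JSON-RPC below
# (commands, UUIDs and strip size copied from the trace; not the actual test code).
sock=/var/tmp/spdk-raid.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 1 2 3; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"        # 32 MiB backing bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"                   # fixed UUID per base bdev
done
# -z 64: 64 KiB strip size; -s: write a raid superblock onto each base bdev
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s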
00:18:27.082 [2024-07-13 11:30:01.681785] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127283 ] 00:18:27.340 [2024-07-13 11:30:01.853907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.340 [2024-07-13 11:30:02.068852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.599 [2024-07-13 11:30:02.233169] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:28.166 malloc1 00:18:28.166 11:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.424 [2024-07-13 11:30:03.062133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.424 [2024-07-13 11:30:03.062242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.424 [2024-07-13 11:30:03.062283] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:28.424 [2024-07-13 11:30:03.062306] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.424 [2024-07-13 11:30:03.064291] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.424 [2024-07-13 11:30:03.064344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:28.424 pt1 00:18:28.424 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:28.424 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:28.424 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:28.424 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:28.424 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:28.424 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.424 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.424 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.424 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:28.682 malloc2 00:18:28.682 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.939 [2024-07-13 11:30:03.504050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.939 [2024-07-13 11:30:03.504160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.939 [2024-07-13 11:30:03.504199] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:18:28.939 [2024-07-13 11:30:03.504222] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.939 [2024-07-13 11:30:03.506385] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.939 [2024-07-13 11:30:03.506438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.939 pt2 00:18:28.939 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:28.939 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:28.939 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:28.939 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:28.939 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:28.939 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:28.939 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:28.939 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:28.939 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:29.196 malloc3 00:18:29.196 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:29.196 [2024-07-13 11:30:03.916792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:29.196 [2024-07-13 11:30:03.916888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.196 [2024-07-13 11:30:03.916927] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:18:29.196 [2024-07-13 11:30:03.916957] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.196 [2024-07-13 11:30:03.919174] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.196 [2024-07-13 11:30:03.919255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:29.196 pt3 00:18:29.196 
11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:29.196 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:29.196 11:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:29.454 [2024-07-13 11:30:04.096849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:29.454 [2024-07-13 11:30:04.098480] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:29.454 [2024-07-13 11:30:04.098559] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:29.454 [2024-07-13 11:30:04.098819] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:18:29.454 [2024-07-13 11:30:04.098863] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:29.454 [2024-07-13 11:30:04.098995] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:29.454 [2024-07-13 11:30:04.099390] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:18:29.454 [2024-07-13 11:30:04.099416] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:18:29.454 [2024-07-13 11:30:04.099571] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.454 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:29.454 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:29.454 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.454 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:29.454 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:29.455 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:29.455 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.455 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.455 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.455 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.455 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.455 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.713 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.713 "name": "raid_bdev1", 00:18:29.713 "uuid": "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2", 00:18:29.713 "strip_size_kb": 64, 00:18:29.713 "state": "online", 00:18:29.713 "raid_level": "raid0", 00:18:29.713 "superblock": true, 00:18:29.713 "num_base_bdevs": 3, 00:18:29.713 "num_base_bdevs_discovered": 3, 00:18:29.713 "num_base_bdevs_operational": 3, 00:18:29.713 "base_bdevs_list": [ 00:18:29.713 { 00:18:29.713 "name": "pt1", 00:18:29.713 "uuid": "00000000-0000-0000-0000-000000000001", 
00:18:29.713 "is_configured": true, 00:18:29.713 "data_offset": 2048, 00:18:29.713 "data_size": 63488 00:18:29.713 }, 00:18:29.713 { 00:18:29.713 "name": "pt2", 00:18:29.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.713 "is_configured": true, 00:18:29.713 "data_offset": 2048, 00:18:29.713 "data_size": 63488 00:18:29.713 }, 00:18:29.713 { 00:18:29.713 "name": "pt3", 00:18:29.713 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:29.713 "is_configured": true, 00:18:29.713 "data_offset": 2048, 00:18:29.713 "data_size": 63488 00:18:29.713 } 00:18:29.713 ] 00:18:29.713 }' 00:18:29.713 11:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.713 11:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.280 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:30.280 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:30.280 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:30.280 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:30.280 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:30.280 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:30.280 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:30.280 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:30.539 [2024-07-13 11:30:05.265167] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:30.539 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:30.539 "name": "raid_bdev1", 00:18:30.539 "aliases": [ 00:18:30.539 "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2" 00:18:30.539 ], 00:18:30.539 "product_name": "Raid Volume", 00:18:30.539 "block_size": 512, 00:18:30.539 "num_blocks": 190464, 00:18:30.539 "uuid": "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2", 00:18:30.539 "assigned_rate_limits": { 00:18:30.539 "rw_ios_per_sec": 0, 00:18:30.539 "rw_mbytes_per_sec": 0, 00:18:30.539 "r_mbytes_per_sec": 0, 00:18:30.539 "w_mbytes_per_sec": 0 00:18:30.539 }, 00:18:30.539 "claimed": false, 00:18:30.539 "zoned": false, 00:18:30.539 "supported_io_types": { 00:18:30.539 "read": true, 00:18:30.539 "write": true, 00:18:30.539 "unmap": true, 00:18:30.539 "flush": true, 00:18:30.539 "reset": true, 00:18:30.539 "nvme_admin": false, 00:18:30.539 "nvme_io": false, 00:18:30.539 "nvme_io_md": false, 00:18:30.539 "write_zeroes": true, 00:18:30.539 "zcopy": false, 00:18:30.539 "get_zone_info": false, 00:18:30.539 "zone_management": false, 00:18:30.539 "zone_append": false, 00:18:30.539 "compare": false, 00:18:30.539 "compare_and_write": false, 00:18:30.539 "abort": false, 00:18:30.539 "seek_hole": false, 00:18:30.539 "seek_data": false, 00:18:30.539 "copy": false, 00:18:30.539 "nvme_iov_md": false 00:18:30.539 }, 00:18:30.539 "memory_domains": [ 00:18:30.539 { 00:18:30.539 "dma_device_id": "system", 00:18:30.539 "dma_device_type": 1 00:18:30.539 }, 00:18:30.539 { 00:18:30.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.539 "dma_device_type": 2 00:18:30.539 }, 00:18:30.539 { 00:18:30.539 "dma_device_id": "system", 00:18:30.539 "dma_device_type": 1 00:18:30.539 }, 
00:18:30.539 { 00:18:30.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.539 "dma_device_type": 2 00:18:30.539 }, 00:18:30.539 { 00:18:30.539 "dma_device_id": "system", 00:18:30.539 "dma_device_type": 1 00:18:30.539 }, 00:18:30.539 { 00:18:30.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.539 "dma_device_type": 2 00:18:30.539 } 00:18:30.539 ], 00:18:30.539 "driver_specific": { 00:18:30.539 "raid": { 00:18:30.539 "uuid": "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2", 00:18:30.539 "strip_size_kb": 64, 00:18:30.539 "state": "online", 00:18:30.539 "raid_level": "raid0", 00:18:30.539 "superblock": true, 00:18:30.539 "num_base_bdevs": 3, 00:18:30.539 "num_base_bdevs_discovered": 3, 00:18:30.539 "num_base_bdevs_operational": 3, 00:18:30.539 "base_bdevs_list": [ 00:18:30.539 { 00:18:30.539 "name": "pt1", 00:18:30.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:30.539 "is_configured": true, 00:18:30.539 "data_offset": 2048, 00:18:30.539 "data_size": 63488 00:18:30.539 }, 00:18:30.539 { 00:18:30.539 "name": "pt2", 00:18:30.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:30.539 "is_configured": true, 00:18:30.539 "data_offset": 2048, 00:18:30.539 "data_size": 63488 00:18:30.539 }, 00:18:30.539 { 00:18:30.539 "name": "pt3", 00:18:30.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:30.539 "is_configured": true, 00:18:30.539 "data_offset": 2048, 00:18:30.539 "data_size": 63488 00:18:30.539 } 00:18:30.539 ] 00:18:30.539 } 00:18:30.539 } 00:18:30.539 }' 00:18:30.539 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:30.797 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:30.797 pt2 00:18:30.797 pt3' 00:18:30.797 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:30.797 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:30.797 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:31.056 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:31.056 "name": "pt1", 00:18:31.056 "aliases": [ 00:18:31.056 "00000000-0000-0000-0000-000000000001" 00:18:31.056 ], 00:18:31.056 "product_name": "passthru", 00:18:31.056 "block_size": 512, 00:18:31.056 "num_blocks": 65536, 00:18:31.056 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:31.056 "assigned_rate_limits": { 00:18:31.056 "rw_ios_per_sec": 0, 00:18:31.056 "rw_mbytes_per_sec": 0, 00:18:31.056 "r_mbytes_per_sec": 0, 00:18:31.056 "w_mbytes_per_sec": 0 00:18:31.056 }, 00:18:31.056 "claimed": true, 00:18:31.056 "claim_type": "exclusive_write", 00:18:31.056 "zoned": false, 00:18:31.056 "supported_io_types": { 00:18:31.056 "read": true, 00:18:31.056 "write": true, 00:18:31.056 "unmap": true, 00:18:31.056 "flush": true, 00:18:31.056 "reset": true, 00:18:31.056 "nvme_admin": false, 00:18:31.056 "nvme_io": false, 00:18:31.056 "nvme_io_md": false, 00:18:31.056 "write_zeroes": true, 00:18:31.056 "zcopy": true, 00:18:31.056 "get_zone_info": false, 00:18:31.056 "zone_management": false, 00:18:31.056 "zone_append": false, 00:18:31.056 "compare": false, 00:18:31.056 "compare_and_write": false, 00:18:31.056 "abort": true, 00:18:31.056 "seek_hole": false, 00:18:31.056 "seek_data": false, 00:18:31.056 "copy": true, 00:18:31.056 "nvme_iov_md": false 
00:18:31.056 }, 00:18:31.056 "memory_domains": [ 00:18:31.056 { 00:18:31.056 "dma_device_id": "system", 00:18:31.056 "dma_device_type": 1 00:18:31.056 }, 00:18:31.056 { 00:18:31.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.056 "dma_device_type": 2 00:18:31.056 } 00:18:31.056 ], 00:18:31.056 "driver_specific": { 00:18:31.056 "passthru": { 00:18:31.056 "name": "pt1", 00:18:31.056 "base_bdev_name": "malloc1" 00:18:31.056 } 00:18:31.056 } 00:18:31.056 }' 00:18:31.056 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.056 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.056 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:31.056 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.056 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.056 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:31.056 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.315 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.315 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:31.315 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.315 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.315 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:31.315 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:31.315 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:31.315 11:30:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:31.573 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:31.573 "name": "pt2", 00:18:31.573 "aliases": [ 00:18:31.573 "00000000-0000-0000-0000-000000000002" 00:18:31.573 ], 00:18:31.573 "product_name": "passthru", 00:18:31.573 "block_size": 512, 00:18:31.573 "num_blocks": 65536, 00:18:31.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:31.573 "assigned_rate_limits": { 00:18:31.573 "rw_ios_per_sec": 0, 00:18:31.573 "rw_mbytes_per_sec": 0, 00:18:31.573 "r_mbytes_per_sec": 0, 00:18:31.573 "w_mbytes_per_sec": 0 00:18:31.573 }, 00:18:31.573 "claimed": true, 00:18:31.573 "claim_type": "exclusive_write", 00:18:31.573 "zoned": false, 00:18:31.573 "supported_io_types": { 00:18:31.573 "read": true, 00:18:31.573 "write": true, 00:18:31.573 "unmap": true, 00:18:31.573 "flush": true, 00:18:31.573 "reset": true, 00:18:31.573 "nvme_admin": false, 00:18:31.573 "nvme_io": false, 00:18:31.573 "nvme_io_md": false, 00:18:31.573 "write_zeroes": true, 00:18:31.573 "zcopy": true, 00:18:31.573 "get_zone_info": false, 00:18:31.573 "zone_management": false, 00:18:31.573 "zone_append": false, 00:18:31.573 "compare": false, 00:18:31.574 "compare_and_write": false, 00:18:31.574 "abort": true, 00:18:31.574 "seek_hole": false, 00:18:31.574 "seek_data": false, 00:18:31.574 "copy": true, 00:18:31.574 "nvme_iov_md": false 00:18:31.574 }, 00:18:31.574 "memory_domains": [ 00:18:31.574 { 00:18:31.574 "dma_device_id": "system", 00:18:31.574 "dma_device_type": 1 00:18:31.574 }, 
00:18:31.574 { 00:18:31.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.574 "dma_device_type": 2 00:18:31.574 } 00:18:31.574 ], 00:18:31.574 "driver_specific": { 00:18:31.574 "passthru": { 00:18:31.574 "name": "pt2", 00:18:31.574 "base_bdev_name": "malloc2" 00:18:31.574 } 00:18:31.574 } 00:18:31.574 }' 00:18:31.574 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.574 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.574 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:31.574 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.574 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.833 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:31.833 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.833 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.833 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:31.833 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.833 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.092 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:32.092 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:32.092 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:32.092 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:32.350 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:32.350 "name": "pt3", 00:18:32.350 "aliases": [ 00:18:32.350 "00000000-0000-0000-0000-000000000003" 00:18:32.350 ], 00:18:32.350 "product_name": "passthru", 00:18:32.350 "block_size": 512, 00:18:32.350 "num_blocks": 65536, 00:18:32.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:32.350 "assigned_rate_limits": { 00:18:32.350 "rw_ios_per_sec": 0, 00:18:32.350 "rw_mbytes_per_sec": 0, 00:18:32.350 "r_mbytes_per_sec": 0, 00:18:32.350 "w_mbytes_per_sec": 0 00:18:32.350 }, 00:18:32.350 "claimed": true, 00:18:32.350 "claim_type": "exclusive_write", 00:18:32.350 "zoned": false, 00:18:32.350 "supported_io_types": { 00:18:32.350 "read": true, 00:18:32.350 "write": true, 00:18:32.350 "unmap": true, 00:18:32.350 "flush": true, 00:18:32.350 "reset": true, 00:18:32.350 "nvme_admin": false, 00:18:32.350 "nvme_io": false, 00:18:32.350 "nvme_io_md": false, 00:18:32.350 "write_zeroes": true, 00:18:32.350 "zcopy": true, 00:18:32.350 "get_zone_info": false, 00:18:32.350 "zone_management": false, 00:18:32.350 "zone_append": false, 00:18:32.350 "compare": false, 00:18:32.350 "compare_and_write": false, 00:18:32.350 "abort": true, 00:18:32.350 "seek_hole": false, 00:18:32.350 "seek_data": false, 00:18:32.350 "copy": true, 00:18:32.350 "nvme_iov_md": false 00:18:32.350 }, 00:18:32.350 "memory_domains": [ 00:18:32.350 { 00:18:32.350 "dma_device_id": "system", 00:18:32.350 "dma_device_type": 1 00:18:32.350 }, 00:18:32.350 { 00:18:32.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.350 "dma_device_type": 2 00:18:32.350 } 00:18:32.350 ], 00:18:32.350 
"driver_specific": { 00:18:32.350 "passthru": { 00:18:32.350 "name": "pt3", 00:18:32.350 "base_bdev_name": "malloc3" 00:18:32.350 } 00:18:32.350 } 00:18:32.350 }' 00:18:32.350 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.350 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.350 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:32.350 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.350 11:30:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.350 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:32.350 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.609 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.609 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:32.609 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.609 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.609 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:32.609 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:32.609 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:32.867 [2024-07-13 11:30:07.513491] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.867 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=79794ef7-9f57-4f48-8b0b-b69ecbdce9a2 00:18:32.867 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 79794ef7-9f57-4f48-8b0b-b69ecbdce9a2 ']' 00:18:32.867 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:33.126 [2024-07-13 11:30:07.709305] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:33.126 [2024-07-13 11:30:07.709335] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.126 [2024-07-13 11:30:07.709407] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.126 [2024-07-13 11:30:07.709468] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.126 [2024-07-13 11:30:07.709482] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:18:33.126 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.126 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:33.385 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:33.385 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:33.385 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.385 11:30:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:33.643 11:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.643 11:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:33.643 11:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:33.643 11:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:33.901 11:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:33.901 11:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:34.160 11:30:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:34.419 [2024-07-13 11:30:08.997470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:34.419 [2024-07-13 11:30:08.999376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:34.419 [2024-07-13 11:30:08.999454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:34.419 [2024-07-13 11:30:08.999515] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:34.419 [2024-07-13 
11:30:08.999619] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:34.419 [2024-07-13 11:30:08.999683] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:34.419 [2024-07-13 11:30:08.999723] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.419 [2024-07-13 11:30:08.999735] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:18:34.419 request: 00:18:34.419 { 00:18:34.419 "name": "raid_bdev1", 00:18:34.419 "raid_level": "raid0", 00:18:34.419 "base_bdevs": [ 00:18:34.419 "malloc1", 00:18:34.419 "malloc2", 00:18:34.419 "malloc3" 00:18:34.419 ], 00:18:34.419 "strip_size_kb": 64, 00:18:34.419 "superblock": false, 00:18:34.419 "method": "bdev_raid_create", 00:18:34.419 "req_id": 1 00:18:34.419 } 00:18:34.419 Got JSON-RPC error response 00:18:34.419 response: 00:18:34.419 { 00:18:34.419 "code": -17, 00:18:34.419 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:34.419 } 00:18:34.419 11:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:18:34.419 11:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:34.419 11:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:34.419 11:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:34.419 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.419 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:34.678 [2024-07-13 11:30:09.377463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:34.678 [2024-07-13 11:30:09.377526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.678 [2024-07-13 11:30:09.377564] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:34.678 [2024-07-13 11:30:09.377587] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.678 [2024-07-13 11:30:09.379560] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.678 [2024-07-13 11:30:09.379613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:34.678 [2024-07-13 11:30:09.379712] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:34.678 [2024-07-13 11:30:09.379765] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:34.678 pt1 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:34.678 11:30:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.678 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.937 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.937 "name": "raid_bdev1", 00:18:34.937 "uuid": "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2", 00:18:34.937 "strip_size_kb": 64, 00:18:34.937 "state": "configuring", 00:18:34.937 "raid_level": "raid0", 00:18:34.937 "superblock": true, 00:18:34.937 "num_base_bdevs": 3, 00:18:34.937 "num_base_bdevs_discovered": 1, 00:18:34.937 "num_base_bdevs_operational": 3, 00:18:34.937 "base_bdevs_list": [ 00:18:34.937 { 00:18:34.937 "name": "pt1", 00:18:34.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:34.937 "is_configured": true, 00:18:34.937 "data_offset": 2048, 00:18:34.937 "data_size": 63488 00:18:34.937 }, 00:18:34.937 { 00:18:34.937 "name": null, 00:18:34.937 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:34.937 "is_configured": false, 00:18:34.937 "data_offset": 2048, 00:18:34.937 "data_size": 63488 00:18:34.937 }, 00:18:34.937 { 00:18:34.937 "name": null, 00:18:34.937 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:34.937 "is_configured": false, 00:18:34.937 "data_offset": 2048, 00:18:34.937 "data_size": 63488 00:18:34.937 } 00:18:34.937 ] 00:18:34.937 }' 00:18:34.937 11:30:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.937 11:30:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.872 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:18:35.872 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.872 [2024-07-13 11:30:10.453615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.872 [2024-07-13 11:30:10.453677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.872 [2024-07-13 11:30:10.453715] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:35.872 [2024-07-13 11:30:10.453736] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.872 [2024-07-13 11:30:10.454179] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.872 [2024-07-13 11:30:10.454235] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:18:35.872 [2024-07-13 11:30:10.454359] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:35.872 [2024-07-13 11:30:10.454397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:35.872 pt2 00:18:35.872 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:36.130 [2024-07-13 11:30:10.653670] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.130 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.389 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.389 "name": "raid_bdev1", 00:18:36.389 "uuid": "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2", 00:18:36.389 "strip_size_kb": 64, 00:18:36.389 "state": "configuring", 00:18:36.389 "raid_level": "raid0", 00:18:36.389 "superblock": true, 00:18:36.389 "num_base_bdevs": 3, 00:18:36.389 "num_base_bdevs_discovered": 1, 00:18:36.389 "num_base_bdevs_operational": 3, 00:18:36.389 "base_bdevs_list": [ 00:18:36.389 { 00:18:36.389 "name": "pt1", 00:18:36.389 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.389 "is_configured": true, 00:18:36.389 "data_offset": 2048, 00:18:36.389 "data_size": 63488 00:18:36.389 }, 00:18:36.389 { 00:18:36.389 "name": null, 00:18:36.389 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.389 "is_configured": false, 00:18:36.389 "data_offset": 2048, 00:18:36.389 "data_size": 63488 00:18:36.389 }, 00:18:36.389 { 00:18:36.389 "name": null, 00:18:36.389 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:36.389 "is_configured": false, 00:18:36.389 "data_offset": 2048, 00:18:36.389 "data_size": 63488 00:18:36.389 } 00:18:36.389 ] 00:18:36.389 }' 00:18:36.389 11:30:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.389 11:30:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.956 11:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:36.956 11:30:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:36.956 11:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:37.214 [2024-07-13 11:30:11.861837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:37.214 [2024-07-13 11:30:11.861914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.214 [2024-07-13 11:30:11.861947] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:18:37.214 [2024-07-13 11:30:11.861974] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.214 [2024-07-13 11:30:11.862424] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.214 [2024-07-13 11:30:11.862480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:37.214 [2024-07-13 11:30:11.862614] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:37.214 [2024-07-13 11:30:11.862643] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.214 pt2 00:18:37.214 11:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:37.214 11:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:37.214 11:30:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:37.473 [2024-07-13 11:30:12.041852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:37.473 [2024-07-13 11:30:12.041917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.473 [2024-07-13 11:30:12.041948] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:37.473 [2024-07-13 11:30:12.041977] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.473 [2024-07-13 11:30:12.042429] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.473 [2024-07-13 11:30:12.042478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:37.473 [2024-07-13 11:30:12.042585] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:37.473 [2024-07-13 11:30:12.042615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:37.473 [2024-07-13 11:30:12.042734] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:18:37.473 [2024-07-13 11:30:12.042760] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:37.473 [2024-07-13 11:30:12.042872] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:37.473 [2024-07-13 11:30:12.043186] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:18:37.473 [2024-07-13 11:30:12.043214] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:18:37.473 [2024-07-13 11:30:12.043351] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.473 pt3 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.473 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.731 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.731 "name": "raid_bdev1", 00:18:37.731 "uuid": "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2", 00:18:37.731 "strip_size_kb": 64, 00:18:37.731 "state": "online", 00:18:37.731 "raid_level": "raid0", 00:18:37.731 "superblock": true, 00:18:37.731 "num_base_bdevs": 3, 00:18:37.731 "num_base_bdevs_discovered": 3, 00:18:37.731 "num_base_bdevs_operational": 3, 00:18:37.731 "base_bdevs_list": [ 00:18:37.731 { 00:18:37.731 "name": "pt1", 00:18:37.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.731 "is_configured": true, 00:18:37.731 "data_offset": 2048, 00:18:37.731 "data_size": 63488 00:18:37.731 }, 00:18:37.731 { 00:18:37.731 "name": "pt2", 00:18:37.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.732 "is_configured": true, 00:18:37.732 "data_offset": 2048, 00:18:37.732 "data_size": 63488 00:18:37.732 }, 00:18:37.732 { 00:18:37.732 "name": "pt3", 00:18:37.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:37.732 "is_configured": true, 00:18:37.732 "data_offset": 2048, 00:18:37.732 "data_size": 63488 00:18:37.732 } 00:18:37.732 ] 00:18:37.732 }' 00:18:37.732 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.732 11:30:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.297 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:38.297 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:38.297 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:38.297 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:38.297 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:38.297 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
00:18:38.297 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:38.297 11:30:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:38.554 [2024-07-13 11:30:13.162237] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.554 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:38.554 "name": "raid_bdev1", 00:18:38.554 "aliases": [ 00:18:38.554 "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2" 00:18:38.554 ], 00:18:38.554 "product_name": "Raid Volume", 00:18:38.554 "block_size": 512, 00:18:38.554 "num_blocks": 190464, 00:18:38.554 "uuid": "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2", 00:18:38.554 "assigned_rate_limits": { 00:18:38.554 "rw_ios_per_sec": 0, 00:18:38.554 "rw_mbytes_per_sec": 0, 00:18:38.554 "r_mbytes_per_sec": 0, 00:18:38.554 "w_mbytes_per_sec": 0 00:18:38.554 }, 00:18:38.554 "claimed": false, 00:18:38.554 "zoned": false, 00:18:38.554 "supported_io_types": { 00:18:38.554 "read": true, 00:18:38.554 "write": true, 00:18:38.554 "unmap": true, 00:18:38.554 "flush": true, 00:18:38.554 "reset": true, 00:18:38.554 "nvme_admin": false, 00:18:38.554 "nvme_io": false, 00:18:38.554 "nvme_io_md": false, 00:18:38.554 "write_zeroes": true, 00:18:38.554 "zcopy": false, 00:18:38.554 "get_zone_info": false, 00:18:38.554 "zone_management": false, 00:18:38.554 "zone_append": false, 00:18:38.554 "compare": false, 00:18:38.554 "compare_and_write": false, 00:18:38.554 "abort": false, 00:18:38.554 "seek_hole": false, 00:18:38.554 "seek_data": false, 00:18:38.554 "copy": false, 00:18:38.554 "nvme_iov_md": false 00:18:38.554 }, 00:18:38.554 "memory_domains": [ 00:18:38.554 { 00:18:38.554 "dma_device_id": "system", 00:18:38.554 "dma_device_type": 1 00:18:38.554 }, 00:18:38.554 { 00:18:38.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.554 "dma_device_type": 2 00:18:38.554 }, 00:18:38.554 { 00:18:38.554 "dma_device_id": "system", 00:18:38.554 "dma_device_type": 1 00:18:38.554 }, 00:18:38.554 { 00:18:38.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.554 "dma_device_type": 2 00:18:38.554 }, 00:18:38.554 { 00:18:38.554 "dma_device_id": "system", 00:18:38.554 "dma_device_type": 1 00:18:38.554 }, 00:18:38.554 { 00:18:38.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.554 "dma_device_type": 2 00:18:38.554 } 00:18:38.554 ], 00:18:38.554 "driver_specific": { 00:18:38.554 "raid": { 00:18:38.554 "uuid": "79794ef7-9f57-4f48-8b0b-b69ecbdce9a2", 00:18:38.554 "strip_size_kb": 64, 00:18:38.554 "state": "online", 00:18:38.554 "raid_level": "raid0", 00:18:38.554 "superblock": true, 00:18:38.554 "num_base_bdevs": 3, 00:18:38.554 "num_base_bdevs_discovered": 3, 00:18:38.554 "num_base_bdevs_operational": 3, 00:18:38.554 "base_bdevs_list": [ 00:18:38.554 { 00:18:38.554 "name": "pt1", 00:18:38.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.554 "is_configured": true, 00:18:38.554 "data_offset": 2048, 00:18:38.554 "data_size": 63488 00:18:38.554 }, 00:18:38.554 { 00:18:38.554 "name": "pt2", 00:18:38.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.554 "is_configured": true, 00:18:38.554 "data_offset": 2048, 00:18:38.554 "data_size": 63488 00:18:38.554 }, 00:18:38.554 { 00:18:38.554 "name": "pt3", 00:18:38.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:38.554 "is_configured": true, 00:18:38.554 "data_offset": 2048, 00:18:38.554 "data_size": 63488 00:18:38.555 } 
00:18:38.555 ] 00:18:38.555 } 00:18:38.555 } 00:18:38.555 }' 00:18:38.555 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:38.555 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:38.555 pt2 00:18:38.555 pt3' 00:18:38.555 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:38.555 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:38.555 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:38.812 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:38.812 "name": "pt1", 00:18:38.812 "aliases": [ 00:18:38.812 "00000000-0000-0000-0000-000000000001" 00:18:38.812 ], 00:18:38.812 "product_name": "passthru", 00:18:38.812 "block_size": 512, 00:18:38.812 "num_blocks": 65536, 00:18:38.812 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.812 "assigned_rate_limits": { 00:18:38.812 "rw_ios_per_sec": 0, 00:18:38.812 "rw_mbytes_per_sec": 0, 00:18:38.812 "r_mbytes_per_sec": 0, 00:18:38.813 "w_mbytes_per_sec": 0 00:18:38.813 }, 00:18:38.813 "claimed": true, 00:18:38.813 "claim_type": "exclusive_write", 00:18:38.813 "zoned": false, 00:18:38.813 "supported_io_types": { 00:18:38.813 "read": true, 00:18:38.813 "write": true, 00:18:38.813 "unmap": true, 00:18:38.813 "flush": true, 00:18:38.813 "reset": true, 00:18:38.813 "nvme_admin": false, 00:18:38.813 "nvme_io": false, 00:18:38.813 "nvme_io_md": false, 00:18:38.813 "write_zeroes": true, 00:18:38.813 "zcopy": true, 00:18:38.813 "get_zone_info": false, 00:18:38.813 "zone_management": false, 00:18:38.813 "zone_append": false, 00:18:38.813 "compare": false, 00:18:38.813 "compare_and_write": false, 00:18:38.813 "abort": true, 00:18:38.813 "seek_hole": false, 00:18:38.813 "seek_data": false, 00:18:38.813 "copy": true, 00:18:38.813 "nvme_iov_md": false 00:18:38.813 }, 00:18:38.813 "memory_domains": [ 00:18:38.813 { 00:18:38.813 "dma_device_id": "system", 00:18:38.813 "dma_device_type": 1 00:18:38.813 }, 00:18:38.813 { 00:18:38.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.813 "dma_device_type": 2 00:18:38.813 } 00:18:38.813 ], 00:18:38.813 "driver_specific": { 00:18:38.813 "passthru": { 00:18:38.813 "name": "pt1", 00:18:38.813 "base_bdev_name": "malloc1" 00:18:38.813 } 00:18:38.813 } 00:18:38.813 }' 00:18:38.813 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.813 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:39.071 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:39.071 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.071 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.071 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:39.071 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.071 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.329 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:39.329 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:18:39.329 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:39.329 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:39.329 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:39.329 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:39.329 11:30:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:39.588 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:39.588 "name": "pt2", 00:18:39.588 "aliases": [ 00:18:39.588 "00000000-0000-0000-0000-000000000002" 00:18:39.588 ], 00:18:39.588 "product_name": "passthru", 00:18:39.588 "block_size": 512, 00:18:39.588 "num_blocks": 65536, 00:18:39.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.588 "assigned_rate_limits": { 00:18:39.588 "rw_ios_per_sec": 0, 00:18:39.588 "rw_mbytes_per_sec": 0, 00:18:39.588 "r_mbytes_per_sec": 0, 00:18:39.588 "w_mbytes_per_sec": 0 00:18:39.588 }, 00:18:39.588 "claimed": true, 00:18:39.588 "claim_type": "exclusive_write", 00:18:39.588 "zoned": false, 00:18:39.588 "supported_io_types": { 00:18:39.588 "read": true, 00:18:39.588 "write": true, 00:18:39.588 "unmap": true, 00:18:39.588 "flush": true, 00:18:39.588 "reset": true, 00:18:39.588 "nvme_admin": false, 00:18:39.588 "nvme_io": false, 00:18:39.588 "nvme_io_md": false, 00:18:39.588 "write_zeroes": true, 00:18:39.588 "zcopy": true, 00:18:39.588 "get_zone_info": false, 00:18:39.588 "zone_management": false, 00:18:39.588 "zone_append": false, 00:18:39.588 "compare": false, 00:18:39.588 "compare_and_write": false, 00:18:39.588 "abort": true, 00:18:39.588 "seek_hole": false, 00:18:39.588 "seek_data": false, 00:18:39.588 "copy": true, 00:18:39.588 "nvme_iov_md": false 00:18:39.588 }, 00:18:39.588 "memory_domains": [ 00:18:39.588 { 00:18:39.588 "dma_device_id": "system", 00:18:39.588 "dma_device_type": 1 00:18:39.588 }, 00:18:39.588 { 00:18:39.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.588 "dma_device_type": 2 00:18:39.588 } 00:18:39.588 ], 00:18:39.588 "driver_specific": { 00:18:39.588 "passthru": { 00:18:39.588 "name": "pt2", 00:18:39.588 "base_bdev_name": "malloc2" 00:18:39.588 } 00:18:39.588 } 00:18:39.588 }' 00:18:39.588 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:39.588 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:39.588 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:39.588 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.846 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.846 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:39.846 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.846 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.846 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:39.846 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:39.846 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.104 11:30:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:40.104 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:40.104 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:40.104 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:40.362 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:40.362 "name": "pt3", 00:18:40.362 "aliases": [ 00:18:40.362 "00000000-0000-0000-0000-000000000003" 00:18:40.362 ], 00:18:40.362 "product_name": "passthru", 00:18:40.362 "block_size": 512, 00:18:40.362 "num_blocks": 65536, 00:18:40.362 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.362 "assigned_rate_limits": { 00:18:40.362 "rw_ios_per_sec": 0, 00:18:40.362 "rw_mbytes_per_sec": 0, 00:18:40.362 "r_mbytes_per_sec": 0, 00:18:40.362 "w_mbytes_per_sec": 0 00:18:40.362 }, 00:18:40.362 "claimed": true, 00:18:40.362 "claim_type": "exclusive_write", 00:18:40.362 "zoned": false, 00:18:40.362 "supported_io_types": { 00:18:40.362 "read": true, 00:18:40.362 "write": true, 00:18:40.362 "unmap": true, 00:18:40.362 "flush": true, 00:18:40.362 "reset": true, 00:18:40.362 "nvme_admin": false, 00:18:40.362 "nvme_io": false, 00:18:40.362 "nvme_io_md": false, 00:18:40.362 "write_zeroes": true, 00:18:40.362 "zcopy": true, 00:18:40.362 "get_zone_info": false, 00:18:40.362 "zone_management": false, 00:18:40.362 "zone_append": false, 00:18:40.362 "compare": false, 00:18:40.362 "compare_and_write": false, 00:18:40.362 "abort": true, 00:18:40.362 "seek_hole": false, 00:18:40.362 "seek_data": false, 00:18:40.362 "copy": true, 00:18:40.362 "nvme_iov_md": false 00:18:40.362 }, 00:18:40.362 "memory_domains": [ 00:18:40.362 { 00:18:40.362 "dma_device_id": "system", 00:18:40.362 "dma_device_type": 1 00:18:40.362 }, 00:18:40.362 { 00:18:40.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.362 "dma_device_type": 2 00:18:40.362 } 00:18:40.362 ], 00:18:40.362 "driver_specific": { 00:18:40.362 "passthru": { 00:18:40.362 "name": "pt3", 00:18:40.362 "base_bdev_name": "malloc3" 00:18:40.362 } 00:18:40.362 } 00:18:40.362 }' 00:18:40.362 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.362 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:40.362 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:40.362 11:30:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.362 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:40.362 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:40.362 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.620 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.620 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:40.620 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.620 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.620 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:40.620 11:30:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:40.620 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:40.896 [2024-07-13 11:30:15.590668] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 79794ef7-9f57-4f48-8b0b-b69ecbdce9a2 '!=' 79794ef7-9f57-4f48-8b0b-b69ecbdce9a2 ']' 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 127283 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 127283 ']' 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 127283 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127283 00:18:40.896 killing process with pid 127283 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127283' 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 127283 00:18:40.896 11:30:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 127283 00:18:40.896 [2024-07-13 11:30:15.623812] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:40.896 [2024-07-13 11:30:15.623879] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.896 [2024-07-13 11:30:15.623934] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.896 [2024-07-13 11:30:15.623946] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:18:41.154 [2024-07-13 11:30:15.812156] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.087 ************************************ 00:18:42.087 END TEST raid_superblock_test 00:18:42.087 ************************************ 00:18:42.087 11:30:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:18:42.087 00:18:42.087 real 0m15.113s 00:18:42.087 user 0m27.640s 00:18:42.087 sys 0m1.689s 00:18:42.087 11:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.087 11:30:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.087 11:30:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:42.087 11:30:16 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:18:42.087 11:30:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:42.087 11:30:16 
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.087 11:30:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.087 ************************************ 00:18:42.087 START TEST raid_read_error_test 00:18:42.087 ************************************ 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:42.087 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8gMa6v1gWV 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=127804 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 127804 /var/tmp/spdk-raid.sock 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 
60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 127804 ']' 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:42.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.088 11:30:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.345 [2024-07-13 11:30:16.861339] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:42.345 [2024-07-13 11:30:16.861529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127804 ] 00:18:42.345 [2024-07-13 11:30:17.031334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.603 [2024-07-13 11:30:17.191232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.861 [2024-07-13 11:30:17.359115] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.119 11:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.120 11:30:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:43.120 11:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:43.120 11:30:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:43.379 BaseBdev1_malloc 00:18:43.379 11:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:43.638 true 00:18:43.638 11:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:43.896 [2024-07-13 11:30:18.472983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:43.896 [2024-07-13 11:30:18.473074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.896 [2024-07-13 11:30:18.473110] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:43.896 [2024-07-13 11:30:18.473131] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.896 [2024-07-13 11:30:18.475262] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.896 [2024-07-13 11:30:18.475310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:43.896 BaseBdev1 00:18:43.896 11:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:43.896 11:30:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:44.155 BaseBdev2_malloc 00:18:44.155 11:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:44.155 true 00:18:44.414 11:30:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:44.414 [2024-07-13 11:30:19.103414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:44.414 [2024-07-13 11:30:19.103514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.414 [2024-07-13 11:30:19.103556] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:44.414 [2024-07-13 11:30:19.103581] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.414 [2024-07-13 11:30:19.105433] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.414 [2024-07-13 11:30:19.105479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:44.414 BaseBdev2 00:18:44.414 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:44.414 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:44.672 BaseBdev3_malloc 00:18:44.672 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:44.931 true 00:18:44.931 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:45.190 [2024-07-13 11:30:19.712393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:45.190 [2024-07-13 11:30:19.712478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.190 [2024-07-13 11:30:19.712513] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:45.190 [2024-07-13 11:30:19.712540] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.190 [2024-07-13 11:30:19.714635] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.190 [2024-07-13 11:30:19.714691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:45.190 BaseBdev3 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:45.190 [2024-07-13 11:30:19.896469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.190 [2024-07-13 11:30:19.898028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.190 [2024-07-13 11:30:19.898118] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.190 
[2024-07-13 11:30:19.898361] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:45.190 [2024-07-13 11:30:19.898387] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:45.190 [2024-07-13 11:30:19.898552] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:45.190 [2024-07-13 11:30:19.898944] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:45.190 [2024-07-13 11:30:19.898968] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:45.190 [2024-07-13 11:30:19.899111] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.190 11:30:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.448 11:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.448 "name": "raid_bdev1", 00:18:45.448 "uuid": "dfc24ceb-4c4d-48c4-a50e-74542f80c670", 00:18:45.448 "strip_size_kb": 64, 00:18:45.448 "state": "online", 00:18:45.448 "raid_level": "raid0", 00:18:45.448 "superblock": true, 00:18:45.448 "num_base_bdevs": 3, 00:18:45.448 "num_base_bdevs_discovered": 3, 00:18:45.448 "num_base_bdevs_operational": 3, 00:18:45.448 "base_bdevs_list": [ 00:18:45.448 { 00:18:45.448 "name": "BaseBdev1", 00:18:45.448 "uuid": "08af1900-a2c9-50a4-94ed-9f40a1a7253c", 00:18:45.448 "is_configured": true, 00:18:45.448 "data_offset": 2048, 00:18:45.448 "data_size": 63488 00:18:45.448 }, 00:18:45.448 { 00:18:45.448 "name": "BaseBdev2", 00:18:45.448 "uuid": "5f27d27a-68e6-5470-8f61-a323b858eba7", 00:18:45.448 "is_configured": true, 00:18:45.448 "data_offset": 2048, 00:18:45.448 "data_size": 63488 00:18:45.448 }, 00:18:45.448 { 00:18:45.448 "name": "BaseBdev3", 00:18:45.448 "uuid": "95db60d8-538c-52a6-9196-d3097e0f6258", 00:18:45.448 "is_configured": true, 00:18:45.448 "data_offset": 2048, 00:18:45.448 "data_size": 63488 00:18:45.449 } 00:18:45.449 ] 00:18:45.449 }' 00:18:45.449 11:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.449 11:30:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:46.384 11:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:46.384 11:30:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:46.384 [2024-07-13 11:30:20.857583] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:47.319 11:30:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.319 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.578 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.578 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.578 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.578 "name": "raid_bdev1", 00:18:47.578 "uuid": "dfc24ceb-4c4d-48c4-a50e-74542f80c670", 00:18:47.578 "strip_size_kb": 64, 00:18:47.578 "state": "online", 00:18:47.578 "raid_level": "raid0", 00:18:47.578 "superblock": true, 00:18:47.578 "num_base_bdevs": 3, 00:18:47.578 "num_base_bdevs_discovered": 3, 00:18:47.578 "num_base_bdevs_operational": 3, 00:18:47.578 "base_bdevs_list": [ 00:18:47.578 { 00:18:47.578 "name": "BaseBdev1", 00:18:47.578 "uuid": "08af1900-a2c9-50a4-94ed-9f40a1a7253c", 00:18:47.578 "is_configured": true, 00:18:47.578 "data_offset": 2048, 00:18:47.578 "data_size": 63488 00:18:47.578 }, 00:18:47.578 { 00:18:47.578 "name": "BaseBdev2", 00:18:47.578 "uuid": "5f27d27a-68e6-5470-8f61-a323b858eba7", 00:18:47.578 "is_configured": true, 00:18:47.578 "data_offset": 2048, 00:18:47.578 "data_size": 63488 00:18:47.578 }, 00:18:47.578 { 00:18:47.578 "name": "BaseBdev3", 00:18:47.578 "uuid": "95db60d8-538c-52a6-9196-d3097e0f6258", 00:18:47.578 "is_configured": true, 00:18:47.578 "data_offset": 2048, 00:18:47.578 "data_size": 63488 
00:18:47.578 } 00:18:47.578 ] 00:18:47.578 }' 00:18:47.578 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.578 11:30:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.512 11:30:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:48.512 [2024-07-13 11:30:23.092008] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:48.512 [2024-07-13 11:30:23.092374] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.512 [2024-07-13 11:30:23.095199] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.512 [2024-07-13 11:30:23.095385] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.512 [2024-07-13 11:30:23.095551] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.512 [2024-07-13 11:30:23.095655] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:48.512 0 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 127804 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 127804 ']' 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 127804 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 127804 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 127804' 00:18:48.512 killing process with pid 127804 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 127804 00:18:48.512 11:30:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 127804 00:18:48.512 [2024-07-13 11:30:23.128882] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:48.770 [2024-07-13 11:30:23.274887] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8gMa6v1gWV 00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:49.704 ************************************ 00:18:49.704 END TEST raid_read_error_test 00:18:49.704 ************************************ 00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 
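The fail_per_s value grepped out just above and checked just below comes from the bdevperf log for this run: the job line for raid_bdev1 is filtered out of the log file and its sixth field is taken as the failure rate per second; since raid0 offers no redundancy, the injected read errors must surface and the rate must be non-zero. A sketch of that extraction using the same three filters and the log file shown in the trace (combining them into one pipeline is illustrative):

    fail_per_s=$(grep -v Job /raidtest/tmp.8gMa6v1gWV | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" != "0.00" ]]   # raid0 has no redundancy, so a zero failure rate would mean the injection was swallowed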
00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:18:49.704 00:18:49.704 real 0m7.465s 00:18:49.704 user 0m11.496s 00:18:49.704 sys 0m0.824s 00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.704 11:30:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.704 11:30:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:49.704 11:30:24 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:18:49.704 11:30:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:49.704 11:30:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.704 11:30:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.704 ************************************ 00:18:49.704 START TEST raid_write_error_test 00:18:49.704 ************************************ 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # 
strip_size=64 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.u8f2eZYcfu 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=128008 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 128008 /var/tmp/spdk-raid.sock 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 128008 ']' 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:49.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.704 11:30:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.704 [2024-07-13 11:30:24.403222] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
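With the strip size and bdevperf log file in place, the write-error variant launches bdevperf idle against /var/tmp/spdk-raid.sock and then, in the trace that follows, builds each base bdev as a malloc disk wrapped by an error bdev (the injection point) and a passthru bdev, before assembling the three into a superblock-backed raid0. A condensed sketch of that setup, using only the binaries, flags, and names visible in this trace; the RPC shell variable, the for loop, and the backgrounding with & are shorthand for what the autotest_common.sh helpers (such as waitforlisten) handle in the real script:

# paths, flags, and bdev names as in this run; structure condensed for readability
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# bdevperf sits idle (-z) until the perform_tests RPC that appears later in the trace
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
    -o 128k -q 1 -z -f -L bdev_raid &

# each base bdev: malloc disk -> error bdev (EE_*) -> passthru bdev (BaseBdevN)
for i in 1 2 3; do
    $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
    $RPC bdev_error_create BaseBdev${i}_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done

# raid0 across the three passthru bdevs, 64k strip size, with superblock (-s)
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

Once raid_bdev1 is online, the trace injects write failures into EE_BaseBdev1_malloc (bdev_error_inject_error ... write failure) and drives I/O through bdevperf.py perform_tests.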
00:18:49.704 [2024-07-13 11:30:24.403598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128008 ] 00:18:49.963 [2024-07-13 11:30:24.573448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.221 [2024-07-13 11:30:24.731496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.221 [2024-07-13 11:30:24.896007] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:50.787 11:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.787 11:30:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:50.787 11:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:50.787 11:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:50.787 BaseBdev1_malloc 00:18:50.787 11:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:51.046 true 00:18:51.046 11:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:51.305 [2024-07-13 11:30:25.962826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:51.305 [2024-07-13 11:30:25.963083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.305 [2024-07-13 11:30:25.963283] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:51.305 [2024-07-13 11:30:25.963412] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.305 [2024-07-13 11:30:25.965713] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.305 [2024-07-13 11:30:25.965914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.305 BaseBdev1 00:18:51.305 11:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:51.305 11:30:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:51.563 BaseBdev2_malloc 00:18:51.563 11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:51.859 true 00:18:51.859 11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:51.859 [2024-07-13 11:30:26.607789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:51.859 [2024-07-13 11:30:26.608032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.859 [2024-07-13 11:30:26.608231] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:52.131 [2024-07-13 
11:30:26.608365] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.131 [2024-07-13 11:30:26.610622] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.131 [2024-07-13 11:30:26.610796] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:52.131 BaseBdev2 00:18:52.131 11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:52.131 11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:52.131 BaseBdev3_malloc 00:18:52.131 11:30:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:52.416 true 00:18:52.416 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:52.683 [2024-07-13 11:30:27.204695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:52.683 [2024-07-13 11:30:27.204932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.683 [2024-07-13 11:30:27.205086] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:52.683 [2024-07-13 11:30:27.205218] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.683 [2024-07-13 11:30:27.207637] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.683 [2024-07-13 11:30:27.207816] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:52.683 BaseBdev3 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:52.683 [2024-07-13 11:30:27.388781] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.683 [2024-07-13 11:30:27.390776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.683 [2024-07-13 11:30:27.391015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:52.683 [2024-07-13 11:30:27.391400] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:52.683 [2024-07-13 11:30:27.391550] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:52.683 [2024-07-13 11:30:27.391733] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:52.683 [2024-07-13 11:30:27.392217] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:52.683 [2024-07-13 11:30:27.392350] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:52.683 [2024-07-13 11:30:27.392575] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:52.683 11:30:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.683 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.942 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.942 "name": "raid_bdev1", 00:18:52.942 "uuid": "d231db8b-65ea-480c-8acd-7f524c7dd433", 00:18:52.942 "strip_size_kb": 64, 00:18:52.942 "state": "online", 00:18:52.942 "raid_level": "raid0", 00:18:52.942 "superblock": true, 00:18:52.942 "num_base_bdevs": 3, 00:18:52.942 "num_base_bdevs_discovered": 3, 00:18:52.942 "num_base_bdevs_operational": 3, 00:18:52.942 "base_bdevs_list": [ 00:18:52.942 { 00:18:52.942 "name": "BaseBdev1", 00:18:52.942 "uuid": "49213816-f6ae-5fcf-ae39-23145a57c4dd", 00:18:52.942 "is_configured": true, 00:18:52.942 "data_offset": 2048, 00:18:52.942 "data_size": 63488 00:18:52.942 }, 00:18:52.942 { 00:18:52.942 "name": "BaseBdev2", 00:18:52.942 "uuid": "236ae735-422a-5fd3-8d2b-2e9fe95c507b", 00:18:52.942 "is_configured": true, 00:18:52.942 "data_offset": 2048, 00:18:52.942 "data_size": 63488 00:18:52.942 }, 00:18:52.942 { 00:18:52.942 "name": "BaseBdev3", 00:18:52.942 "uuid": "8bb88407-1a4e-5cc8-9a15-6dfad39c6e95", 00:18:52.942 "is_configured": true, 00:18:52.942 "data_offset": 2048, 00:18:52.942 "data_size": 63488 00:18:52.942 } 00:18:52.942 ] 00:18:52.942 }' 00:18:52.942 11:30:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.942 11:30:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.875 11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:53.875 11:30:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:53.875 [2024-07-13 11:30:28.363948] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:54.806 11:30:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.806 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.065 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.065 "name": "raid_bdev1", 00:18:55.065 "uuid": "d231db8b-65ea-480c-8acd-7f524c7dd433", 00:18:55.065 "strip_size_kb": 64, 00:18:55.065 "state": "online", 00:18:55.065 "raid_level": "raid0", 00:18:55.065 "superblock": true, 00:18:55.065 "num_base_bdevs": 3, 00:18:55.065 "num_base_bdevs_discovered": 3, 00:18:55.065 "num_base_bdevs_operational": 3, 00:18:55.065 "base_bdevs_list": [ 00:18:55.065 { 00:18:55.065 "name": "BaseBdev1", 00:18:55.065 "uuid": "49213816-f6ae-5fcf-ae39-23145a57c4dd", 00:18:55.065 "is_configured": true, 00:18:55.065 "data_offset": 2048, 00:18:55.065 "data_size": 63488 00:18:55.065 }, 00:18:55.065 { 00:18:55.065 "name": "BaseBdev2", 00:18:55.065 "uuid": "236ae735-422a-5fd3-8d2b-2e9fe95c507b", 00:18:55.065 "is_configured": true, 00:18:55.065 "data_offset": 2048, 00:18:55.065 "data_size": 63488 00:18:55.065 }, 00:18:55.065 { 00:18:55.065 "name": "BaseBdev3", 00:18:55.065 "uuid": "8bb88407-1a4e-5cc8-9a15-6dfad39c6e95", 00:18:55.065 "is_configured": true, 00:18:55.065 "data_offset": 2048, 00:18:55.065 "data_size": 63488 00:18:55.065 } 00:18:55.065 ] 00:18:55.065 }' 00:18:55.065 11:30:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.065 11:30:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:56.000 [2024-07-13 11:30:30.667199] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:56.000 [2024-07-13 11:30:30.667538] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:56.000 [2024-07-13 11:30:30.670184] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.000 [2024-07-13 11:30:30.670366] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.000 [2024-07-13 11:30:30.670446] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.000 [2024-07-13 11:30:30.670681] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:56.000 0 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 128008 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 128008 ']' 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 128008 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128008 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128008' 00:18:56.000 killing process with pid 128008 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 128008 00:18:56.000 [2024-07-13 11:30:30.702345] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.000 11:30:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 128008 00:18:56.259 [2024-07-13 11:30:30.848440] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.u8f2eZYcfu 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:18:57.192 00:18:57.192 real 0m7.511s 00:18:57.192 user 0m11.588s 00:18:57.192 sys 0m0.798s 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:57.192 11:30:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.192 ************************************ 00:18:57.192 END TEST raid_write_error_test 00:18:57.192 ************************************ 00:18:57.192 11:30:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:57.192 11:30:31 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:57.192 11:30:31 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:18:57.192 11:30:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:57.192 11:30:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:57.192 11:30:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:57.192 
************************************ 00:18:57.192 START TEST raid_state_function_test 00:18:57.192 ************************************ 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=128216 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 128216' 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:57.192 
Process raid pid: 128216 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 128216 /var/tmp/spdk-raid.sock 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 128216 ']' 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:57.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:57.192 11:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.450 [2024-07-13 11:30:31.957473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:57.450 [2024-07-13 11:30:31.957914] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.450 [2024-07-13 11:30:32.128451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.707 [2024-07-13 11:30:32.286450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.707 [2024-07-13 11:30:32.453430] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.276 11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.276 11:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:58.276 11:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:58.534 [2024-07-13 11:30:33.083655] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.534 [2024-07-13 11:30:33.083896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.534 [2024-07-13 11:30:33.084025] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:58.534 [2024-07-13 11:30:33.084161] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:58.534 [2024-07-13 11:30:33.084268] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:58.534 [2024-07-13 11:30:33.084328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:58.534 11:30:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.534 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.792 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:58.792 "name": "Existed_Raid", 00:18:58.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.792 "strip_size_kb": 64, 00:18:58.792 "state": "configuring", 00:18:58.792 "raid_level": "concat", 00:18:58.792 "superblock": false, 00:18:58.792 "num_base_bdevs": 3, 00:18:58.792 "num_base_bdevs_discovered": 0, 00:18:58.792 "num_base_bdevs_operational": 3, 00:18:58.792 "base_bdevs_list": [ 00:18:58.792 { 00:18:58.792 "name": "BaseBdev1", 00:18:58.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.792 "is_configured": false, 00:18:58.792 "data_offset": 0, 00:18:58.792 "data_size": 0 00:18:58.792 }, 00:18:58.792 { 00:18:58.792 "name": "BaseBdev2", 00:18:58.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.792 "is_configured": false, 00:18:58.792 "data_offset": 0, 00:18:58.792 "data_size": 0 00:18:58.792 }, 00:18:58.792 { 00:18:58.792 "name": "BaseBdev3", 00:18:58.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.792 "is_configured": false, 00:18:58.792 "data_offset": 0, 00:18:58.792 "data_size": 0 00:18:58.792 } 00:18:58.792 ] 00:18:58.792 }' 00:18:58.792 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:58.792 11:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.359 11:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:59.617 [2024-07-13 11:30:34.211696] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:59.617 [2024-07-13 11:30:34.211848] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:59.617 11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:59.876 [2024-07-13 11:30:34.403748] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:59.876 [2024-07-13 11:30:34.403957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:59.876 [2024-07-13 11:30:34.404067] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:59.876 [2024-07-13 11:30:34.404198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:18:59.876 [2024-07-13 11:30:34.404300] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:59.876 [2024-07-13 11:30:34.404423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:59.876 11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:59.876 [2024-07-13 11:30:34.623425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.876 BaseBdev1 00:19:00.134 11:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:00.134 11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:00.134 11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:00.134 11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:00.134 11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:00.134 11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:00.134 11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:00.393 11:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:00.393 [ 00:19:00.393 { 00:19:00.393 "name": "BaseBdev1", 00:19:00.393 "aliases": [ 00:19:00.393 "fa80f522-b673-4826-a4a8-0429b22f35a7" 00:19:00.393 ], 00:19:00.393 "product_name": "Malloc disk", 00:19:00.393 "block_size": 512, 00:19:00.393 "num_blocks": 65536, 00:19:00.393 "uuid": "fa80f522-b673-4826-a4a8-0429b22f35a7", 00:19:00.393 "assigned_rate_limits": { 00:19:00.393 "rw_ios_per_sec": 0, 00:19:00.393 "rw_mbytes_per_sec": 0, 00:19:00.393 "r_mbytes_per_sec": 0, 00:19:00.393 "w_mbytes_per_sec": 0 00:19:00.393 }, 00:19:00.393 "claimed": true, 00:19:00.393 "claim_type": "exclusive_write", 00:19:00.393 "zoned": false, 00:19:00.393 "supported_io_types": { 00:19:00.393 "read": true, 00:19:00.393 "write": true, 00:19:00.393 "unmap": true, 00:19:00.393 "flush": true, 00:19:00.393 "reset": true, 00:19:00.393 "nvme_admin": false, 00:19:00.393 "nvme_io": false, 00:19:00.393 "nvme_io_md": false, 00:19:00.393 "write_zeroes": true, 00:19:00.393 "zcopy": true, 00:19:00.393 "get_zone_info": false, 00:19:00.393 "zone_management": false, 00:19:00.393 "zone_append": false, 00:19:00.393 "compare": false, 00:19:00.393 "compare_and_write": false, 00:19:00.393 "abort": true, 00:19:00.393 "seek_hole": false, 00:19:00.393 "seek_data": false, 00:19:00.393 "copy": true, 00:19:00.393 "nvme_iov_md": false 00:19:00.393 }, 00:19:00.393 "memory_domains": [ 00:19:00.393 { 00:19:00.393 "dma_device_id": "system", 00:19:00.393 "dma_device_type": 1 00:19:00.393 }, 00:19:00.393 { 00:19:00.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.393 "dma_device_type": 2 00:19:00.393 } 00:19:00.393 ], 00:19:00.393 "driver_specific": {} 00:19:00.393 } 00:19:00.393 ] 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.393 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.652 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.652 "name": "Existed_Raid", 00:19:00.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.652 "strip_size_kb": 64, 00:19:00.652 "state": "configuring", 00:19:00.652 "raid_level": "concat", 00:19:00.652 "superblock": false, 00:19:00.652 "num_base_bdevs": 3, 00:19:00.652 "num_base_bdevs_discovered": 1, 00:19:00.652 "num_base_bdevs_operational": 3, 00:19:00.652 "base_bdevs_list": [ 00:19:00.652 { 00:19:00.652 "name": "BaseBdev1", 00:19:00.652 "uuid": "fa80f522-b673-4826-a4a8-0429b22f35a7", 00:19:00.652 "is_configured": true, 00:19:00.652 "data_offset": 0, 00:19:00.652 "data_size": 65536 00:19:00.652 }, 00:19:00.652 { 00:19:00.652 "name": "BaseBdev2", 00:19:00.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.652 "is_configured": false, 00:19:00.652 "data_offset": 0, 00:19:00.652 "data_size": 0 00:19:00.652 }, 00:19:00.652 { 00:19:00.652 "name": "BaseBdev3", 00:19:00.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.652 "is_configured": false, 00:19:00.652 "data_offset": 0, 00:19:00.652 "data_size": 0 00:19:00.652 } 00:19:00.652 ] 00:19:00.652 }' 00:19:00.652 11:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.652 11:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.586 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:01.586 [2024-07-13 11:30:36.247709] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:01.586 [2024-07-13 11:30:36.247896] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:19:01.586 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:01.844 [2024-07-13 11:30:36.515766] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:01.844 [2024-07-13 11:30:36.517709] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.844 [2024-07-13 11:30:36.517922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.845 [2024-07-13 11:30:36.518035] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:01.845 [2024-07-13 11:30:36.518125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.845 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.103 11:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.103 "name": "Existed_Raid", 00:19:02.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.103 "strip_size_kb": 64, 00:19:02.103 "state": "configuring", 00:19:02.103 "raid_level": "concat", 00:19:02.103 "superblock": false, 00:19:02.103 "num_base_bdevs": 3, 00:19:02.103 "num_base_bdevs_discovered": 1, 00:19:02.103 "num_base_bdevs_operational": 3, 00:19:02.103 "base_bdevs_list": [ 00:19:02.103 { 00:19:02.103 "name": "BaseBdev1", 00:19:02.103 "uuid": "fa80f522-b673-4826-a4a8-0429b22f35a7", 00:19:02.103 "is_configured": true, 00:19:02.103 "data_offset": 0, 00:19:02.103 "data_size": 65536 00:19:02.103 }, 00:19:02.103 { 00:19:02.103 "name": "BaseBdev2", 00:19:02.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.103 "is_configured": false, 00:19:02.103 "data_offset": 0, 00:19:02.103 "data_size": 0 00:19:02.103 }, 00:19:02.103 { 00:19:02.103 "name": "BaseBdev3", 00:19:02.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.103 "is_configured": false, 00:19:02.103 "data_offset": 0, 00:19:02.103 "data_size": 0 00:19:02.103 } 00:19:02.103 ] 00:19:02.103 }' 00:19:02.103 11:30:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.103 11:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.669 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:02.927 [2024-07-13 11:30:37.635832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.927 BaseBdev2 00:19:02.927 11:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:02.927 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:02.927 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:02.927 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:02.927 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:02.927 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:02.927 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.185 11:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:03.444 [ 00:19:03.444 { 00:19:03.444 "name": "BaseBdev2", 00:19:03.444 "aliases": [ 00:19:03.444 "bde14500-cebe-4105-a899-6149cb793d69" 00:19:03.444 ], 00:19:03.444 "product_name": "Malloc disk", 00:19:03.444 "block_size": 512, 00:19:03.444 "num_blocks": 65536, 00:19:03.444 "uuid": "bde14500-cebe-4105-a899-6149cb793d69", 00:19:03.444 "assigned_rate_limits": { 00:19:03.444 "rw_ios_per_sec": 0, 00:19:03.444 "rw_mbytes_per_sec": 0, 00:19:03.444 "r_mbytes_per_sec": 0, 00:19:03.444 "w_mbytes_per_sec": 0 00:19:03.444 }, 00:19:03.444 "claimed": true, 00:19:03.444 "claim_type": "exclusive_write", 00:19:03.444 "zoned": false, 00:19:03.444 "supported_io_types": { 00:19:03.444 "read": true, 00:19:03.444 "write": true, 00:19:03.444 "unmap": true, 00:19:03.444 "flush": true, 00:19:03.444 "reset": true, 00:19:03.444 "nvme_admin": false, 00:19:03.444 "nvme_io": false, 00:19:03.444 "nvme_io_md": false, 00:19:03.444 "write_zeroes": true, 00:19:03.444 "zcopy": true, 00:19:03.444 "get_zone_info": false, 00:19:03.444 "zone_management": false, 00:19:03.444 "zone_append": false, 00:19:03.444 "compare": false, 00:19:03.444 "compare_and_write": false, 00:19:03.444 "abort": true, 00:19:03.444 "seek_hole": false, 00:19:03.444 "seek_data": false, 00:19:03.444 "copy": true, 00:19:03.444 "nvme_iov_md": false 00:19:03.444 }, 00:19:03.444 "memory_domains": [ 00:19:03.444 { 00:19:03.444 "dma_device_id": "system", 00:19:03.444 "dma_device_type": 1 00:19:03.444 }, 00:19:03.444 { 00:19:03.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.444 "dma_device_type": 2 00:19:03.444 } 00:19:03.444 ], 00:19:03.444 "driver_specific": {} 00:19:03.444 } 00:19:03.444 ] 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:03.444 11:30:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.444 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.702 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.702 "name": "Existed_Raid", 00:19:03.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.702 "strip_size_kb": 64, 00:19:03.702 "state": "configuring", 00:19:03.702 "raid_level": "concat", 00:19:03.702 "superblock": false, 00:19:03.702 "num_base_bdevs": 3, 00:19:03.702 "num_base_bdevs_discovered": 2, 00:19:03.702 "num_base_bdevs_operational": 3, 00:19:03.702 "base_bdevs_list": [ 00:19:03.702 { 00:19:03.702 "name": "BaseBdev1", 00:19:03.702 "uuid": "fa80f522-b673-4826-a4a8-0429b22f35a7", 00:19:03.702 "is_configured": true, 00:19:03.702 "data_offset": 0, 00:19:03.702 "data_size": 65536 00:19:03.702 }, 00:19:03.702 { 00:19:03.702 "name": "BaseBdev2", 00:19:03.702 "uuid": "bde14500-cebe-4105-a899-6149cb793d69", 00:19:03.702 "is_configured": true, 00:19:03.702 "data_offset": 0, 00:19:03.702 "data_size": 65536 00:19:03.702 }, 00:19:03.702 { 00:19:03.702 "name": "BaseBdev3", 00:19:03.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.703 "is_configured": false, 00:19:03.703 "data_offset": 0, 00:19:03.703 "data_size": 0 00:19:03.703 } 00:19:03.703 ] 00:19:03.703 }' 00:19:03.703 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.703 11:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.269 11:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:04.527 [2024-07-13 11:30:39.175422] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:04.527 [2024-07-13 11:30:39.175697] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:04.527 [2024-07-13 11:30:39.175737] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:04.527 [2024-07-13 11:30:39.175972] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005860 00:19:04.527 [2024-07-13 11:30:39.176438] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:04.527 [2024-07-13 11:30:39.176567] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:04.527 [2024-07-13 11:30:39.176958] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:04.527 BaseBdev3 00:19:04.527 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:04.527 11:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:04.527 11:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:04.527 11:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:04.527 11:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:04.527 11:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:04.527 11:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:04.786 11:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:05.045 [ 00:19:05.045 { 00:19:05.045 "name": "BaseBdev3", 00:19:05.045 "aliases": [ 00:19:05.045 "2f03d732-9204-43fc-86e8-af0659fa61ea" 00:19:05.045 ], 00:19:05.045 "product_name": "Malloc disk", 00:19:05.045 "block_size": 512, 00:19:05.045 "num_blocks": 65536, 00:19:05.045 "uuid": "2f03d732-9204-43fc-86e8-af0659fa61ea", 00:19:05.045 "assigned_rate_limits": { 00:19:05.045 "rw_ios_per_sec": 0, 00:19:05.045 "rw_mbytes_per_sec": 0, 00:19:05.045 "r_mbytes_per_sec": 0, 00:19:05.046 "w_mbytes_per_sec": 0 00:19:05.046 }, 00:19:05.046 "claimed": true, 00:19:05.046 "claim_type": "exclusive_write", 00:19:05.046 "zoned": false, 00:19:05.046 "supported_io_types": { 00:19:05.046 "read": true, 00:19:05.046 "write": true, 00:19:05.046 "unmap": true, 00:19:05.046 "flush": true, 00:19:05.046 "reset": true, 00:19:05.046 "nvme_admin": false, 00:19:05.046 "nvme_io": false, 00:19:05.046 "nvme_io_md": false, 00:19:05.046 "write_zeroes": true, 00:19:05.046 "zcopy": true, 00:19:05.046 "get_zone_info": false, 00:19:05.046 "zone_management": false, 00:19:05.046 "zone_append": false, 00:19:05.046 "compare": false, 00:19:05.046 "compare_and_write": false, 00:19:05.046 "abort": true, 00:19:05.046 "seek_hole": false, 00:19:05.046 "seek_data": false, 00:19:05.046 "copy": true, 00:19:05.046 "nvme_iov_md": false 00:19:05.046 }, 00:19:05.046 "memory_domains": [ 00:19:05.046 { 00:19:05.046 "dma_device_id": "system", 00:19:05.046 "dma_device_type": 1 00:19:05.046 }, 00:19:05.046 { 00:19:05.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.046 "dma_device_type": 2 00:19:05.046 } 00:19:05.046 ], 00:19:05.046 "driver_specific": {} 00:19:05.046 } 00:19:05.046 ] 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 
-- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.046 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.304 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.305 "name": "Existed_Raid", 00:19:05.305 "uuid": "b7a28b0c-ce4f-4edc-9c43-d20ae6dc5b5f", 00:19:05.305 "strip_size_kb": 64, 00:19:05.305 "state": "online", 00:19:05.305 "raid_level": "concat", 00:19:05.305 "superblock": false, 00:19:05.305 "num_base_bdevs": 3, 00:19:05.305 "num_base_bdevs_discovered": 3, 00:19:05.305 "num_base_bdevs_operational": 3, 00:19:05.305 "base_bdevs_list": [ 00:19:05.305 { 00:19:05.305 "name": "BaseBdev1", 00:19:05.305 "uuid": "fa80f522-b673-4826-a4a8-0429b22f35a7", 00:19:05.305 "is_configured": true, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 }, 00:19:05.305 { 00:19:05.305 "name": "BaseBdev2", 00:19:05.305 "uuid": "bde14500-cebe-4105-a899-6149cb793d69", 00:19:05.305 "is_configured": true, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 }, 00:19:05.305 { 00:19:05.305 "name": "BaseBdev3", 00:19:05.305 "uuid": "2f03d732-9204-43fc-86e8-af0659fa61ea", 00:19:05.305 "is_configured": true, 00:19:05.305 "data_offset": 0, 00:19:05.305 "data_size": 65536 00:19:05.305 } 00:19:05.305 ] 00:19:05.305 }' 00:19:05.305 11:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.305 11:30:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.871 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:05.871 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:05.872 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:05.872 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:05.872 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:05.872 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:05.872 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:05.872 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:06.130 [2024-07-13 11:30:40.659901] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.130 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:06.130 "name": "Existed_Raid", 00:19:06.130 "aliases": [ 00:19:06.130 "b7a28b0c-ce4f-4edc-9c43-d20ae6dc5b5f" 00:19:06.130 ], 00:19:06.130 "product_name": "Raid Volume", 00:19:06.130 "block_size": 512, 00:19:06.130 "num_blocks": 196608, 00:19:06.130 "uuid": "b7a28b0c-ce4f-4edc-9c43-d20ae6dc5b5f", 00:19:06.130 "assigned_rate_limits": { 00:19:06.130 "rw_ios_per_sec": 0, 00:19:06.130 "rw_mbytes_per_sec": 0, 00:19:06.130 "r_mbytes_per_sec": 0, 00:19:06.130 "w_mbytes_per_sec": 0 00:19:06.130 }, 00:19:06.130 "claimed": false, 00:19:06.130 "zoned": false, 00:19:06.130 "supported_io_types": { 00:19:06.130 "read": true, 00:19:06.130 "write": true, 00:19:06.130 "unmap": true, 00:19:06.130 "flush": true, 00:19:06.130 "reset": true, 00:19:06.130 "nvme_admin": false, 00:19:06.130 "nvme_io": false, 00:19:06.130 "nvme_io_md": false, 00:19:06.130 "write_zeroes": true, 00:19:06.130 "zcopy": false, 00:19:06.130 "get_zone_info": false, 00:19:06.130 "zone_management": false, 00:19:06.130 "zone_append": false, 00:19:06.130 "compare": false, 00:19:06.130 "compare_and_write": false, 00:19:06.130 "abort": false, 00:19:06.130 "seek_hole": false, 00:19:06.130 "seek_data": false, 00:19:06.130 "copy": false, 00:19:06.130 "nvme_iov_md": false 00:19:06.130 }, 00:19:06.130 "memory_domains": [ 00:19:06.130 { 00:19:06.130 "dma_device_id": "system", 00:19:06.130 "dma_device_type": 1 00:19:06.130 }, 00:19:06.130 { 00:19:06.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.130 "dma_device_type": 2 00:19:06.130 }, 00:19:06.130 { 00:19:06.130 "dma_device_id": "system", 00:19:06.130 "dma_device_type": 1 00:19:06.130 }, 00:19:06.130 { 00:19:06.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.130 "dma_device_type": 2 00:19:06.130 }, 00:19:06.130 { 00:19:06.130 "dma_device_id": "system", 00:19:06.130 "dma_device_type": 1 00:19:06.130 }, 00:19:06.130 { 00:19:06.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.130 "dma_device_type": 2 00:19:06.130 } 00:19:06.130 ], 00:19:06.130 "driver_specific": { 00:19:06.130 "raid": { 00:19:06.130 "uuid": "b7a28b0c-ce4f-4edc-9c43-d20ae6dc5b5f", 00:19:06.130 "strip_size_kb": 64, 00:19:06.130 "state": "online", 00:19:06.130 "raid_level": "concat", 00:19:06.130 "superblock": false, 00:19:06.130 "num_base_bdevs": 3, 00:19:06.130 "num_base_bdevs_discovered": 3, 00:19:06.130 "num_base_bdevs_operational": 3, 00:19:06.130 "base_bdevs_list": [ 00:19:06.130 { 00:19:06.130 "name": "BaseBdev1", 00:19:06.130 "uuid": "fa80f522-b673-4826-a4a8-0429b22f35a7", 00:19:06.130 "is_configured": true, 00:19:06.130 "data_offset": 0, 00:19:06.130 "data_size": 65536 00:19:06.130 }, 00:19:06.130 { 00:19:06.130 "name": "BaseBdev2", 00:19:06.130 "uuid": "bde14500-cebe-4105-a899-6149cb793d69", 00:19:06.130 "is_configured": true, 00:19:06.130 "data_offset": 0, 00:19:06.130 "data_size": 65536 00:19:06.130 }, 00:19:06.130 { 00:19:06.130 "name": "BaseBdev3", 00:19:06.130 "uuid": "2f03d732-9204-43fc-86e8-af0659fa61ea", 00:19:06.131 "is_configured": true, 00:19:06.131 "data_offset": 0, 00:19:06.131 "data_size": 65536 00:19:06.131 } 00:19:06.131 ] 00:19:06.131 } 00:19:06.131 } 00:19:06.131 }' 
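The Raid Volume dump above is followed by verify_raid_bdev_properties pulling the configured base bdev names out of driver_specific.raid.base_bdevs_list and then comparing block_size, md_size, md_interleave, and dif_type of each base bdev against the raid volume itself (512 and three nulls in this run). A sketch of that walk, built only from the rpc.py and jq invocations visible in the trace; the variable names and the explicit loop are illustrative shorthand for the per-name checks the script expands inline:

# jq filters and bdev names as seen in the trace; loop and variables are shorthand
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
raid_json=$($RPC bdev_get_bdevs -b Existed_Raid | jq '.[]')

base_bdev_names=$(jq -r \
    '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' \
    <<< "$raid_json")

for name in $base_bdev_names; do
    base_json=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
    # block_size must match the raid volume (512 here); metadata/DIF fields are null on both
    [[ $(jq .block_size    <<< "$base_json") == $(jq .block_size    <<< "$raid_json") ]]
    [[ $(jq .md_size       <<< "$base_json") == $(jq .md_size       <<< "$raid_json") ]]
    [[ $(jq .md_interleave <<< "$base_json") == $(jq .md_interleave <<< "$raid_json") ]]
    [[ $(jq .dif_type      <<< "$base_json") == $(jq .dif_type      <<< "$raid_json") ]]
done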
00:19:06.131 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:06.131 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:06.131 BaseBdev2 00:19:06.131 BaseBdev3' 00:19:06.131 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:06.131 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:06.131 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:06.389 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:06.389 "name": "BaseBdev1", 00:19:06.389 "aliases": [ 00:19:06.389 "fa80f522-b673-4826-a4a8-0429b22f35a7" 00:19:06.389 ], 00:19:06.389 "product_name": "Malloc disk", 00:19:06.389 "block_size": 512, 00:19:06.389 "num_blocks": 65536, 00:19:06.389 "uuid": "fa80f522-b673-4826-a4a8-0429b22f35a7", 00:19:06.389 "assigned_rate_limits": { 00:19:06.389 "rw_ios_per_sec": 0, 00:19:06.389 "rw_mbytes_per_sec": 0, 00:19:06.389 "r_mbytes_per_sec": 0, 00:19:06.389 "w_mbytes_per_sec": 0 00:19:06.389 }, 00:19:06.389 "claimed": true, 00:19:06.389 "claim_type": "exclusive_write", 00:19:06.389 "zoned": false, 00:19:06.389 "supported_io_types": { 00:19:06.389 "read": true, 00:19:06.389 "write": true, 00:19:06.389 "unmap": true, 00:19:06.389 "flush": true, 00:19:06.389 "reset": true, 00:19:06.389 "nvme_admin": false, 00:19:06.389 "nvme_io": false, 00:19:06.389 "nvme_io_md": false, 00:19:06.389 "write_zeroes": true, 00:19:06.389 "zcopy": true, 00:19:06.389 "get_zone_info": false, 00:19:06.389 "zone_management": false, 00:19:06.389 "zone_append": false, 00:19:06.389 "compare": false, 00:19:06.389 "compare_and_write": false, 00:19:06.389 "abort": true, 00:19:06.389 "seek_hole": false, 00:19:06.389 "seek_data": false, 00:19:06.389 "copy": true, 00:19:06.389 "nvme_iov_md": false 00:19:06.389 }, 00:19:06.389 "memory_domains": [ 00:19:06.389 { 00:19:06.389 "dma_device_id": "system", 00:19:06.389 "dma_device_type": 1 00:19:06.389 }, 00:19:06.389 { 00:19:06.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.389 "dma_device_type": 2 00:19:06.389 } 00:19:06.389 ], 00:19:06.389 "driver_specific": {} 00:19:06.389 }' 00:19:06.389 11:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.389 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.389 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:06.389 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.389 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.647 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:06.647 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.647 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.647 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:06.647 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.647 11:30:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.905 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:06.906 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:06.906 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:06.906 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:06.906 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:06.906 "name": "BaseBdev2", 00:19:06.906 "aliases": [ 00:19:06.906 "bde14500-cebe-4105-a899-6149cb793d69" 00:19:06.906 ], 00:19:06.906 "product_name": "Malloc disk", 00:19:06.906 "block_size": 512, 00:19:06.906 "num_blocks": 65536, 00:19:06.906 "uuid": "bde14500-cebe-4105-a899-6149cb793d69", 00:19:06.906 "assigned_rate_limits": { 00:19:06.906 "rw_ios_per_sec": 0, 00:19:06.906 "rw_mbytes_per_sec": 0, 00:19:06.906 "r_mbytes_per_sec": 0, 00:19:06.906 "w_mbytes_per_sec": 0 00:19:06.906 }, 00:19:06.906 "claimed": true, 00:19:06.906 "claim_type": "exclusive_write", 00:19:06.906 "zoned": false, 00:19:06.906 "supported_io_types": { 00:19:06.906 "read": true, 00:19:06.906 "write": true, 00:19:06.906 "unmap": true, 00:19:06.906 "flush": true, 00:19:06.906 "reset": true, 00:19:06.906 "nvme_admin": false, 00:19:06.906 "nvme_io": false, 00:19:06.906 "nvme_io_md": false, 00:19:06.906 "write_zeroes": true, 00:19:06.906 "zcopy": true, 00:19:06.906 "get_zone_info": false, 00:19:06.906 "zone_management": false, 00:19:06.906 "zone_append": false, 00:19:06.906 "compare": false, 00:19:06.906 "compare_and_write": false, 00:19:06.906 "abort": true, 00:19:06.906 "seek_hole": false, 00:19:06.906 "seek_data": false, 00:19:06.906 "copy": true, 00:19:06.906 "nvme_iov_md": false 00:19:06.906 }, 00:19:06.906 "memory_domains": [ 00:19:06.906 { 00:19:06.906 "dma_device_id": "system", 00:19:06.906 "dma_device_type": 1 00:19:06.906 }, 00:19:06.906 { 00:19:06.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.906 "dma_device_type": 2 00:19:06.906 } 00:19:06.906 ], 00:19:06.906 "driver_specific": {} 00:19:06.906 }' 00:19:06.906 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.906 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.164 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.164 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.164 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.164 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.164 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.164 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.423 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.423 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.423 11:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.423 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.423 11:30:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:07.423 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:07.423 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:07.682 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:07.682 "name": "BaseBdev3", 00:19:07.682 "aliases": [ 00:19:07.682 "2f03d732-9204-43fc-86e8-af0659fa61ea" 00:19:07.682 ], 00:19:07.682 "product_name": "Malloc disk", 00:19:07.682 "block_size": 512, 00:19:07.682 "num_blocks": 65536, 00:19:07.682 "uuid": "2f03d732-9204-43fc-86e8-af0659fa61ea", 00:19:07.682 "assigned_rate_limits": { 00:19:07.682 "rw_ios_per_sec": 0, 00:19:07.682 "rw_mbytes_per_sec": 0, 00:19:07.682 "r_mbytes_per_sec": 0, 00:19:07.682 "w_mbytes_per_sec": 0 00:19:07.682 }, 00:19:07.682 "claimed": true, 00:19:07.682 "claim_type": "exclusive_write", 00:19:07.682 "zoned": false, 00:19:07.682 "supported_io_types": { 00:19:07.682 "read": true, 00:19:07.682 "write": true, 00:19:07.682 "unmap": true, 00:19:07.682 "flush": true, 00:19:07.682 "reset": true, 00:19:07.682 "nvme_admin": false, 00:19:07.682 "nvme_io": false, 00:19:07.682 "nvme_io_md": false, 00:19:07.682 "write_zeroes": true, 00:19:07.682 "zcopy": true, 00:19:07.682 "get_zone_info": false, 00:19:07.682 "zone_management": false, 00:19:07.682 "zone_append": false, 00:19:07.682 "compare": false, 00:19:07.682 "compare_and_write": false, 00:19:07.682 "abort": true, 00:19:07.682 "seek_hole": false, 00:19:07.682 "seek_data": false, 00:19:07.682 "copy": true, 00:19:07.682 "nvme_iov_md": false 00:19:07.682 }, 00:19:07.682 "memory_domains": [ 00:19:07.682 { 00:19:07.682 "dma_device_id": "system", 00:19:07.682 "dma_device_type": 1 00:19:07.682 }, 00:19:07.682 { 00:19:07.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:07.682 "dma_device_type": 2 00:19:07.682 } 00:19:07.682 ], 00:19:07.682 "driver_specific": {} 00:19:07.682 }' 00:19:07.682 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.682 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:07.941 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:07.941 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.941 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:07.941 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:07.941 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.941 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:07.941 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:07.941 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.200 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:08.200 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:08.200 11:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:08.458 [2024-07-13 
11:30:43.020156] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.458 [2024-07-13 11:30:43.020297] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:08.458 [2024-07-13 11:30:43.020453] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.458 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:08.458 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:08.458 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:08.458 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:08.458 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:08.458 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:08.458 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.459 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.717 11:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.717 "name": "Existed_Raid", 00:19:08.717 "uuid": "b7a28b0c-ce4f-4edc-9c43-d20ae6dc5b5f", 00:19:08.717 "strip_size_kb": 64, 00:19:08.717 "state": "offline", 00:19:08.717 "raid_level": "concat", 00:19:08.717 "superblock": false, 00:19:08.717 "num_base_bdevs": 3, 00:19:08.717 "num_base_bdevs_discovered": 2, 00:19:08.717 "num_base_bdevs_operational": 2, 00:19:08.717 "base_bdevs_list": [ 00:19:08.717 { 00:19:08.717 "name": null, 00:19:08.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.717 "is_configured": false, 00:19:08.717 "data_offset": 0, 00:19:08.717 "data_size": 65536 00:19:08.717 }, 00:19:08.717 { 00:19:08.717 "name": "BaseBdev2", 00:19:08.717 "uuid": "bde14500-cebe-4105-a899-6149cb793d69", 00:19:08.717 "is_configured": true, 00:19:08.717 "data_offset": 0, 00:19:08.717 "data_size": 65536 00:19:08.717 }, 00:19:08.717 { 00:19:08.717 "name": "BaseBdev3", 00:19:08.717 "uuid": "2f03d732-9204-43fc-86e8-af0659fa61ea", 00:19:08.717 "is_configured": true, 00:19:08.717 "data_offset": 0, 00:19:08.717 "data_size": 65536 00:19:08.717 } 00:19:08.717 ] 00:19:08.717 }' 00:19:08.717 11:30:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.717 11:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:09.654 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:09.654 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:09.654 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.654 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:09.654 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:09.654 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:09.654 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:09.913 [2024-07-13 11:30:44.595893] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.172 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:10.172 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:10.172 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:10.172 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.430 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:10.430 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.430 11:30:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:10.688 [2024-07-13 11:30:45.239786] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:10.688 [2024-07-13 11:30:45.239964] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:19:10.688 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:10.688 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:10.688 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.688 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:10.945 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:10.945 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:10.945 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:10.945 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:10.945 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:10.945 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:11.204 BaseBdev2 00:19:11.204 11:30:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:11.204 11:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:11.204 11:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:11.204 11:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:11.204 11:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:11.204 11:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:11.204 11:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:11.462 11:30:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:11.721 [ 00:19:11.721 { 00:19:11.721 "name": "BaseBdev2", 00:19:11.721 "aliases": [ 00:19:11.721 "5a23f97b-ce64-4b5f-b17c-09ad5d61405d" 00:19:11.721 ], 00:19:11.721 "product_name": "Malloc disk", 00:19:11.721 "block_size": 512, 00:19:11.721 "num_blocks": 65536, 00:19:11.721 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:11.721 "assigned_rate_limits": { 00:19:11.721 "rw_ios_per_sec": 0, 00:19:11.721 "rw_mbytes_per_sec": 0, 00:19:11.721 "r_mbytes_per_sec": 0, 00:19:11.721 "w_mbytes_per_sec": 0 00:19:11.721 }, 00:19:11.721 "claimed": false, 00:19:11.721 "zoned": false, 00:19:11.721 "supported_io_types": { 00:19:11.721 "read": true, 00:19:11.721 "write": true, 00:19:11.721 "unmap": true, 00:19:11.721 "flush": true, 00:19:11.721 "reset": true, 00:19:11.721 "nvme_admin": false, 00:19:11.721 "nvme_io": false, 00:19:11.721 "nvme_io_md": false, 00:19:11.721 "write_zeroes": true, 00:19:11.721 "zcopy": true, 00:19:11.721 "get_zone_info": false, 00:19:11.721 "zone_management": false, 00:19:11.721 "zone_append": false, 00:19:11.721 "compare": false, 00:19:11.721 "compare_and_write": false, 00:19:11.721 "abort": true, 00:19:11.721 "seek_hole": false, 00:19:11.721 "seek_data": false, 00:19:11.721 "copy": true, 00:19:11.721 "nvme_iov_md": false 00:19:11.721 }, 00:19:11.721 "memory_domains": [ 00:19:11.721 { 00:19:11.721 "dma_device_id": "system", 00:19:11.721 "dma_device_type": 1 00:19:11.721 }, 00:19:11.721 { 00:19:11.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.721 "dma_device_type": 2 00:19:11.721 } 00:19:11.721 ], 00:19:11.721 "driver_specific": {} 00:19:11.721 } 00:19:11.721 ] 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:11.721 BaseBdev3 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:11.721 11:30:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:11.721 11:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:11.980 11:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:12.239 [ 00:19:12.239 { 00:19:12.239 "name": "BaseBdev3", 00:19:12.239 "aliases": [ 00:19:12.239 "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3" 00:19:12.239 ], 00:19:12.239 "product_name": "Malloc disk", 00:19:12.239 "block_size": 512, 00:19:12.239 "num_blocks": 65536, 00:19:12.239 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:12.239 "assigned_rate_limits": { 00:19:12.239 "rw_ios_per_sec": 0, 00:19:12.239 "rw_mbytes_per_sec": 0, 00:19:12.239 "r_mbytes_per_sec": 0, 00:19:12.239 "w_mbytes_per_sec": 0 00:19:12.239 }, 00:19:12.239 "claimed": false, 00:19:12.239 "zoned": false, 00:19:12.239 "supported_io_types": { 00:19:12.239 "read": true, 00:19:12.239 "write": true, 00:19:12.239 "unmap": true, 00:19:12.239 "flush": true, 00:19:12.239 "reset": true, 00:19:12.239 "nvme_admin": false, 00:19:12.239 "nvme_io": false, 00:19:12.239 "nvme_io_md": false, 00:19:12.239 "write_zeroes": true, 00:19:12.239 "zcopy": true, 00:19:12.239 "get_zone_info": false, 00:19:12.239 "zone_management": false, 00:19:12.239 "zone_append": false, 00:19:12.239 "compare": false, 00:19:12.239 "compare_and_write": false, 00:19:12.239 "abort": true, 00:19:12.239 "seek_hole": false, 00:19:12.239 "seek_data": false, 00:19:12.239 "copy": true, 00:19:12.239 "nvme_iov_md": false 00:19:12.239 }, 00:19:12.239 "memory_domains": [ 00:19:12.239 { 00:19:12.239 "dma_device_id": "system", 00:19:12.239 "dma_device_type": 1 00:19:12.239 }, 00:19:12.239 { 00:19:12.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.239 "dma_device_type": 2 00:19:12.239 } 00:19:12.239 ], 00:19:12.239 "driver_specific": {} 00:19:12.239 } 00:19:12.239 ] 00:19:12.239 11:30:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:12.239 11:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:12.239 11:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:12.239 11:30:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:12.497 [2024-07-13 11:30:47.086166] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.497 [2024-07-13 11:30:47.086346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.497 [2024-07-13 11:30:47.086473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:12.497 [2024-07-13 11:30:47.088646] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:12.497 11:30:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:12.497 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:12.497 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:12.497 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:12.497 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:12.498 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:12.498 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.498 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.498 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.498 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.498 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.498 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.756 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.756 "name": "Existed_Raid", 00:19:12.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.756 "strip_size_kb": 64, 00:19:12.756 "state": "configuring", 00:19:12.756 "raid_level": "concat", 00:19:12.756 "superblock": false, 00:19:12.756 "num_base_bdevs": 3, 00:19:12.756 "num_base_bdevs_discovered": 2, 00:19:12.756 "num_base_bdevs_operational": 3, 00:19:12.756 "base_bdevs_list": [ 00:19:12.756 { 00:19:12.756 "name": "BaseBdev1", 00:19:12.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.756 "is_configured": false, 00:19:12.756 "data_offset": 0, 00:19:12.756 "data_size": 0 00:19:12.756 }, 00:19:12.756 { 00:19:12.756 "name": "BaseBdev2", 00:19:12.756 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:12.756 "is_configured": true, 00:19:12.756 "data_offset": 0, 00:19:12.756 "data_size": 65536 00:19:12.756 }, 00:19:12.756 { 00:19:12.756 "name": "BaseBdev3", 00:19:12.756 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:12.756 "is_configured": true, 00:19:12.757 "data_offset": 0, 00:19:12.757 "data_size": 65536 00:19:12.757 } 00:19:12.757 ] 00:19:12.757 }' 00:19:12.757 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.757 11:30:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:13.325 11:30:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:13.584 [2024-07-13 11:30:48.130347] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:13.584 11:30:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.584 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.842 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.842 "name": "Existed_Raid", 00:19:13.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.843 "strip_size_kb": 64, 00:19:13.843 "state": "configuring", 00:19:13.843 "raid_level": "concat", 00:19:13.843 "superblock": false, 00:19:13.843 "num_base_bdevs": 3, 00:19:13.843 "num_base_bdevs_discovered": 1, 00:19:13.843 "num_base_bdevs_operational": 3, 00:19:13.843 "base_bdevs_list": [ 00:19:13.843 { 00:19:13.843 "name": "BaseBdev1", 00:19:13.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:13.843 "is_configured": false, 00:19:13.843 "data_offset": 0, 00:19:13.843 "data_size": 0 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "name": null, 00:19:13.843 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:13.843 "is_configured": false, 00:19:13.843 "data_offset": 0, 00:19:13.843 "data_size": 65536 00:19:13.843 }, 00:19:13.843 { 00:19:13.843 "name": "BaseBdev3", 00:19:13.843 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:13.843 "is_configured": true, 00:19:13.843 "data_offset": 0, 00:19:13.843 "data_size": 65536 00:19:13.843 } 00:19:13.843 ] 00:19:13.843 }' 00:19:13.843 11:30:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.843 11:30:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.410 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.410 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:14.668 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:14.668 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:14.927 [2024-07-13 11:30:49.479744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:14.927 BaseBdev1 00:19:14.927 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:14.927 11:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:14.927 11:30:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:14.927 11:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:14.927 11:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:14.927 11:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:14.927 11:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:15.189 [ 00:19:15.189 { 00:19:15.189 "name": "BaseBdev1", 00:19:15.189 "aliases": [ 00:19:15.189 "02b97cf9-fcc2-4876-9bd2-1a8da2660d59" 00:19:15.189 ], 00:19:15.189 "product_name": "Malloc disk", 00:19:15.189 "block_size": 512, 00:19:15.189 "num_blocks": 65536, 00:19:15.189 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:15.189 "assigned_rate_limits": { 00:19:15.189 "rw_ios_per_sec": 0, 00:19:15.189 "rw_mbytes_per_sec": 0, 00:19:15.189 "r_mbytes_per_sec": 0, 00:19:15.189 "w_mbytes_per_sec": 0 00:19:15.189 }, 00:19:15.189 "claimed": true, 00:19:15.189 "claim_type": "exclusive_write", 00:19:15.189 "zoned": false, 00:19:15.189 "supported_io_types": { 00:19:15.189 "read": true, 00:19:15.189 "write": true, 00:19:15.189 "unmap": true, 00:19:15.189 "flush": true, 00:19:15.189 "reset": true, 00:19:15.189 "nvme_admin": false, 00:19:15.189 "nvme_io": false, 00:19:15.189 "nvme_io_md": false, 00:19:15.189 "write_zeroes": true, 00:19:15.189 "zcopy": true, 00:19:15.189 "get_zone_info": false, 00:19:15.189 "zone_management": false, 00:19:15.189 "zone_append": false, 00:19:15.189 "compare": false, 00:19:15.189 "compare_and_write": false, 00:19:15.189 "abort": true, 00:19:15.189 "seek_hole": false, 00:19:15.189 "seek_data": false, 00:19:15.189 "copy": true, 00:19:15.189 "nvme_iov_md": false 00:19:15.189 }, 00:19:15.189 "memory_domains": [ 00:19:15.189 { 00:19:15.189 "dma_device_id": "system", 00:19:15.189 "dma_device_type": 1 00:19:15.189 }, 00:19:15.189 { 00:19:15.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.189 "dma_device_type": 2 00:19:15.189 } 00:19:15.189 ], 00:19:15.189 "driver_specific": {} 00:19:15.189 } 00:19:15.189 ] 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.189 11:30:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:15.465 11:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:15.465 "name": "Existed_Raid", 00:19:15.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:15.465 "strip_size_kb": 64, 00:19:15.465 "state": "configuring", 00:19:15.465 "raid_level": "concat", 00:19:15.465 "superblock": false, 00:19:15.465 "num_base_bdevs": 3, 00:19:15.465 "num_base_bdevs_discovered": 2, 00:19:15.465 "num_base_bdevs_operational": 3, 00:19:15.465 "base_bdevs_list": [ 00:19:15.465 { 00:19:15.465 "name": "BaseBdev1", 00:19:15.465 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:15.465 "is_configured": true, 00:19:15.465 "data_offset": 0, 00:19:15.465 "data_size": 65536 00:19:15.465 }, 00:19:15.465 { 00:19:15.465 "name": null, 00:19:15.465 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:15.465 "is_configured": false, 00:19:15.465 "data_offset": 0, 00:19:15.465 "data_size": 65536 00:19:15.465 }, 00:19:15.465 { 00:19:15.465 "name": "BaseBdev3", 00:19:15.465 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:15.465 "is_configured": true, 00:19:15.465 "data_offset": 0, 00:19:15.465 "data_size": 65536 00:19:15.465 } 00:19:15.465 ] 00:19:15.465 }' 00:19:15.465 11:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:15.465 11:30:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:16.414 11:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.414 11:30:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:16.414 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:16.414 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:16.673 [2024-07-13 11:30:51.344062] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:16.673 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:16.931 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:16.931 "name": "Existed_Raid", 00:19:16.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:16.931 "strip_size_kb": 64, 00:19:16.931 "state": "configuring", 00:19:16.931 "raid_level": "concat", 00:19:16.931 "superblock": false, 00:19:16.931 "num_base_bdevs": 3, 00:19:16.931 "num_base_bdevs_discovered": 1, 00:19:16.931 "num_base_bdevs_operational": 3, 00:19:16.931 "base_bdevs_list": [ 00:19:16.931 { 00:19:16.931 "name": "BaseBdev1", 00:19:16.931 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:16.931 "is_configured": true, 00:19:16.931 "data_offset": 0, 00:19:16.931 "data_size": 65536 00:19:16.931 }, 00:19:16.931 { 00:19:16.931 "name": null, 00:19:16.931 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:16.931 "is_configured": false, 00:19:16.931 "data_offset": 0, 00:19:16.931 "data_size": 65536 00:19:16.931 }, 00:19:16.932 { 00:19:16.932 "name": null, 00:19:16.932 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:16.932 "is_configured": false, 00:19:16.932 "data_offset": 0, 00:19:16.932 "data_size": 65536 00:19:16.932 } 00:19:16.932 ] 00:19:16.932 }' 00:19:16.932 11:30:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:16.932 11:30:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.499 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.499 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:17.757 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:17.757 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:18.016 [2024-07-13 11:30:52.664263] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.016 11:30:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.016 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:18.280 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.280 "name": "Existed_Raid", 00:19:18.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:18.280 "strip_size_kb": 64, 00:19:18.280 "state": "configuring", 00:19:18.280 "raid_level": "concat", 00:19:18.280 "superblock": false, 00:19:18.280 "num_base_bdevs": 3, 00:19:18.280 "num_base_bdevs_discovered": 2, 00:19:18.280 "num_base_bdevs_operational": 3, 00:19:18.280 "base_bdevs_list": [ 00:19:18.280 { 00:19:18.280 "name": "BaseBdev1", 00:19:18.280 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:18.281 "is_configured": true, 00:19:18.281 "data_offset": 0, 00:19:18.281 "data_size": 65536 00:19:18.281 }, 00:19:18.281 { 00:19:18.281 "name": null, 00:19:18.281 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:18.281 "is_configured": false, 00:19:18.281 "data_offset": 0, 00:19:18.281 "data_size": 65536 00:19:18.281 }, 00:19:18.281 { 00:19:18.281 "name": "BaseBdev3", 00:19:18.281 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:18.281 "is_configured": true, 00:19:18.281 "data_offset": 0, 00:19:18.281 "data_size": 65536 00:19:18.281 } 00:19:18.281 ] 00:19:18.281 }' 00:19:18.281 11:30:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.281 11:30:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.218 11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.218 11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:19.218 11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:19.218 11:30:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:19.477 [2024-07-13 11:30:54.124558] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:19.477 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:19.735 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:19.735 "name": "Existed_Raid", 00:19:19.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.735 "strip_size_kb": 64, 00:19:19.735 "state": "configuring", 00:19:19.735 "raid_level": "concat", 00:19:19.735 "superblock": false, 00:19:19.735 "num_base_bdevs": 3, 00:19:19.735 "num_base_bdevs_discovered": 1, 00:19:19.735 "num_base_bdevs_operational": 3, 00:19:19.735 "base_bdevs_list": [ 00:19:19.735 { 00:19:19.736 "name": null, 00:19:19.736 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:19.736 "is_configured": false, 00:19:19.736 "data_offset": 0, 00:19:19.736 "data_size": 65536 00:19:19.736 }, 00:19:19.736 { 00:19:19.736 "name": null, 00:19:19.736 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:19.736 "is_configured": false, 00:19:19.736 "data_offset": 0, 00:19:19.736 "data_size": 65536 00:19:19.736 }, 00:19:19.736 { 00:19:19.736 "name": "BaseBdev3", 00:19:19.736 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:19.736 "is_configured": true, 00:19:19.736 "data_offset": 0, 00:19:19.736 "data_size": 65536 00:19:19.736 } 00:19:19.736 ] 00:19:19.736 }' 00:19:19.736 11:30:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:19.736 11:30:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.671 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.671 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:20.671 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:20.671 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:20.929 [2024-07-13 11:30:55.624063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.929 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:20.929 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.929 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:20.929 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:20.929 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.929 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:19:20.929 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.930 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.930 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.930 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.930 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.930 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.189 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.189 "name": "Existed_Raid", 00:19:21.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.189 "strip_size_kb": 64, 00:19:21.189 "state": "configuring", 00:19:21.189 "raid_level": "concat", 00:19:21.189 "superblock": false, 00:19:21.189 "num_base_bdevs": 3, 00:19:21.189 "num_base_bdevs_discovered": 2, 00:19:21.189 "num_base_bdevs_operational": 3, 00:19:21.189 "base_bdevs_list": [ 00:19:21.189 { 00:19:21.189 "name": null, 00:19:21.189 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:21.189 "is_configured": false, 00:19:21.189 "data_offset": 0, 00:19:21.189 "data_size": 65536 00:19:21.189 }, 00:19:21.189 { 00:19:21.189 "name": "BaseBdev2", 00:19:21.189 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:21.189 "is_configured": true, 00:19:21.189 "data_offset": 0, 00:19:21.189 "data_size": 65536 00:19:21.189 }, 00:19:21.189 { 00:19:21.189 "name": "BaseBdev3", 00:19:21.189 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:21.189 "is_configured": true, 00:19:21.189 "data_offset": 0, 00:19:21.189 "data_size": 65536 00:19:21.189 } 00:19:21.189 ] 00:19:21.189 }' 00:19:21.189 11:30:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.189 11:30:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.757 11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.757 11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:22.015 11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:22.015 11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.015 11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:22.273 11:30:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 02b97cf9-fcc2-4876-9bd2-1a8da2660d59 00:19:22.531 [2024-07-13 11:30:57.132975] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:22.531 [2024-07-13 11:30:57.133210] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:22.531 [2024-07-13 11:30:57.133265] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:22.531 [2024-07-13 11:30:57.133577] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:22.531 [2024-07-13 11:30:57.134179] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:22.531 [2024-07-13 11:30:57.134326] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:19:22.531 NewBaseBdev 00:19:22.531 [2024-07-13 11:30:57.134787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.531 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:22.531 11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:22.531 11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:22.531 11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:22.531 11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:22.531 11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:22.531 11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:22.789 11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:23.046 [ 00:19:23.047 { 00:19:23.047 "name": "NewBaseBdev", 00:19:23.047 "aliases": [ 00:19:23.047 "02b97cf9-fcc2-4876-9bd2-1a8da2660d59" 00:19:23.047 ], 00:19:23.047 "product_name": "Malloc disk", 00:19:23.047 "block_size": 512, 00:19:23.047 "num_blocks": 65536, 00:19:23.047 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:23.047 "assigned_rate_limits": { 00:19:23.047 "rw_ios_per_sec": 0, 00:19:23.047 "rw_mbytes_per_sec": 0, 00:19:23.047 "r_mbytes_per_sec": 0, 00:19:23.047 "w_mbytes_per_sec": 0 00:19:23.047 }, 00:19:23.047 "claimed": true, 00:19:23.047 "claim_type": "exclusive_write", 00:19:23.047 "zoned": false, 00:19:23.047 "supported_io_types": { 00:19:23.047 "read": true, 00:19:23.047 "write": true, 00:19:23.047 "unmap": true, 00:19:23.047 "flush": true, 00:19:23.047 "reset": true, 00:19:23.047 "nvme_admin": false, 00:19:23.047 "nvme_io": false, 00:19:23.047 "nvme_io_md": false, 00:19:23.047 "write_zeroes": true, 00:19:23.047 "zcopy": true, 00:19:23.047 "get_zone_info": false, 00:19:23.047 "zone_management": false, 00:19:23.047 "zone_append": false, 00:19:23.047 "compare": false, 00:19:23.047 "compare_and_write": false, 00:19:23.047 "abort": true, 00:19:23.047 "seek_hole": false, 00:19:23.047 "seek_data": false, 00:19:23.047 "copy": true, 00:19:23.047 "nvme_iov_md": false 00:19:23.047 }, 00:19:23.047 "memory_domains": [ 00:19:23.047 { 00:19:23.047 "dma_device_id": "system", 00:19:23.047 "dma_device_type": 1 00:19:23.047 }, 00:19:23.047 { 00:19:23.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.047 "dma_device_type": 2 00:19:23.047 } 00:19:23.047 ], 00:19:23.047 "driver_specific": {} 00:19:23.047 } 00:19:23.047 ] 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.047 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.305 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.305 "name": "Existed_Raid", 00:19:23.305 "uuid": "42e9ba0c-71ca-477c-81dc-be59ad1dd1af", 00:19:23.305 "strip_size_kb": 64, 00:19:23.305 "state": "online", 00:19:23.305 "raid_level": "concat", 00:19:23.305 "superblock": false, 00:19:23.305 "num_base_bdevs": 3, 00:19:23.305 "num_base_bdevs_discovered": 3, 00:19:23.305 "num_base_bdevs_operational": 3, 00:19:23.305 "base_bdevs_list": [ 00:19:23.305 { 00:19:23.305 "name": "NewBaseBdev", 00:19:23.305 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:23.305 "is_configured": true, 00:19:23.305 "data_offset": 0, 00:19:23.305 "data_size": 65536 00:19:23.305 }, 00:19:23.305 { 00:19:23.305 "name": "BaseBdev2", 00:19:23.305 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:23.305 "is_configured": true, 00:19:23.305 "data_offset": 0, 00:19:23.305 "data_size": 65536 00:19:23.305 }, 00:19:23.305 { 00:19:23.305 "name": "BaseBdev3", 00:19:23.305 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:23.305 "is_configured": true, 00:19:23.305 "data_offset": 0, 00:19:23.305 "data_size": 65536 00:19:23.305 } 00:19:23.305 ] 00:19:23.305 }' 00:19:23.305 11:30:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.305 11:30:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.871 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:23.871 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:23.871 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:23.871 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:23.871 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:23.871 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:23.871 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:23.871 11:30:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:24.130 [2024-07-13 11:30:58.698129] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.130 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:24.130 "name": "Existed_Raid", 00:19:24.130 "aliases": [ 00:19:24.130 "42e9ba0c-71ca-477c-81dc-be59ad1dd1af" 00:19:24.130 ], 00:19:24.130 "product_name": "Raid Volume", 00:19:24.130 "block_size": 512, 00:19:24.130 "num_blocks": 196608, 00:19:24.130 "uuid": "42e9ba0c-71ca-477c-81dc-be59ad1dd1af", 00:19:24.130 "assigned_rate_limits": { 00:19:24.130 "rw_ios_per_sec": 0, 00:19:24.130 "rw_mbytes_per_sec": 0, 00:19:24.130 "r_mbytes_per_sec": 0, 00:19:24.130 "w_mbytes_per_sec": 0 00:19:24.130 }, 00:19:24.130 "claimed": false, 00:19:24.130 "zoned": false, 00:19:24.130 "supported_io_types": { 00:19:24.130 "read": true, 00:19:24.130 "write": true, 00:19:24.130 "unmap": true, 00:19:24.130 "flush": true, 00:19:24.130 "reset": true, 00:19:24.130 "nvme_admin": false, 00:19:24.130 "nvme_io": false, 00:19:24.130 "nvme_io_md": false, 00:19:24.130 "write_zeroes": true, 00:19:24.130 "zcopy": false, 00:19:24.130 "get_zone_info": false, 00:19:24.130 "zone_management": false, 00:19:24.130 "zone_append": false, 00:19:24.130 "compare": false, 00:19:24.130 "compare_and_write": false, 00:19:24.130 "abort": false, 00:19:24.130 "seek_hole": false, 00:19:24.130 "seek_data": false, 00:19:24.130 "copy": false, 00:19:24.130 "nvme_iov_md": false 00:19:24.130 }, 00:19:24.130 "memory_domains": [ 00:19:24.130 { 00:19:24.130 "dma_device_id": "system", 00:19:24.130 "dma_device_type": 1 00:19:24.130 }, 00:19:24.130 { 00:19:24.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.130 "dma_device_type": 2 00:19:24.130 }, 00:19:24.130 { 00:19:24.130 "dma_device_id": "system", 00:19:24.130 "dma_device_type": 1 00:19:24.130 }, 00:19:24.130 { 00:19:24.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.130 "dma_device_type": 2 00:19:24.130 }, 00:19:24.130 { 00:19:24.130 "dma_device_id": "system", 00:19:24.130 "dma_device_type": 1 00:19:24.130 }, 00:19:24.130 { 00:19:24.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.130 "dma_device_type": 2 00:19:24.130 } 00:19:24.130 ], 00:19:24.130 "driver_specific": { 00:19:24.130 "raid": { 00:19:24.130 "uuid": "42e9ba0c-71ca-477c-81dc-be59ad1dd1af", 00:19:24.130 "strip_size_kb": 64, 00:19:24.130 "state": "online", 00:19:24.130 "raid_level": "concat", 00:19:24.130 "superblock": false, 00:19:24.130 "num_base_bdevs": 3, 00:19:24.130 "num_base_bdevs_discovered": 3, 00:19:24.130 "num_base_bdevs_operational": 3, 00:19:24.130 "base_bdevs_list": [ 00:19:24.130 { 00:19:24.130 "name": "NewBaseBdev", 00:19:24.130 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:24.130 "is_configured": true, 00:19:24.130 "data_offset": 0, 00:19:24.130 "data_size": 65536 00:19:24.130 }, 00:19:24.130 { 00:19:24.130 "name": "BaseBdev2", 00:19:24.130 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:24.130 "is_configured": true, 00:19:24.130 "data_offset": 0, 00:19:24.130 "data_size": 65536 00:19:24.130 }, 00:19:24.130 { 00:19:24.130 "name": "BaseBdev3", 00:19:24.130 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:24.130 "is_configured": true, 00:19:24.130 "data_offset": 0, 00:19:24.130 "data_size": 65536 00:19:24.130 } 00:19:24.130 ] 00:19:24.130 } 00:19:24.130 } 00:19:24.130 }' 00:19:24.130 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:24.130 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:24.130 BaseBdev2 00:19:24.130 BaseBdev3' 00:19:24.130 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.130 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:24.130 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:24.389 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:24.389 "name": "NewBaseBdev", 00:19:24.389 "aliases": [ 00:19:24.389 "02b97cf9-fcc2-4876-9bd2-1a8da2660d59" 00:19:24.389 ], 00:19:24.389 "product_name": "Malloc disk", 00:19:24.389 "block_size": 512, 00:19:24.389 "num_blocks": 65536, 00:19:24.389 "uuid": "02b97cf9-fcc2-4876-9bd2-1a8da2660d59", 00:19:24.389 "assigned_rate_limits": { 00:19:24.389 "rw_ios_per_sec": 0, 00:19:24.389 "rw_mbytes_per_sec": 0, 00:19:24.389 "r_mbytes_per_sec": 0, 00:19:24.389 "w_mbytes_per_sec": 0 00:19:24.389 }, 00:19:24.389 "claimed": true, 00:19:24.389 "claim_type": "exclusive_write", 00:19:24.389 "zoned": false, 00:19:24.389 "supported_io_types": { 00:19:24.389 "read": true, 00:19:24.389 "write": true, 00:19:24.389 "unmap": true, 00:19:24.389 "flush": true, 00:19:24.389 "reset": true, 00:19:24.389 "nvme_admin": false, 00:19:24.389 "nvme_io": false, 00:19:24.389 "nvme_io_md": false, 00:19:24.389 "write_zeroes": true, 00:19:24.389 "zcopy": true, 00:19:24.389 "get_zone_info": false, 00:19:24.389 "zone_management": false, 00:19:24.389 "zone_append": false, 00:19:24.389 "compare": false, 00:19:24.389 "compare_and_write": false, 00:19:24.389 "abort": true, 00:19:24.389 "seek_hole": false, 00:19:24.389 "seek_data": false, 00:19:24.389 "copy": true, 00:19:24.389 "nvme_iov_md": false 00:19:24.389 }, 00:19:24.389 "memory_domains": [ 00:19:24.389 { 00:19:24.389 "dma_device_id": "system", 00:19:24.389 "dma_device_type": 1 00:19:24.389 }, 00:19:24.389 { 00:19:24.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:24.389 "dma_device_type": 2 00:19:24.389 } 00:19:24.389 ], 00:19:24.389 "driver_specific": {} 00:19:24.389 }' 00:19:24.389 11:30:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.389 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:24.389 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:24.389 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.389 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:24.647 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:24.647 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.647 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:24.647 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:24.647 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.647 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:24.906 11:30:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:24.906 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:24.906 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:24.906 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.164 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.164 "name": "BaseBdev2", 00:19:25.164 "aliases": [ 00:19:25.164 "5a23f97b-ce64-4b5f-b17c-09ad5d61405d" 00:19:25.164 ], 00:19:25.164 "product_name": "Malloc disk", 00:19:25.164 "block_size": 512, 00:19:25.164 "num_blocks": 65536, 00:19:25.164 "uuid": "5a23f97b-ce64-4b5f-b17c-09ad5d61405d", 00:19:25.164 "assigned_rate_limits": { 00:19:25.164 "rw_ios_per_sec": 0, 00:19:25.164 "rw_mbytes_per_sec": 0, 00:19:25.164 "r_mbytes_per_sec": 0, 00:19:25.164 "w_mbytes_per_sec": 0 00:19:25.164 }, 00:19:25.164 "claimed": true, 00:19:25.164 "claim_type": "exclusive_write", 00:19:25.164 "zoned": false, 00:19:25.164 "supported_io_types": { 00:19:25.164 "read": true, 00:19:25.164 "write": true, 00:19:25.164 "unmap": true, 00:19:25.164 "flush": true, 00:19:25.164 "reset": true, 00:19:25.164 "nvme_admin": false, 00:19:25.164 "nvme_io": false, 00:19:25.164 "nvme_io_md": false, 00:19:25.164 "write_zeroes": true, 00:19:25.164 "zcopy": true, 00:19:25.164 "get_zone_info": false, 00:19:25.164 "zone_management": false, 00:19:25.164 "zone_append": false, 00:19:25.164 "compare": false, 00:19:25.164 "compare_and_write": false, 00:19:25.164 "abort": true, 00:19:25.164 "seek_hole": false, 00:19:25.164 "seek_data": false, 00:19:25.164 "copy": true, 00:19:25.164 "nvme_iov_md": false 00:19:25.164 }, 00:19:25.164 "memory_domains": [ 00:19:25.164 { 00:19:25.164 "dma_device_id": "system", 00:19:25.164 "dma_device_type": 1 00:19:25.164 }, 00:19:25.164 { 00:19:25.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.164 "dma_device_type": 2 00:19:25.164 } 00:19:25.164 ], 00:19:25.164 "driver_specific": {} 00:19:25.164 }' 00:19:25.164 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.164 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.164 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.164 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.164 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.422 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.422 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.422 11:30:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.422 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:25.422 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.422 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:25.422 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:25.422 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:25.422 11:31:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:25.422 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:25.681 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:25.681 "name": "BaseBdev3", 00:19:25.681 "aliases": [ 00:19:25.681 "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3" 00:19:25.681 ], 00:19:25.681 "product_name": "Malloc disk", 00:19:25.681 "block_size": 512, 00:19:25.681 "num_blocks": 65536, 00:19:25.681 "uuid": "e6bfa52a-adb1-4687-a0cd-edcc34c1a1b3", 00:19:25.681 "assigned_rate_limits": { 00:19:25.681 "rw_ios_per_sec": 0, 00:19:25.681 "rw_mbytes_per_sec": 0, 00:19:25.681 "r_mbytes_per_sec": 0, 00:19:25.681 "w_mbytes_per_sec": 0 00:19:25.681 }, 00:19:25.681 "claimed": true, 00:19:25.681 "claim_type": "exclusive_write", 00:19:25.681 "zoned": false, 00:19:25.681 "supported_io_types": { 00:19:25.681 "read": true, 00:19:25.681 "write": true, 00:19:25.681 "unmap": true, 00:19:25.681 "flush": true, 00:19:25.681 "reset": true, 00:19:25.681 "nvme_admin": false, 00:19:25.681 "nvme_io": false, 00:19:25.681 "nvme_io_md": false, 00:19:25.681 "write_zeroes": true, 00:19:25.681 "zcopy": true, 00:19:25.681 "get_zone_info": false, 00:19:25.681 "zone_management": false, 00:19:25.681 "zone_append": false, 00:19:25.681 "compare": false, 00:19:25.681 "compare_and_write": false, 00:19:25.681 "abort": true, 00:19:25.681 "seek_hole": false, 00:19:25.681 "seek_data": false, 00:19:25.681 "copy": true, 00:19:25.681 "nvme_iov_md": false 00:19:25.681 }, 00:19:25.681 "memory_domains": [ 00:19:25.681 { 00:19:25.681 "dma_device_id": "system", 00:19:25.681 "dma_device_type": 1 00:19:25.681 }, 00:19:25.681 { 00:19:25.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.681 "dma_device_type": 2 00:19:25.681 } 00:19:25.681 ], 00:19:25.681 "driver_specific": {} 00:19:25.681 }' 00:19:25.681 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.681 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:25.939 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:25.939 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.939 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:25.939 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:25.939 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:25.939 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:26.197 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:26.197 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.197 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:26.197 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:26.197 11:31:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:26.455 [2024-07-13 11:31:01.066195] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:26.455 [2024-07-13 
11:31:01.066352] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:26.455 [2024-07-13 11:31:01.066532] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:26.455 [2024-07-13 11:31:01.066730] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:26.455 [2024-07-13 11:31:01.066843] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 128216 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 128216 ']' 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 128216 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 128216 00:19:26.455 killing process with pid 128216 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 128216' 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 128216 00:19:26.455 11:31:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 128216 00:19:26.455 [2024-07-13 11:31:01.103471] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:26.713 [2024-07-13 11:31:01.292301] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:27.648 ************************************ 00:19:27.648 END TEST raid_state_function_test 00:19:27.648 ************************************ 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:27.648 00:19:27.648 real 0m30.334s 00:19:27.648 user 0m57.174s 00:19:27.648 sys 0m3.164s 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.648 11:31:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:27.648 11:31:02 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:19:27.648 11:31:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:27.648 11:31:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.648 11:31:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:27.648 ************************************ 00:19:27.648 START TEST raid_state_function_test_sb 00:19:27.648 ************************************ 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:27.648 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=129244 00:19:27.649 Process raid pid: 129244 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 129244' 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 129244 /var/tmp/spdk-raid.sock 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 129244 ']' 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:27.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:27.649 11:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.649 [2024-07-13 11:31:02.356705] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:27.649 [2024-07-13 11:31:02.356922] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.907 [2024-07-13 11:31:02.531642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.165 [2024-07-13 11:31:02.728929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.165 [2024-07-13 11:31:02.895510] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:28.731 11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:28.731 11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:19:28.731 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:28.989 [2024-07-13 11:31:03.551951] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:28.989 [2024-07-13 11:31:03.552029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:28.989 [2024-07-13 11:31:03.552058] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:28.989 [2024-07-13 11:31:03.552085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:28.989 [2024-07-13 11:31:03.552093] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:28.989 [2024-07-13 11:31:03.552109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.989 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.247 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.247 "name": "Existed_Raid", 00:19:29.247 "uuid": "ba84e3e9-19dd-4355-a68d-4094ea9b51e7", 00:19:29.247 "strip_size_kb": 64, 00:19:29.247 "state": "configuring", 00:19:29.247 "raid_level": "concat", 00:19:29.247 "superblock": true, 00:19:29.247 "num_base_bdevs": 3, 00:19:29.247 "num_base_bdevs_discovered": 0, 00:19:29.247 "num_base_bdevs_operational": 3, 00:19:29.247 "base_bdevs_list": [ 00:19:29.247 { 00:19:29.247 "name": "BaseBdev1", 00:19:29.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.247 "is_configured": false, 00:19:29.247 "data_offset": 0, 00:19:29.247 "data_size": 0 00:19:29.247 }, 00:19:29.247 { 00:19:29.247 "name": "BaseBdev2", 00:19:29.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.247 "is_configured": false, 00:19:29.247 "data_offset": 0, 00:19:29.247 "data_size": 0 00:19:29.247 }, 00:19:29.247 { 00:19:29.247 "name": "BaseBdev3", 00:19:29.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.247 "is_configured": false, 00:19:29.247 "data_offset": 0, 00:19:29.247 "data_size": 0 00:19:29.247 } 00:19:29.247 ] 00:19:29.248 }' 00:19:29.248 11:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.248 11:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.813 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:30.071 [2024-07-13 11:31:04.663975] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:30.071 [2024-07-13 11:31:04.664007] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:30.071 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:30.328 [2024-07-13 11:31:04.924034] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:30.328 [2024-07-13 11:31:04.924087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:30.328 [2024-07-13 11:31:04.924114] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:30.328 [2024-07-13 11:31:04.924131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:30.328 [2024-07-13 11:31:04.924138] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:30.328 [2024-07-13 11:31:04.924163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev3 doesn't exist now 00:19:30.328 11:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:30.585 [2024-07-13 11:31:05.146512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.585 BaseBdev1 00:19:30.585 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:30.585 11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:30.585 11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:30.585 11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:30.585 11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:30.585 11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:30.585 11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.843 11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:30.843 [ 00:19:30.843 { 00:19:30.843 "name": "BaseBdev1", 00:19:30.843 "aliases": [ 00:19:30.843 "fcc15721-fe8c-46b7-8d07-469956d7339f" 00:19:30.843 ], 00:19:30.843 "product_name": "Malloc disk", 00:19:30.843 "block_size": 512, 00:19:30.843 "num_blocks": 65536, 00:19:30.843 "uuid": "fcc15721-fe8c-46b7-8d07-469956d7339f", 00:19:30.843 "assigned_rate_limits": { 00:19:30.843 "rw_ios_per_sec": 0, 00:19:30.843 "rw_mbytes_per_sec": 0, 00:19:30.843 "r_mbytes_per_sec": 0, 00:19:30.843 "w_mbytes_per_sec": 0 00:19:30.843 }, 00:19:30.843 "claimed": true, 00:19:30.843 "claim_type": "exclusive_write", 00:19:30.843 "zoned": false, 00:19:30.843 "supported_io_types": { 00:19:30.843 "read": true, 00:19:30.843 "write": true, 00:19:30.843 "unmap": true, 00:19:30.843 "flush": true, 00:19:30.843 "reset": true, 00:19:30.843 "nvme_admin": false, 00:19:30.843 "nvme_io": false, 00:19:30.843 "nvme_io_md": false, 00:19:30.843 "write_zeroes": true, 00:19:30.843 "zcopy": true, 00:19:30.843 "get_zone_info": false, 00:19:30.843 "zone_management": false, 00:19:30.844 "zone_append": false, 00:19:30.844 "compare": false, 00:19:30.844 "compare_and_write": false, 00:19:30.844 "abort": true, 00:19:30.844 "seek_hole": false, 00:19:30.844 "seek_data": false, 00:19:30.844 "copy": true, 00:19:30.844 "nvme_iov_md": false 00:19:30.844 }, 00:19:30.844 "memory_domains": [ 00:19:30.844 { 00:19:30.844 "dma_device_id": "system", 00:19:30.844 "dma_device_type": 1 00:19:30.844 }, 00:19:30.844 { 00:19:30.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.844 "dma_device_type": 2 00:19:30.844 } 00:19:30.844 ], 00:19:30.844 "driver_specific": {} 00:19:30.844 } 00:19:30.844 ] 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:31.112 11:31:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:31.112 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:31.112 "name": "Existed_Raid", 00:19:31.112 "uuid": "ce95fc7a-da85-44d5-998e-920fb20af783", 00:19:31.112 "strip_size_kb": 64, 00:19:31.112 "state": "configuring", 00:19:31.112 "raid_level": "concat", 00:19:31.112 "superblock": true, 00:19:31.112 "num_base_bdevs": 3, 00:19:31.112 "num_base_bdevs_discovered": 1, 00:19:31.112 "num_base_bdevs_operational": 3, 00:19:31.112 "base_bdevs_list": [ 00:19:31.112 { 00:19:31.112 "name": "BaseBdev1", 00:19:31.112 "uuid": "fcc15721-fe8c-46b7-8d07-469956d7339f", 00:19:31.112 "is_configured": true, 00:19:31.112 "data_offset": 2048, 00:19:31.112 "data_size": 63488 00:19:31.112 }, 00:19:31.112 { 00:19:31.112 "name": "BaseBdev2", 00:19:31.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.112 "is_configured": false, 00:19:31.112 "data_offset": 0, 00:19:31.112 "data_size": 0 00:19:31.112 }, 00:19:31.112 { 00:19:31.112 "name": "BaseBdev3", 00:19:31.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.113 "is_configured": false, 00:19:31.113 "data_offset": 0, 00:19:31.113 "data_size": 0 00:19:31.113 } 00:19:31.113 ] 00:19:31.113 }' 00:19:31.113 11:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:31.370 11:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.934 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:32.192 [2024-07-13 11:31:06.714813] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:32.192 [2024-07-13 11:31:06.714905] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:19:32.192 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:32.450 [2024-07-13 11:31:06.982913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:32.450 [2024-07-13 11:31:06.984790] bdev.c:8157:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:19:32.450 [2024-07-13 11:31:06.984859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:32.450 [2024-07-13 11:31:06.984887] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:32.450 [2024-07-13 11:31:06.984925] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:32.450 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:32.451 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:32.451 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:32.451 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.451 11:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:32.451 11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:32.451 "name": "Existed_Raid", 00:19:32.451 "uuid": "867aa62b-5a44-408b-8088-73e0b5e99ec0", 00:19:32.451 "strip_size_kb": 64, 00:19:32.451 "state": "configuring", 00:19:32.451 "raid_level": "concat", 00:19:32.451 "superblock": true, 00:19:32.451 "num_base_bdevs": 3, 00:19:32.451 "num_base_bdevs_discovered": 1, 00:19:32.451 "num_base_bdevs_operational": 3, 00:19:32.451 "base_bdevs_list": [ 00:19:32.451 { 00:19:32.451 "name": "BaseBdev1", 00:19:32.451 "uuid": "fcc15721-fe8c-46b7-8d07-469956d7339f", 00:19:32.451 "is_configured": true, 00:19:32.451 "data_offset": 2048, 00:19:32.451 "data_size": 63488 00:19:32.451 }, 00:19:32.451 { 00:19:32.451 "name": "BaseBdev2", 00:19:32.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.451 "is_configured": false, 00:19:32.451 "data_offset": 0, 00:19:32.451 "data_size": 0 00:19:32.451 }, 00:19:32.451 { 00:19:32.451 "name": "BaseBdev3", 00:19:32.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.451 "is_configured": false, 00:19:32.451 "data_offset": 0, 00:19:32.451 "data_size": 0 00:19:32.451 } 00:19:32.451 ] 00:19:32.451 }' 00:19:32.451 11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:32.451 11:31:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:33.386 11:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:33.386 [2024-07-13 11:31:08.111281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:33.386 BaseBdev2 00:19:33.386 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:33.386 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:33.386 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:33.386 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:33.386 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:33.386 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:33.386 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:33.645 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:33.904 [ 00:19:33.904 { 00:19:33.904 "name": "BaseBdev2", 00:19:33.904 "aliases": [ 00:19:33.904 "4ba10be9-0b08-4ad8-983e-3b25c892ed6a" 00:19:33.904 ], 00:19:33.904 "product_name": "Malloc disk", 00:19:33.904 "block_size": 512, 00:19:33.904 "num_blocks": 65536, 00:19:33.904 "uuid": "4ba10be9-0b08-4ad8-983e-3b25c892ed6a", 00:19:33.904 "assigned_rate_limits": { 00:19:33.904 "rw_ios_per_sec": 0, 00:19:33.904 "rw_mbytes_per_sec": 0, 00:19:33.904 "r_mbytes_per_sec": 0, 00:19:33.904 "w_mbytes_per_sec": 0 00:19:33.904 }, 00:19:33.904 "claimed": true, 00:19:33.904 "claim_type": "exclusive_write", 00:19:33.904 "zoned": false, 00:19:33.904 "supported_io_types": { 00:19:33.904 "read": true, 00:19:33.904 "write": true, 00:19:33.904 "unmap": true, 00:19:33.904 "flush": true, 00:19:33.904 "reset": true, 00:19:33.904 "nvme_admin": false, 00:19:33.904 "nvme_io": false, 00:19:33.904 "nvme_io_md": false, 00:19:33.904 "write_zeroes": true, 00:19:33.904 "zcopy": true, 00:19:33.904 "get_zone_info": false, 00:19:33.904 "zone_management": false, 00:19:33.904 "zone_append": false, 00:19:33.904 "compare": false, 00:19:33.904 "compare_and_write": false, 00:19:33.904 "abort": true, 00:19:33.904 "seek_hole": false, 00:19:33.904 "seek_data": false, 00:19:33.904 "copy": true, 00:19:33.904 "nvme_iov_md": false 00:19:33.904 }, 00:19:33.904 "memory_domains": [ 00:19:33.904 { 00:19:33.904 "dma_device_id": "system", 00:19:33.904 "dma_device_type": 1 00:19:33.904 }, 00:19:33.904 { 00:19:33.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.904 "dma_device_type": 2 00:19:33.904 } 00:19:33.904 ], 00:19:33.904 "driver_specific": {} 00:19:33.904 } 00:19:33.904 ] 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.904 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.163 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.163 "name": "Existed_Raid", 00:19:34.163 "uuid": "867aa62b-5a44-408b-8088-73e0b5e99ec0", 00:19:34.163 "strip_size_kb": 64, 00:19:34.163 "state": "configuring", 00:19:34.163 "raid_level": "concat", 00:19:34.163 "superblock": true, 00:19:34.163 "num_base_bdevs": 3, 00:19:34.163 "num_base_bdevs_discovered": 2, 00:19:34.163 "num_base_bdevs_operational": 3, 00:19:34.163 "base_bdevs_list": [ 00:19:34.163 { 00:19:34.163 "name": "BaseBdev1", 00:19:34.163 "uuid": "fcc15721-fe8c-46b7-8d07-469956d7339f", 00:19:34.163 "is_configured": true, 00:19:34.163 "data_offset": 2048, 00:19:34.163 "data_size": 63488 00:19:34.163 }, 00:19:34.163 { 00:19:34.163 "name": "BaseBdev2", 00:19:34.163 "uuid": "4ba10be9-0b08-4ad8-983e-3b25c892ed6a", 00:19:34.163 "is_configured": true, 00:19:34.163 "data_offset": 2048, 00:19:34.163 "data_size": 63488 00:19:34.163 }, 00:19:34.163 { 00:19:34.163 "name": "BaseBdev3", 00:19:34.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.163 "is_configured": false, 00:19:34.163 "data_offset": 0, 00:19:34.163 "data_size": 0 00:19:34.163 } 00:19:34.163 ] 00:19:34.163 }' 00:19:34.163 11:31:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.163 11:31:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:34.730 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:34.989 [2024-07-13 11:31:09.702910] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:34.989 [2024-07-13 11:31:09.703116] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:19:34.989 [2024-07-13 11:31:09.703130] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:34.989 BaseBdev3 00:19:34.989 [2024-07-13 11:31:09.703297] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005860 00:19:34.989 [2024-07-13 11:31:09.703609] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:19:34.989 [2024-07-13 11:31:09.703634] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:19:34.989 [2024-07-13 11:31:09.703776] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.989 11:31:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:34.989 11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:34.989 11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:34.989 11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:34.989 11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:34.989 11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:34.989 11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:35.247 11:31:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:35.506 [ 00:19:35.506 { 00:19:35.506 "name": "BaseBdev3", 00:19:35.506 "aliases": [ 00:19:35.506 "8f3154ba-31f0-4132-bcbf-65c3129ad0d7" 00:19:35.506 ], 00:19:35.506 "product_name": "Malloc disk", 00:19:35.506 "block_size": 512, 00:19:35.506 "num_blocks": 65536, 00:19:35.506 "uuid": "8f3154ba-31f0-4132-bcbf-65c3129ad0d7", 00:19:35.506 "assigned_rate_limits": { 00:19:35.506 "rw_ios_per_sec": 0, 00:19:35.506 "rw_mbytes_per_sec": 0, 00:19:35.506 "r_mbytes_per_sec": 0, 00:19:35.506 "w_mbytes_per_sec": 0 00:19:35.506 }, 00:19:35.506 "claimed": true, 00:19:35.506 "claim_type": "exclusive_write", 00:19:35.506 "zoned": false, 00:19:35.506 "supported_io_types": { 00:19:35.506 "read": true, 00:19:35.506 "write": true, 00:19:35.506 "unmap": true, 00:19:35.506 "flush": true, 00:19:35.506 "reset": true, 00:19:35.506 "nvme_admin": false, 00:19:35.506 "nvme_io": false, 00:19:35.506 "nvme_io_md": false, 00:19:35.506 "write_zeroes": true, 00:19:35.506 "zcopy": true, 00:19:35.506 "get_zone_info": false, 00:19:35.506 "zone_management": false, 00:19:35.506 "zone_append": false, 00:19:35.506 "compare": false, 00:19:35.506 "compare_and_write": false, 00:19:35.506 "abort": true, 00:19:35.506 "seek_hole": false, 00:19:35.506 "seek_data": false, 00:19:35.506 "copy": true, 00:19:35.506 "nvme_iov_md": false 00:19:35.506 }, 00:19:35.506 "memory_domains": [ 00:19:35.506 { 00:19:35.506 "dma_device_id": "system", 00:19:35.506 "dma_device_type": 1 00:19:35.506 }, 00:19:35.506 { 00:19:35.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.506 "dma_device_type": 2 00:19:35.506 } 00:19:35.506 ], 00:19:35.506 "driver_specific": {} 00:19:35.506 } 00:19:35.506 ] 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.506 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.765 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:35.765 "name": "Existed_Raid", 00:19:35.765 "uuid": "867aa62b-5a44-408b-8088-73e0b5e99ec0", 00:19:35.765 "strip_size_kb": 64, 00:19:35.765 "state": "online", 00:19:35.765 "raid_level": "concat", 00:19:35.765 "superblock": true, 00:19:35.765 "num_base_bdevs": 3, 00:19:35.765 "num_base_bdevs_discovered": 3, 00:19:35.765 "num_base_bdevs_operational": 3, 00:19:35.765 "base_bdevs_list": [ 00:19:35.765 { 00:19:35.765 "name": "BaseBdev1", 00:19:35.765 "uuid": "fcc15721-fe8c-46b7-8d07-469956d7339f", 00:19:35.765 "is_configured": true, 00:19:35.765 "data_offset": 2048, 00:19:35.765 "data_size": 63488 00:19:35.765 }, 00:19:35.765 { 00:19:35.765 "name": "BaseBdev2", 00:19:35.765 "uuid": "4ba10be9-0b08-4ad8-983e-3b25c892ed6a", 00:19:35.765 "is_configured": true, 00:19:35.765 "data_offset": 2048, 00:19:35.765 "data_size": 63488 00:19:35.765 }, 00:19:35.765 { 00:19:35.765 "name": "BaseBdev3", 00:19:35.765 "uuid": "8f3154ba-31f0-4132-bcbf-65c3129ad0d7", 00:19:35.765 "is_configured": true, 00:19:35.765 "data_offset": 2048, 00:19:35.765 "data_size": 63488 00:19:35.765 } 00:19:35.765 ] 00:19:35.765 }' 00:19:35.765 11:31:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:35.765 11:31:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.332 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:36.332 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:36.332 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:36.332 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:36.332 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:36.332 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 
00:19:36.332 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:36.332 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:36.591 [2024-07-13 11:31:11.235464] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:36.591 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:36.591 "name": "Existed_Raid", 00:19:36.591 "aliases": [ 00:19:36.591 "867aa62b-5a44-408b-8088-73e0b5e99ec0" 00:19:36.591 ], 00:19:36.591 "product_name": "Raid Volume", 00:19:36.591 "block_size": 512, 00:19:36.591 "num_blocks": 190464, 00:19:36.591 "uuid": "867aa62b-5a44-408b-8088-73e0b5e99ec0", 00:19:36.591 "assigned_rate_limits": { 00:19:36.591 "rw_ios_per_sec": 0, 00:19:36.591 "rw_mbytes_per_sec": 0, 00:19:36.591 "r_mbytes_per_sec": 0, 00:19:36.591 "w_mbytes_per_sec": 0 00:19:36.591 }, 00:19:36.591 "claimed": false, 00:19:36.591 "zoned": false, 00:19:36.591 "supported_io_types": { 00:19:36.591 "read": true, 00:19:36.591 "write": true, 00:19:36.591 "unmap": true, 00:19:36.591 "flush": true, 00:19:36.591 "reset": true, 00:19:36.591 "nvme_admin": false, 00:19:36.591 "nvme_io": false, 00:19:36.591 "nvme_io_md": false, 00:19:36.591 "write_zeroes": true, 00:19:36.591 "zcopy": false, 00:19:36.591 "get_zone_info": false, 00:19:36.591 "zone_management": false, 00:19:36.591 "zone_append": false, 00:19:36.591 "compare": false, 00:19:36.591 "compare_and_write": false, 00:19:36.591 "abort": false, 00:19:36.591 "seek_hole": false, 00:19:36.591 "seek_data": false, 00:19:36.591 "copy": false, 00:19:36.591 "nvme_iov_md": false 00:19:36.591 }, 00:19:36.591 "memory_domains": [ 00:19:36.591 { 00:19:36.591 "dma_device_id": "system", 00:19:36.591 "dma_device_type": 1 00:19:36.591 }, 00:19:36.591 { 00:19:36.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.591 "dma_device_type": 2 00:19:36.591 }, 00:19:36.591 { 00:19:36.591 "dma_device_id": "system", 00:19:36.591 "dma_device_type": 1 00:19:36.591 }, 00:19:36.591 { 00:19:36.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.591 "dma_device_type": 2 00:19:36.591 }, 00:19:36.591 { 00:19:36.591 "dma_device_id": "system", 00:19:36.591 "dma_device_type": 1 00:19:36.591 }, 00:19:36.591 { 00:19:36.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.591 "dma_device_type": 2 00:19:36.591 } 00:19:36.591 ], 00:19:36.591 "driver_specific": { 00:19:36.591 "raid": { 00:19:36.591 "uuid": "867aa62b-5a44-408b-8088-73e0b5e99ec0", 00:19:36.591 "strip_size_kb": 64, 00:19:36.591 "state": "online", 00:19:36.591 "raid_level": "concat", 00:19:36.591 "superblock": true, 00:19:36.591 "num_base_bdevs": 3, 00:19:36.591 "num_base_bdevs_discovered": 3, 00:19:36.591 "num_base_bdevs_operational": 3, 00:19:36.591 "base_bdevs_list": [ 00:19:36.591 { 00:19:36.591 "name": "BaseBdev1", 00:19:36.591 "uuid": "fcc15721-fe8c-46b7-8d07-469956d7339f", 00:19:36.591 "is_configured": true, 00:19:36.591 "data_offset": 2048, 00:19:36.591 "data_size": 63488 00:19:36.591 }, 00:19:36.591 { 00:19:36.591 "name": "BaseBdev2", 00:19:36.591 "uuid": "4ba10be9-0b08-4ad8-983e-3b25c892ed6a", 00:19:36.591 "is_configured": true, 00:19:36.591 "data_offset": 2048, 00:19:36.591 "data_size": 63488 00:19:36.591 }, 00:19:36.591 { 00:19:36.591 "name": "BaseBdev3", 00:19:36.591 "uuid": "8f3154ba-31f0-4132-bcbf-65c3129ad0d7", 00:19:36.591 "is_configured": true, 00:19:36.591 "data_offset": 2048, 
00:19:36.591 "data_size": 63488 00:19:36.591 } 00:19:36.591 ] 00:19:36.591 } 00:19:36.591 } 00:19:36.591 }' 00:19:36.591 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:36.591 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:36.591 BaseBdev2 00:19:36.591 BaseBdev3' 00:19:36.591 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:36.591 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:36.591 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:36.850 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:36.850 "name": "BaseBdev1", 00:19:36.850 "aliases": [ 00:19:36.850 "fcc15721-fe8c-46b7-8d07-469956d7339f" 00:19:36.850 ], 00:19:36.850 "product_name": "Malloc disk", 00:19:36.850 "block_size": 512, 00:19:36.850 "num_blocks": 65536, 00:19:36.850 "uuid": "fcc15721-fe8c-46b7-8d07-469956d7339f", 00:19:36.850 "assigned_rate_limits": { 00:19:36.850 "rw_ios_per_sec": 0, 00:19:36.850 "rw_mbytes_per_sec": 0, 00:19:36.850 "r_mbytes_per_sec": 0, 00:19:36.850 "w_mbytes_per_sec": 0 00:19:36.850 }, 00:19:36.850 "claimed": true, 00:19:36.850 "claim_type": "exclusive_write", 00:19:36.850 "zoned": false, 00:19:36.850 "supported_io_types": { 00:19:36.850 "read": true, 00:19:36.850 "write": true, 00:19:36.850 "unmap": true, 00:19:36.850 "flush": true, 00:19:36.850 "reset": true, 00:19:36.850 "nvme_admin": false, 00:19:36.850 "nvme_io": false, 00:19:36.850 "nvme_io_md": false, 00:19:36.850 "write_zeroes": true, 00:19:36.850 "zcopy": true, 00:19:36.850 "get_zone_info": false, 00:19:36.850 "zone_management": false, 00:19:36.850 "zone_append": false, 00:19:36.850 "compare": false, 00:19:36.850 "compare_and_write": false, 00:19:36.850 "abort": true, 00:19:36.850 "seek_hole": false, 00:19:36.850 "seek_data": false, 00:19:36.850 "copy": true, 00:19:36.850 "nvme_iov_md": false 00:19:36.850 }, 00:19:36.850 "memory_domains": [ 00:19:36.850 { 00:19:36.850 "dma_device_id": "system", 00:19:36.850 "dma_device_type": 1 00:19:36.850 }, 00:19:36.850 { 00:19:36.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.850 "dma_device_type": 2 00:19:36.850 } 00:19:36.850 ], 00:19:36.850 "driver_specific": {} 00:19:36.850 }' 00:19:36.850 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:36.850 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.109 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.109 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.109 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.109 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.109 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.109 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.109 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.367 
11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.367 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.367 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.367 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.367 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:37.367 11:31:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:37.625 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:37.625 "name": "BaseBdev2", 00:19:37.625 "aliases": [ 00:19:37.625 "4ba10be9-0b08-4ad8-983e-3b25c892ed6a" 00:19:37.625 ], 00:19:37.625 "product_name": "Malloc disk", 00:19:37.625 "block_size": 512, 00:19:37.625 "num_blocks": 65536, 00:19:37.625 "uuid": "4ba10be9-0b08-4ad8-983e-3b25c892ed6a", 00:19:37.625 "assigned_rate_limits": { 00:19:37.625 "rw_ios_per_sec": 0, 00:19:37.625 "rw_mbytes_per_sec": 0, 00:19:37.625 "r_mbytes_per_sec": 0, 00:19:37.625 "w_mbytes_per_sec": 0 00:19:37.625 }, 00:19:37.625 "claimed": true, 00:19:37.625 "claim_type": "exclusive_write", 00:19:37.625 "zoned": false, 00:19:37.625 "supported_io_types": { 00:19:37.625 "read": true, 00:19:37.625 "write": true, 00:19:37.625 "unmap": true, 00:19:37.625 "flush": true, 00:19:37.625 "reset": true, 00:19:37.625 "nvme_admin": false, 00:19:37.625 "nvme_io": false, 00:19:37.625 "nvme_io_md": false, 00:19:37.625 "write_zeroes": true, 00:19:37.625 "zcopy": true, 00:19:37.625 "get_zone_info": false, 00:19:37.625 "zone_management": false, 00:19:37.625 "zone_append": false, 00:19:37.625 "compare": false, 00:19:37.625 "compare_and_write": false, 00:19:37.625 "abort": true, 00:19:37.625 "seek_hole": false, 00:19:37.625 "seek_data": false, 00:19:37.625 "copy": true, 00:19:37.625 "nvme_iov_md": false 00:19:37.625 }, 00:19:37.625 "memory_domains": [ 00:19:37.625 { 00:19:37.625 "dma_device_id": "system", 00:19:37.625 "dma_device_type": 1 00:19:37.625 }, 00:19:37.625 { 00:19:37.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.625 "dma_device_type": 2 00:19:37.625 } 00:19:37.625 ], 00:19:37.625 "driver_specific": {} 00:19:37.625 }' 00:19:37.625 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.625 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:37.625 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:37.625 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.625 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:37.883 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:38.141 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:38.141 "name": "BaseBdev3", 00:19:38.141 "aliases": [ 00:19:38.141 "8f3154ba-31f0-4132-bcbf-65c3129ad0d7" 00:19:38.141 ], 00:19:38.141 "product_name": "Malloc disk", 00:19:38.141 "block_size": 512, 00:19:38.141 "num_blocks": 65536, 00:19:38.141 "uuid": "8f3154ba-31f0-4132-bcbf-65c3129ad0d7", 00:19:38.141 "assigned_rate_limits": { 00:19:38.141 "rw_ios_per_sec": 0, 00:19:38.141 "rw_mbytes_per_sec": 0, 00:19:38.142 "r_mbytes_per_sec": 0, 00:19:38.142 "w_mbytes_per_sec": 0 00:19:38.142 }, 00:19:38.142 "claimed": true, 00:19:38.142 "claim_type": "exclusive_write", 00:19:38.142 "zoned": false, 00:19:38.142 "supported_io_types": { 00:19:38.142 "read": true, 00:19:38.142 "write": true, 00:19:38.142 "unmap": true, 00:19:38.142 "flush": true, 00:19:38.142 "reset": true, 00:19:38.142 "nvme_admin": false, 00:19:38.142 "nvme_io": false, 00:19:38.142 "nvme_io_md": false, 00:19:38.142 "write_zeroes": true, 00:19:38.142 "zcopy": true, 00:19:38.142 "get_zone_info": false, 00:19:38.142 "zone_management": false, 00:19:38.142 "zone_append": false, 00:19:38.142 "compare": false, 00:19:38.142 "compare_and_write": false, 00:19:38.142 "abort": true, 00:19:38.142 "seek_hole": false, 00:19:38.142 "seek_data": false, 00:19:38.142 "copy": true, 00:19:38.142 "nvme_iov_md": false 00:19:38.142 }, 00:19:38.142 "memory_domains": [ 00:19:38.142 { 00:19:38.142 "dma_device_id": "system", 00:19:38.142 "dma_device_type": 1 00:19:38.142 }, 00:19:38.142 { 00:19:38.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:38.142 "dma_device_type": 2 00:19:38.142 } 00:19:38.142 ], 00:19:38.142 "driver_specific": {} 00:19:38.142 }' 00:19:38.142 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.399 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:38.399 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:38.399 11:31:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.399 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:38.399 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:38.399 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.399 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:38.657 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:38.657 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.657 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:38.657 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
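The per-base-bdev checks traced above all follow one pattern: fetch the bdev over the JSON-RPC socket, then assert a handful of jq-extracted fields (block_size 512, no metadata, no DIF). A minimal standalone sketch of that loop, using only the rpc.py path, socket name and jq filters visible in the trace; the shell variable names are mine, and the [[ ]] comparisons simply exit non-zero on a mismatch rather than reproducing the harness's error handling:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  for name in BaseBdev1 BaseBdev2 BaseBdev3; do
      # bdev_get_bdevs -b returns a one-element JSON array; unwrap it with jq '.[]'
      info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size    <<< "$info") == 512  ]]   # 512-byte blocks
      [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata
      [[ $(jq .md_interleave <<< "$info") == null ]]   # no interleaved metadata
      [[ $(jq .dif_type      <<< "$info") == null ]]   # no DIF protection
  done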
00:19:38.657 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:38.915 [2024-07-13 11:31:13.503707] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:38.915 [2024-07-13 11:31:13.503732] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.915 [2024-07-13 11:31:13.503790] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.915 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.173 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.173 "name": "Existed_Raid", 00:19:39.173 "uuid": "867aa62b-5a44-408b-8088-73e0b5e99ec0", 00:19:39.173 "strip_size_kb": 64, 00:19:39.173 "state": "offline", 00:19:39.173 "raid_level": "concat", 00:19:39.173 "superblock": true, 00:19:39.173 "num_base_bdevs": 3, 00:19:39.173 "num_base_bdevs_discovered": 2, 00:19:39.173 "num_base_bdevs_operational": 2, 00:19:39.173 "base_bdevs_list": [ 00:19:39.173 { 00:19:39.173 "name": null, 00:19:39.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.173 "is_configured": false, 00:19:39.173 "data_offset": 2048, 00:19:39.173 "data_size": 63488 00:19:39.173 }, 00:19:39.173 { 00:19:39.173 "name": "BaseBdev2", 00:19:39.173 "uuid": "4ba10be9-0b08-4ad8-983e-3b25c892ed6a", 00:19:39.173 "is_configured": true, 00:19:39.173 "data_offset": 2048, 00:19:39.173 "data_size": 63488 00:19:39.173 }, 00:19:39.173 { 
00:19:39.173 "name": "BaseBdev3", 00:19:39.173 "uuid": "8f3154ba-31f0-4132-bcbf-65c3129ad0d7", 00:19:39.173 "is_configured": true, 00:19:39.173 "data_offset": 2048, 00:19:39.173 "data_size": 63488 00:19:39.173 } 00:19:39.173 ] 00:19:39.173 }' 00:19:39.173 11:31:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.173 11:31:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:40.128 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:40.128 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:40.128 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.128 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:40.128 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:40.128 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:40.128 11:31:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:40.396 [2024-07-13 11:31:15.038618] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:40.396 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:40.396 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:40.396 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.396 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:40.654 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:40.654 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:40.654 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:40.913 [2024-07-13 11:31:15.561744] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:40.913 [2024-07-13 11:31:15.561801] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:19:40.913 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:40.913 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:40.913 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.913 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:41.171 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:41.171 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:41.171 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:41.171 
11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:41.171 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:41.171 11:31:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:41.430 BaseBdev2 00:19:41.430 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:41.430 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:41.430 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:41.430 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:41.430 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:41.430 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:41.430 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:41.688 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:41.947 [ 00:19:41.947 { 00:19:41.947 "name": "BaseBdev2", 00:19:41.947 "aliases": [ 00:19:41.947 "66df3887-0482-49a3-82fe-ae20a58ddbf1" 00:19:41.947 ], 00:19:41.947 "product_name": "Malloc disk", 00:19:41.947 "block_size": 512, 00:19:41.947 "num_blocks": 65536, 00:19:41.947 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:41.947 "assigned_rate_limits": { 00:19:41.947 "rw_ios_per_sec": 0, 00:19:41.947 "rw_mbytes_per_sec": 0, 00:19:41.947 "r_mbytes_per_sec": 0, 00:19:41.947 "w_mbytes_per_sec": 0 00:19:41.947 }, 00:19:41.947 "claimed": false, 00:19:41.947 "zoned": false, 00:19:41.947 "supported_io_types": { 00:19:41.947 "read": true, 00:19:41.947 "write": true, 00:19:41.947 "unmap": true, 00:19:41.947 "flush": true, 00:19:41.947 "reset": true, 00:19:41.947 "nvme_admin": false, 00:19:41.947 "nvme_io": false, 00:19:41.947 "nvme_io_md": false, 00:19:41.947 "write_zeroes": true, 00:19:41.947 "zcopy": true, 00:19:41.947 "get_zone_info": false, 00:19:41.947 "zone_management": false, 00:19:41.947 "zone_append": false, 00:19:41.947 "compare": false, 00:19:41.947 "compare_and_write": false, 00:19:41.947 "abort": true, 00:19:41.947 "seek_hole": false, 00:19:41.947 "seek_data": false, 00:19:41.947 "copy": true, 00:19:41.947 "nvme_iov_md": false 00:19:41.947 }, 00:19:41.947 "memory_domains": [ 00:19:41.947 { 00:19:41.947 "dma_device_id": "system", 00:19:41.947 "dma_device_type": 1 00:19:41.947 }, 00:19:41.947 { 00:19:41.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.947 "dma_device_type": 2 00:19:41.947 } 00:19:41.947 ], 00:19:41.947 "driver_specific": {} 00:19:41.947 } 00:19:41.947 ] 00:19:41.947 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:41.947 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:41.947 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:41.947 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:42.205 BaseBdev3 00:19:42.205 11:31:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:42.205 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:19:42.205 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:42.205 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:42.205 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:42.205 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:42.205 11:31:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:42.464 11:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:42.729 [ 00:19:42.729 { 00:19:42.729 "name": "BaseBdev3", 00:19:42.729 "aliases": [ 00:19:42.729 "2735a186-49a0-443e-b44c-452fb312efea" 00:19:42.729 ], 00:19:42.729 "product_name": "Malloc disk", 00:19:42.729 "block_size": 512, 00:19:42.729 "num_blocks": 65536, 00:19:42.729 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:42.729 "assigned_rate_limits": { 00:19:42.729 "rw_ios_per_sec": 0, 00:19:42.729 "rw_mbytes_per_sec": 0, 00:19:42.729 "r_mbytes_per_sec": 0, 00:19:42.729 "w_mbytes_per_sec": 0 00:19:42.729 }, 00:19:42.729 "claimed": false, 00:19:42.729 "zoned": false, 00:19:42.729 "supported_io_types": { 00:19:42.729 "read": true, 00:19:42.729 "write": true, 00:19:42.729 "unmap": true, 00:19:42.729 "flush": true, 00:19:42.729 "reset": true, 00:19:42.729 "nvme_admin": false, 00:19:42.729 "nvme_io": false, 00:19:42.729 "nvme_io_md": false, 00:19:42.729 "write_zeroes": true, 00:19:42.729 "zcopy": true, 00:19:42.729 "get_zone_info": false, 00:19:42.729 "zone_management": false, 00:19:42.729 "zone_append": false, 00:19:42.729 "compare": false, 00:19:42.729 "compare_and_write": false, 00:19:42.729 "abort": true, 00:19:42.729 "seek_hole": false, 00:19:42.729 "seek_data": false, 00:19:42.729 "copy": true, 00:19:42.729 "nvme_iov_md": false 00:19:42.729 }, 00:19:42.729 "memory_domains": [ 00:19:42.729 { 00:19:42.729 "dma_device_id": "system", 00:19:42.729 "dma_device_type": 1 00:19:42.729 }, 00:19:42.729 { 00:19:42.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.729 "dma_device_type": 2 00:19:42.729 } 00:19:42.729 ], 00:19:42.729 "driver_specific": {} 00:19:42.729 } 00:19:42.729 ] 00:19:42.729 11:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:42.729 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:42.729 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:42.729 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:42.729 [2024-07-13 11:31:17.463943] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:42.729 
[2024-07-13 11:31:17.463999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:42.729 [2024-07-13 11:31:17.464037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.729 [2024-07-13 11:31:17.465600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.988 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.988 "name": "Existed_Raid", 00:19:42.988 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:42.988 "strip_size_kb": 64, 00:19:42.988 "state": "configuring", 00:19:42.988 "raid_level": "concat", 00:19:42.989 "superblock": true, 00:19:42.989 "num_base_bdevs": 3, 00:19:42.989 "num_base_bdevs_discovered": 2, 00:19:42.989 "num_base_bdevs_operational": 3, 00:19:42.989 "base_bdevs_list": [ 00:19:42.989 { 00:19:42.989 "name": "BaseBdev1", 00:19:42.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.989 "is_configured": false, 00:19:42.989 "data_offset": 0, 00:19:42.989 "data_size": 0 00:19:42.989 }, 00:19:42.989 { 00:19:42.989 "name": "BaseBdev2", 00:19:42.989 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:42.989 "is_configured": true, 00:19:42.989 "data_offset": 2048, 00:19:42.989 "data_size": 63488 00:19:42.989 }, 00:19:42.989 { 00:19:42.989 "name": "BaseBdev3", 00:19:42.989 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:42.989 "is_configured": true, 00:19:42.989 "data_offset": 2048, 00:19:42.989 "data_size": 63488 00:19:42.989 } 00:19:42.989 ] 00:19:42.989 }' 00:19:42.989 11:31:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.989 11:31:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:43.924 [2024-07-13 11:31:18.500059] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.924 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.182 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.182 "name": "Existed_Raid", 00:19:44.182 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:44.182 "strip_size_kb": 64, 00:19:44.182 "state": "configuring", 00:19:44.182 "raid_level": "concat", 00:19:44.182 "superblock": true, 00:19:44.182 "num_base_bdevs": 3, 00:19:44.182 "num_base_bdevs_discovered": 1, 00:19:44.182 "num_base_bdevs_operational": 3, 00:19:44.182 "base_bdevs_list": [ 00:19:44.182 { 00:19:44.182 "name": "BaseBdev1", 00:19:44.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.182 "is_configured": false, 00:19:44.182 "data_offset": 0, 00:19:44.182 "data_size": 0 00:19:44.182 }, 00:19:44.182 { 00:19:44.182 "name": null, 00:19:44.182 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:44.182 "is_configured": false, 00:19:44.182 "data_offset": 2048, 00:19:44.182 "data_size": 63488 00:19:44.182 }, 00:19:44.182 { 00:19:44.182 "name": "BaseBdev3", 00:19:44.182 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:44.182 "is_configured": true, 00:19:44.182 "data_offset": 2048, 00:19:44.182 "data_size": 63488 00:19:44.182 } 00:19:44.182 ] 00:19:44.182 }' 00:19:44.182 11:31:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.182 11:31:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.116 11:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.116 11:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:45.116 11:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:45.116 11:31:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:45.375 [2024-07-13 11:31:20.041702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:45.375 BaseBdev1 00:19:45.375 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:45.375 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:45.375 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:45.375 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:45.375 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:45.375 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:45.375 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:45.633 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:45.891 [ 00:19:45.891 { 00:19:45.891 "name": "BaseBdev1", 00:19:45.891 "aliases": [ 00:19:45.891 "7f7487ad-5474-40a9-bba4-e071264c83f6" 00:19:45.891 ], 00:19:45.891 "product_name": "Malloc disk", 00:19:45.891 "block_size": 512, 00:19:45.891 "num_blocks": 65536, 00:19:45.891 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:45.891 "assigned_rate_limits": { 00:19:45.891 "rw_ios_per_sec": 0, 00:19:45.891 "rw_mbytes_per_sec": 0, 00:19:45.891 "r_mbytes_per_sec": 0, 00:19:45.891 "w_mbytes_per_sec": 0 00:19:45.891 }, 00:19:45.892 "claimed": true, 00:19:45.892 "claim_type": "exclusive_write", 00:19:45.892 "zoned": false, 00:19:45.892 "supported_io_types": { 00:19:45.892 "read": true, 00:19:45.892 "write": true, 00:19:45.892 "unmap": true, 00:19:45.892 "flush": true, 00:19:45.892 "reset": true, 00:19:45.892 "nvme_admin": false, 00:19:45.892 "nvme_io": false, 00:19:45.892 "nvme_io_md": false, 00:19:45.892 "write_zeroes": true, 00:19:45.892 "zcopy": true, 00:19:45.892 "get_zone_info": false, 00:19:45.892 "zone_management": false, 00:19:45.892 "zone_append": false, 00:19:45.892 "compare": false, 00:19:45.892 "compare_and_write": false, 00:19:45.892 "abort": true, 00:19:45.892 "seek_hole": false, 00:19:45.892 "seek_data": false, 00:19:45.892 "copy": true, 00:19:45.892 "nvme_iov_md": false 00:19:45.892 }, 00:19:45.892 "memory_domains": [ 00:19:45.892 { 00:19:45.892 "dma_device_id": "system", 00:19:45.892 "dma_device_type": 1 00:19:45.892 }, 00:19:45.892 { 00:19:45.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:45.892 "dma_device_type": 2 00:19:45.892 } 00:19:45.892 ], 00:19:45.892 "driver_specific": {} 00:19:45.892 } 00:19:45.892 ] 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.892 11:31:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.892 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.150 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.150 "name": "Existed_Raid", 00:19:46.150 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:46.150 "strip_size_kb": 64, 00:19:46.150 "state": "configuring", 00:19:46.150 "raid_level": "concat", 00:19:46.150 "superblock": true, 00:19:46.150 "num_base_bdevs": 3, 00:19:46.150 "num_base_bdevs_discovered": 2, 00:19:46.150 "num_base_bdevs_operational": 3, 00:19:46.150 "base_bdevs_list": [ 00:19:46.150 { 00:19:46.150 "name": "BaseBdev1", 00:19:46.150 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:46.150 "is_configured": true, 00:19:46.150 "data_offset": 2048, 00:19:46.150 "data_size": 63488 00:19:46.150 }, 00:19:46.150 { 00:19:46.150 "name": null, 00:19:46.150 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:46.150 "is_configured": false, 00:19:46.150 "data_offset": 2048, 00:19:46.150 "data_size": 63488 00:19:46.150 }, 00:19:46.150 { 00:19:46.150 "name": "BaseBdev3", 00:19:46.150 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:46.150 "is_configured": true, 00:19:46.150 "data_offset": 2048, 00:19:46.150 "data_size": 63488 00:19:46.150 } 00:19:46.150 ] 00:19:46.150 }' 00:19:46.150 11:31:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.150 11:31:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.715 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.715 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:46.974 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:46.974 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:47.232 [2024-07-13 11:31:21.738028] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:47.232 11:31:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.232 "name": "Existed_Raid", 00:19:47.232 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:47.232 "strip_size_kb": 64, 00:19:47.232 "state": "configuring", 00:19:47.232 "raid_level": "concat", 00:19:47.232 "superblock": true, 00:19:47.232 "num_base_bdevs": 3, 00:19:47.232 "num_base_bdevs_discovered": 1, 00:19:47.232 "num_base_bdevs_operational": 3, 00:19:47.232 "base_bdevs_list": [ 00:19:47.232 { 00:19:47.232 "name": "BaseBdev1", 00:19:47.232 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:47.232 "is_configured": true, 00:19:47.232 "data_offset": 2048, 00:19:47.232 "data_size": 63488 00:19:47.232 }, 00:19:47.232 { 00:19:47.232 "name": null, 00:19:47.232 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:47.232 "is_configured": false, 00:19:47.232 "data_offset": 2048, 00:19:47.232 "data_size": 63488 00:19:47.232 }, 00:19:47.232 { 00:19:47.232 "name": null, 00:19:47.232 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:47.232 "is_configured": false, 00:19:47.232 "data_offset": 2048, 00:19:47.232 "data_size": 63488 00:19:47.232 } 00:19:47.232 ] 00:19:47.232 }' 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.232 11:31:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.165 11:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.165 11:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:48.165 11:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:48.165 11:31:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:48.423 [2024-07-13 11:31:23.054247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.423 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:48.680 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:48.680 "name": "Existed_Raid", 00:19:48.680 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:48.680 "strip_size_kb": 64, 00:19:48.680 "state": "configuring", 00:19:48.680 "raid_level": "concat", 00:19:48.680 "superblock": true, 00:19:48.680 "num_base_bdevs": 3, 00:19:48.680 "num_base_bdevs_discovered": 2, 00:19:48.680 "num_base_bdevs_operational": 3, 00:19:48.680 "base_bdevs_list": [ 00:19:48.680 { 00:19:48.680 "name": "BaseBdev1", 00:19:48.680 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:48.680 "is_configured": true, 00:19:48.680 "data_offset": 2048, 00:19:48.680 "data_size": 63488 00:19:48.680 }, 00:19:48.680 { 00:19:48.680 "name": null, 00:19:48.680 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:48.680 "is_configured": false, 00:19:48.680 "data_offset": 2048, 00:19:48.680 "data_size": 63488 00:19:48.680 }, 00:19:48.680 { 00:19:48.680 "name": "BaseBdev3", 00:19:48.680 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:48.680 "is_configured": true, 00:19:48.680 "data_offset": 2048, 00:19:48.680 "data_size": 63488 00:19:48.680 } 00:19:48.680 ] 00:19:48.680 }' 00:19:48.680 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:48.680 11:31:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.245 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.245 11:31:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:49.503 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:49.503 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:49.761 [2024-07-13 11:31:24.382534] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:49.761 11:31:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.761 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:50.019 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:50.019 "name": "Existed_Raid", 00:19:50.019 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:50.019 "strip_size_kb": 64, 00:19:50.019 "state": "configuring", 00:19:50.019 "raid_level": "concat", 00:19:50.019 "superblock": true, 00:19:50.019 "num_base_bdevs": 3, 00:19:50.019 "num_base_bdevs_discovered": 1, 00:19:50.019 "num_base_bdevs_operational": 3, 00:19:50.019 "base_bdevs_list": [ 00:19:50.019 { 00:19:50.019 "name": null, 00:19:50.019 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:50.019 "is_configured": false, 00:19:50.019 "data_offset": 2048, 00:19:50.019 "data_size": 63488 00:19:50.019 }, 00:19:50.019 { 00:19:50.019 "name": null, 00:19:50.019 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:50.019 "is_configured": false, 00:19:50.019 "data_offset": 2048, 00:19:50.019 "data_size": 63488 00:19:50.019 }, 00:19:50.019 { 00:19:50.019 "name": "BaseBdev3", 00:19:50.019 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:50.019 "is_configured": true, 00:19:50.019 "data_offset": 2048, 00:19:50.019 "data_size": 63488 00:19:50.019 } 00:19:50.019 ] 00:19:50.019 }' 00:19:50.019 11:31:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:50.019 11:31:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.584 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.584 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:50.843 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:50.843 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
00:19:51.101 [2024-07-13 11:31:25.802330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:51.101 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:51.102 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.102 11:31:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:51.360 11:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:51.360 "name": "Existed_Raid", 00:19:51.360 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:51.360 "strip_size_kb": 64, 00:19:51.360 "state": "configuring", 00:19:51.360 "raid_level": "concat", 00:19:51.360 "superblock": true, 00:19:51.360 "num_base_bdevs": 3, 00:19:51.360 "num_base_bdevs_discovered": 2, 00:19:51.360 "num_base_bdevs_operational": 3, 00:19:51.360 "base_bdevs_list": [ 00:19:51.360 { 00:19:51.360 "name": null, 00:19:51.360 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:51.360 "is_configured": false, 00:19:51.360 "data_offset": 2048, 00:19:51.360 "data_size": 63488 00:19:51.360 }, 00:19:51.360 { 00:19:51.360 "name": "BaseBdev2", 00:19:51.360 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:51.360 "is_configured": true, 00:19:51.360 "data_offset": 2048, 00:19:51.360 "data_size": 63488 00:19:51.360 }, 00:19:51.360 { 00:19:51.360 "name": "BaseBdev3", 00:19:51.360 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:51.360 "is_configured": true, 00:19:51.360 "data_offset": 2048, 00:19:51.360 "data_size": 63488 00:19:51.360 } 00:19:51.360 ] 00:19:51.360 }' 00:19:51.360 11:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:51.360 11:31:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.927 11:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.927 11:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:52.185 11:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:52.185 11:31:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.185 11:31:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:52.444 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7f7487ad-5474-40a9-bba4-e071264c83f6 00:19:52.703 [2024-07-13 11:31:27.370627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:52.703 [2024-07-13 11:31:27.370843] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:19:52.703 [2024-07-13 11:31:27.370869] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:52.703 [2024-07-13 11:31:27.370975] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:19:52.703 NewBaseBdev 00:19:52.703 [2024-07-13 11:31:27.371273] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:19:52.703 [2024-07-13 11:31:27.371287] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:19:52.703 [2024-07-13 11:31:27.371409] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.703 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:52.703 11:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:52.703 11:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:52.703 11:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:52.703 11:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:52.703 11:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:52.703 11:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:52.962 11:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:53.221 [ 00:19:53.221 { 00:19:53.221 "name": "NewBaseBdev", 00:19:53.221 "aliases": [ 00:19:53.221 "7f7487ad-5474-40a9-bba4-e071264c83f6" 00:19:53.221 ], 00:19:53.221 "product_name": "Malloc disk", 00:19:53.221 "block_size": 512, 00:19:53.221 "num_blocks": 65536, 00:19:53.221 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:53.221 "assigned_rate_limits": { 00:19:53.221 "rw_ios_per_sec": 0, 00:19:53.221 "rw_mbytes_per_sec": 0, 00:19:53.221 "r_mbytes_per_sec": 0, 00:19:53.221 "w_mbytes_per_sec": 0 00:19:53.221 }, 00:19:53.221 "claimed": true, 00:19:53.221 "claim_type": "exclusive_write", 00:19:53.221 "zoned": false, 00:19:53.221 "supported_io_types": { 00:19:53.221 "read": true, 00:19:53.221 "write": true, 00:19:53.221 "unmap": true, 00:19:53.221 "flush": true, 00:19:53.221 "reset": true, 00:19:53.221 "nvme_admin": false, 00:19:53.221 "nvme_io": false, 00:19:53.221 "nvme_io_md": false, 00:19:53.221 "write_zeroes": true, 00:19:53.221 "zcopy": true, 00:19:53.221 "get_zone_info": false, 00:19:53.221 "zone_management": 
false, 00:19:53.221 "zone_append": false, 00:19:53.221 "compare": false, 00:19:53.221 "compare_and_write": false, 00:19:53.221 "abort": true, 00:19:53.221 "seek_hole": false, 00:19:53.221 "seek_data": false, 00:19:53.221 "copy": true, 00:19:53.221 "nvme_iov_md": false 00:19:53.221 }, 00:19:53.221 "memory_domains": [ 00:19:53.221 { 00:19:53.221 "dma_device_id": "system", 00:19:53.221 "dma_device_type": 1 00:19:53.221 }, 00:19:53.221 { 00:19:53.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:53.221 "dma_device_type": 2 00:19:53.221 } 00:19:53.221 ], 00:19:53.221 "driver_specific": {} 00:19:53.221 } 00:19:53.221 ] 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.221 11:31:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.480 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.480 "name": "Existed_Raid", 00:19:53.480 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:53.480 "strip_size_kb": 64, 00:19:53.480 "state": "online", 00:19:53.480 "raid_level": "concat", 00:19:53.480 "superblock": true, 00:19:53.480 "num_base_bdevs": 3, 00:19:53.480 "num_base_bdevs_discovered": 3, 00:19:53.480 "num_base_bdevs_operational": 3, 00:19:53.480 "base_bdevs_list": [ 00:19:53.480 { 00:19:53.480 "name": "NewBaseBdev", 00:19:53.480 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:53.480 "is_configured": true, 00:19:53.480 "data_offset": 2048, 00:19:53.480 "data_size": 63488 00:19:53.480 }, 00:19:53.480 { 00:19:53.480 "name": "BaseBdev2", 00:19:53.480 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:53.480 "is_configured": true, 00:19:53.480 "data_offset": 2048, 00:19:53.480 "data_size": 63488 00:19:53.480 }, 00:19:53.480 { 00:19:53.480 "name": "BaseBdev3", 00:19:53.480 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:53.480 "is_configured": true, 00:19:53.480 "data_offset": 2048, 00:19:53.480 "data_size": 63488 00:19:53.480 } 00:19:53.480 ] 00:19:53.480 }' 00:19:53.480 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:19:53.480 11:31:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.047 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:54.047 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:54.047 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:54.047 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:54.047 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:54.047 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:54.047 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:54.047 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:54.306 [2024-07-13 11:31:28.979367] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:54.306 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:54.306 "name": "Existed_Raid", 00:19:54.306 "aliases": [ 00:19:54.306 "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f" 00:19:54.306 ], 00:19:54.306 "product_name": "Raid Volume", 00:19:54.306 "block_size": 512, 00:19:54.306 "num_blocks": 190464, 00:19:54.306 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:54.306 "assigned_rate_limits": { 00:19:54.306 "rw_ios_per_sec": 0, 00:19:54.306 "rw_mbytes_per_sec": 0, 00:19:54.306 "r_mbytes_per_sec": 0, 00:19:54.306 "w_mbytes_per_sec": 0 00:19:54.306 }, 00:19:54.306 "claimed": false, 00:19:54.306 "zoned": false, 00:19:54.306 "supported_io_types": { 00:19:54.306 "read": true, 00:19:54.306 "write": true, 00:19:54.306 "unmap": true, 00:19:54.306 "flush": true, 00:19:54.306 "reset": true, 00:19:54.306 "nvme_admin": false, 00:19:54.306 "nvme_io": false, 00:19:54.306 "nvme_io_md": false, 00:19:54.306 "write_zeroes": true, 00:19:54.306 "zcopy": false, 00:19:54.306 "get_zone_info": false, 00:19:54.306 "zone_management": false, 00:19:54.306 "zone_append": false, 00:19:54.306 "compare": false, 00:19:54.306 "compare_and_write": false, 00:19:54.306 "abort": false, 00:19:54.306 "seek_hole": false, 00:19:54.306 "seek_data": false, 00:19:54.306 "copy": false, 00:19:54.306 "nvme_iov_md": false 00:19:54.306 }, 00:19:54.306 "memory_domains": [ 00:19:54.306 { 00:19:54.306 "dma_device_id": "system", 00:19:54.306 "dma_device_type": 1 00:19:54.306 }, 00:19:54.306 { 00:19:54.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.306 "dma_device_type": 2 00:19:54.306 }, 00:19:54.306 { 00:19:54.306 "dma_device_id": "system", 00:19:54.306 "dma_device_type": 1 00:19:54.306 }, 00:19:54.306 { 00:19:54.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.306 "dma_device_type": 2 00:19:54.306 }, 00:19:54.306 { 00:19:54.306 "dma_device_id": "system", 00:19:54.306 "dma_device_type": 1 00:19:54.306 }, 00:19:54.306 { 00:19:54.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.306 "dma_device_type": 2 00:19:54.306 } 00:19:54.306 ], 00:19:54.306 "driver_specific": { 00:19:54.306 "raid": { 00:19:54.306 "uuid": "f964d2f4-41cb-45aa-8ca1-c07a9c4c129f", 00:19:54.306 "strip_size_kb": 64, 00:19:54.306 "state": "online", 00:19:54.306 "raid_level": "concat", 00:19:54.306 "superblock": true, 
00:19:54.306 "num_base_bdevs": 3, 00:19:54.306 "num_base_bdevs_discovered": 3, 00:19:54.306 "num_base_bdevs_operational": 3, 00:19:54.306 "base_bdevs_list": [ 00:19:54.306 { 00:19:54.306 "name": "NewBaseBdev", 00:19:54.306 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:54.306 "is_configured": true, 00:19:54.306 "data_offset": 2048, 00:19:54.306 "data_size": 63488 00:19:54.306 }, 00:19:54.306 { 00:19:54.306 "name": "BaseBdev2", 00:19:54.306 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:54.306 "is_configured": true, 00:19:54.306 "data_offset": 2048, 00:19:54.306 "data_size": 63488 00:19:54.306 }, 00:19:54.306 { 00:19:54.306 "name": "BaseBdev3", 00:19:54.306 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:54.306 "is_configured": true, 00:19:54.306 "data_offset": 2048, 00:19:54.306 "data_size": 63488 00:19:54.306 } 00:19:54.306 ] 00:19:54.306 } 00:19:54.306 } 00:19:54.306 }' 00:19:54.306 11:31:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:54.306 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:54.306 BaseBdev2 00:19:54.306 BaseBdev3' 00:19:54.306 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:54.306 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:54.306 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:54.564 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:54.565 "name": "NewBaseBdev", 00:19:54.565 "aliases": [ 00:19:54.565 "7f7487ad-5474-40a9-bba4-e071264c83f6" 00:19:54.565 ], 00:19:54.565 "product_name": "Malloc disk", 00:19:54.565 "block_size": 512, 00:19:54.565 "num_blocks": 65536, 00:19:54.565 "uuid": "7f7487ad-5474-40a9-bba4-e071264c83f6", 00:19:54.565 "assigned_rate_limits": { 00:19:54.565 "rw_ios_per_sec": 0, 00:19:54.565 "rw_mbytes_per_sec": 0, 00:19:54.565 "r_mbytes_per_sec": 0, 00:19:54.565 "w_mbytes_per_sec": 0 00:19:54.565 }, 00:19:54.565 "claimed": true, 00:19:54.565 "claim_type": "exclusive_write", 00:19:54.565 "zoned": false, 00:19:54.565 "supported_io_types": { 00:19:54.565 "read": true, 00:19:54.565 "write": true, 00:19:54.565 "unmap": true, 00:19:54.565 "flush": true, 00:19:54.565 "reset": true, 00:19:54.565 "nvme_admin": false, 00:19:54.565 "nvme_io": false, 00:19:54.565 "nvme_io_md": false, 00:19:54.565 "write_zeroes": true, 00:19:54.565 "zcopy": true, 00:19:54.565 "get_zone_info": false, 00:19:54.565 "zone_management": false, 00:19:54.565 "zone_append": false, 00:19:54.565 "compare": false, 00:19:54.565 "compare_and_write": false, 00:19:54.565 "abort": true, 00:19:54.565 "seek_hole": false, 00:19:54.565 "seek_data": false, 00:19:54.565 "copy": true, 00:19:54.565 "nvme_iov_md": false 00:19:54.565 }, 00:19:54.565 "memory_domains": [ 00:19:54.565 { 00:19:54.565 "dma_device_id": "system", 00:19:54.565 "dma_device_type": 1 00:19:54.565 }, 00:19:54.565 { 00:19:54.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:54.565 "dma_device_type": 2 00:19:54.565 } 00:19:54.565 ], 00:19:54.565 "driver_specific": {} 00:19:54.565 }' 00:19:54.565 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.823 11:31:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:54.823 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:54.823 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.823 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:54.823 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:54.823 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:54.823 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.082 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:55.082 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.082 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.082 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:55.082 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.082 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:55.082 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:55.341 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:55.341 "name": "BaseBdev2", 00:19:55.341 "aliases": [ 00:19:55.341 "66df3887-0482-49a3-82fe-ae20a58ddbf1" 00:19:55.341 ], 00:19:55.341 "product_name": "Malloc disk", 00:19:55.341 "block_size": 512, 00:19:55.341 "num_blocks": 65536, 00:19:55.341 "uuid": "66df3887-0482-49a3-82fe-ae20a58ddbf1", 00:19:55.341 "assigned_rate_limits": { 00:19:55.341 "rw_ios_per_sec": 0, 00:19:55.341 "rw_mbytes_per_sec": 0, 00:19:55.341 "r_mbytes_per_sec": 0, 00:19:55.341 "w_mbytes_per_sec": 0 00:19:55.341 }, 00:19:55.341 "claimed": true, 00:19:55.341 "claim_type": "exclusive_write", 00:19:55.341 "zoned": false, 00:19:55.341 "supported_io_types": { 00:19:55.341 "read": true, 00:19:55.341 "write": true, 00:19:55.341 "unmap": true, 00:19:55.341 "flush": true, 00:19:55.341 "reset": true, 00:19:55.341 "nvme_admin": false, 00:19:55.341 "nvme_io": false, 00:19:55.341 "nvme_io_md": false, 00:19:55.341 "write_zeroes": true, 00:19:55.341 "zcopy": true, 00:19:55.341 "get_zone_info": false, 00:19:55.341 "zone_management": false, 00:19:55.341 "zone_append": false, 00:19:55.341 "compare": false, 00:19:55.341 "compare_and_write": false, 00:19:55.341 "abort": true, 00:19:55.341 "seek_hole": false, 00:19:55.341 "seek_data": false, 00:19:55.341 "copy": true, 00:19:55.341 "nvme_iov_md": false 00:19:55.341 }, 00:19:55.341 "memory_domains": [ 00:19:55.341 { 00:19:55.341 "dma_device_id": "system", 00:19:55.341 "dma_device_type": 1 00:19:55.341 }, 00:19:55.341 { 00:19:55.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.341 "dma_device_type": 2 00:19:55.341 } 00:19:55.341 ], 00:19:55.341 "driver_specific": {} 00:19:55.341 }' 00:19:55.341 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.341 11:31:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:55.341 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
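(Annotation, not part of the captured output.) The same four property probes repeat for every name in base_bdev_names; condensed, the loop the log is walking through here does roughly the following, built only from the RPCs and jq filters visible above rather than the verbatim helper:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for name in NewBaseBdev BaseBdev2 BaseBdev3; do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      # Each Malloc-backed base bdev is expected to be plain 512-byte-block storage
      # with no separate metadata and no DIF configured.
      [[ $(jq .block_size    <<<"$info") == 512  ]]
      [[ $(jq .md_size       <<<"$info") == null ]]
      [[ $(jq .md_interleave <<<"$info") == null ]]
      [[ $(jq .dif_type      <<<"$info") == null ]]
  done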
00:19:55.341 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.599 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:55.599 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:55.599 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.599 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:55.599 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:55.599 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.599 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:55.858 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:55.858 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:55.858 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:55.858 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:56.117 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:56.117 "name": "BaseBdev3", 00:19:56.117 "aliases": [ 00:19:56.117 "2735a186-49a0-443e-b44c-452fb312efea" 00:19:56.117 ], 00:19:56.117 "product_name": "Malloc disk", 00:19:56.117 "block_size": 512, 00:19:56.117 "num_blocks": 65536, 00:19:56.117 "uuid": "2735a186-49a0-443e-b44c-452fb312efea", 00:19:56.117 "assigned_rate_limits": { 00:19:56.117 "rw_ios_per_sec": 0, 00:19:56.117 "rw_mbytes_per_sec": 0, 00:19:56.117 "r_mbytes_per_sec": 0, 00:19:56.117 "w_mbytes_per_sec": 0 00:19:56.117 }, 00:19:56.117 "claimed": true, 00:19:56.117 "claim_type": "exclusive_write", 00:19:56.117 "zoned": false, 00:19:56.117 "supported_io_types": { 00:19:56.117 "read": true, 00:19:56.117 "write": true, 00:19:56.117 "unmap": true, 00:19:56.117 "flush": true, 00:19:56.117 "reset": true, 00:19:56.117 "nvme_admin": false, 00:19:56.117 "nvme_io": false, 00:19:56.117 "nvme_io_md": false, 00:19:56.117 "write_zeroes": true, 00:19:56.117 "zcopy": true, 00:19:56.117 "get_zone_info": false, 00:19:56.117 "zone_management": false, 00:19:56.117 "zone_append": false, 00:19:56.117 "compare": false, 00:19:56.117 "compare_and_write": false, 00:19:56.117 "abort": true, 00:19:56.117 "seek_hole": false, 00:19:56.117 "seek_data": false, 00:19:56.117 "copy": true, 00:19:56.117 "nvme_iov_md": false 00:19:56.117 }, 00:19:56.117 "memory_domains": [ 00:19:56.117 { 00:19:56.117 "dma_device_id": "system", 00:19:56.117 "dma_device_type": 1 00:19:56.117 }, 00:19:56.117 { 00:19:56.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.117 "dma_device_type": 2 00:19:56.117 } 00:19:56.117 ], 00:19:56.117 "driver_specific": {} 00:19:56.117 }' 00:19:56.117 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.117 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:56.117 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:56.117 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.117 11:31:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:56.376 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:56.376 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.376 11:31:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:56.376 11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:56.376 11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.376 11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:56.633 11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:56.633 11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:56.633 [2024-07-13 11:31:31.379549] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:56.633 [2024-07-13 11:31:31.379588] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:56.633 [2024-07-13 11:31:31.379655] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.633 [2024-07-13 11:31:31.379707] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.633 [2024-07-13 11:31:31.379718] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 129244 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 129244 ']' 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 129244 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 129244 00:19:56.891 killing process with pid 129244 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 129244' 00:19:56.891 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 129244 00:19:56.892 11:31:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 129244 00:19:56.892 [2024-07-13 11:31:31.413678] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:56.892 [2024-07-13 11:31:31.610689] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:58.268 ************************************ 00:19:58.268 END TEST raid_state_function_test_sb 00:19:58.268 ************************************ 00:19:58.268 11:31:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:58.268 00:19:58.268 real 0m30.341s 00:19:58.268 user 
0m57.246s 00:19:58.268 sys 0m3.219s 00:19:58.268 11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:58.268 11:31:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.268 11:31:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:58.268 11:31:32 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:19:58.268 11:31:32 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:58.268 11:31:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.268 11:31:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:58.268 ************************************ 00:19:58.268 START TEST raid_superblock_test 00:19:58.268 ************************************ 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=130284 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 130284 /var/tmp/spdk-raid.sock 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 130284 ']' 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:58.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
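(Annotation, not part of the captured output.) raid_superblock_test drives a dedicated bdev_svc app rather than a full SPDK target: the app is started with -L bdev_raid, which is what enables the *DEBUG* raid traces seen throughout this log, and the script then blocks until the app answers on its RPC socket. A rough reconstruction of that startup step, with the polling loop written out as an illustrative stand-in for the waitforlisten helper from autotest_common.sh:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!                      # in this run the pid expanded to 130284
  # Stand-in for waitforlisten: keep poking the socket with a known-good RPC until it responds.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done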
00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.268 11:31:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:58.268 [2024-07-13 11:31:32.731331] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:58.268 [2024-07-13 11:31:32.731530] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130284 ] 00:19:58.268 [2024-07-13 11:31:32.890563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.526 [2024-07-13 11:31:33.089684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.526 [2024-07-13 11:31:33.253045] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:59.091 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:59.349 malloc1 00:19:59.349 11:31:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:59.607 [2024-07-13 11:31:34.146066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:59.607 [2024-07-13 11:31:34.146186] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.607 [2024-07-13 11:31:34.146221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:19:59.607 [2024-07-13 11:31:34.146241] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.607 [2024-07-13 11:31:34.148499] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.607 [2024-07-13 11:31:34.148568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:59.607 pt1 
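(Annotation, not part of the captured output.) pt1 is the first of three passthru bdevs the test stacks on top of malloc bdevs before assembling the concat array. Condensed from the RPC calls that appear in the log around this point, the whole build-up looks roughly like the sketch below: 32 MiB malloc bdevs with 512-byte blocks (the 65536-block passthru devices dumped further down), then a concat raid with a 64 KiB strip and an on-disk superblock. This is a sketch of the sequence, not the verbatim test script.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      $rpc bdev_malloc_create 32 512 -b malloc$i                                    # 32 MiB, 512 B blocks
      $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s           # -s: write superblock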
00:19:59.607 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:59.607 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:59.607 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:59.607 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:59.607 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:59.607 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:59.607 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:59.607 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:59.607 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:59.865 malloc2 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:59.865 [2024-07-13 11:31:34.561733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:59.865 [2024-07-13 11:31:34.561848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:59.865 [2024-07-13 11:31:34.561885] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:59.865 [2024-07-13 11:31:34.561905] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:59.865 [2024-07-13 11:31:34.563848] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:59.865 [2024-07-13 11:31:34.563896] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:59.865 pt2 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:59.865 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:00.123 malloc3 00:20:00.123 11:31:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:00.381 [2024-07-13 11:31:35.054394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:00.381 
[2024-07-13 11:31:35.054504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.381 [2024-07-13 11:31:35.054542] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:20:00.381 [2024-07-13 11:31:35.054565] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.381 [2024-07-13 11:31:35.056533] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.381 [2024-07-13 11:31:35.056586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:00.381 pt3 00:20:00.381 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:00.381 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:00.381 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:00.639 [2024-07-13 11:31:35.246451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:00.639 [2024-07-13 11:31:35.248348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:00.639 [2024-07-13 11:31:35.248440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:00.639 [2024-07-13 11:31:35.248682] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:00.639 [2024-07-13 11:31:35.248708] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:00.639 [2024-07-13 11:31:35.248831] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:00.639 [2024-07-13 11:31:35.249196] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:00.639 [2024-07-13 11:31:35.249236] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:20:00.639 [2024-07-13 11:31:35.249399] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.639 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:00.898 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.898 "name": "raid_bdev1", 00:20:00.898 "uuid": "a06a7560-11e7-4c03-965b-3d62a04824e3", 00:20:00.898 "strip_size_kb": 64, 00:20:00.898 "state": "online", 00:20:00.898 "raid_level": "concat", 00:20:00.898 "superblock": true, 00:20:00.898 "num_base_bdevs": 3, 00:20:00.898 "num_base_bdevs_discovered": 3, 00:20:00.898 "num_base_bdevs_operational": 3, 00:20:00.898 "base_bdevs_list": [ 00:20:00.898 { 00:20:00.898 "name": "pt1", 00:20:00.898 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:00.898 "is_configured": true, 00:20:00.898 "data_offset": 2048, 00:20:00.898 "data_size": 63488 00:20:00.898 }, 00:20:00.898 { 00:20:00.898 "name": "pt2", 00:20:00.898 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:00.898 "is_configured": true, 00:20:00.898 "data_offset": 2048, 00:20:00.898 "data_size": 63488 00:20:00.898 }, 00:20:00.898 { 00:20:00.898 "name": "pt3", 00:20:00.898 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:00.898 "is_configured": true, 00:20:00.898 "data_offset": 2048, 00:20:00.898 "data_size": 63488 00:20:00.898 } 00:20:00.898 ] 00:20:00.898 }' 00:20:00.898 11:31:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.898 11:31:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.465 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:01.465 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:01.465 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:01.465 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:01.465 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:01.465 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:01.465 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:01.465 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:01.723 [2024-07-13 11:31:36.328192] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.723 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:01.723 "name": "raid_bdev1", 00:20:01.723 "aliases": [ 00:20:01.723 "a06a7560-11e7-4c03-965b-3d62a04824e3" 00:20:01.723 ], 00:20:01.723 "product_name": "Raid Volume", 00:20:01.724 "block_size": 512, 00:20:01.724 "num_blocks": 190464, 00:20:01.724 "uuid": "a06a7560-11e7-4c03-965b-3d62a04824e3", 00:20:01.724 "assigned_rate_limits": { 00:20:01.724 "rw_ios_per_sec": 0, 00:20:01.724 "rw_mbytes_per_sec": 0, 00:20:01.724 "r_mbytes_per_sec": 0, 00:20:01.724 "w_mbytes_per_sec": 0 00:20:01.724 }, 00:20:01.724 "claimed": false, 00:20:01.724 "zoned": false, 00:20:01.724 "supported_io_types": { 00:20:01.724 "read": true, 00:20:01.724 "write": true, 00:20:01.724 "unmap": true, 00:20:01.724 "flush": true, 00:20:01.724 "reset": true, 00:20:01.724 "nvme_admin": false, 00:20:01.724 "nvme_io": false, 00:20:01.724 "nvme_io_md": false, 00:20:01.724 "write_zeroes": true, 00:20:01.724 "zcopy": false, 00:20:01.724 "get_zone_info": false, 00:20:01.724 "zone_management": false, 00:20:01.724 
"zone_append": false, 00:20:01.724 "compare": false, 00:20:01.724 "compare_and_write": false, 00:20:01.724 "abort": false, 00:20:01.724 "seek_hole": false, 00:20:01.724 "seek_data": false, 00:20:01.724 "copy": false, 00:20:01.724 "nvme_iov_md": false 00:20:01.724 }, 00:20:01.724 "memory_domains": [ 00:20:01.724 { 00:20:01.724 "dma_device_id": "system", 00:20:01.724 "dma_device_type": 1 00:20:01.724 }, 00:20:01.724 { 00:20:01.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.724 "dma_device_type": 2 00:20:01.724 }, 00:20:01.724 { 00:20:01.724 "dma_device_id": "system", 00:20:01.724 "dma_device_type": 1 00:20:01.724 }, 00:20:01.724 { 00:20:01.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.724 "dma_device_type": 2 00:20:01.724 }, 00:20:01.724 { 00:20:01.724 "dma_device_id": "system", 00:20:01.724 "dma_device_type": 1 00:20:01.724 }, 00:20:01.724 { 00:20:01.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.724 "dma_device_type": 2 00:20:01.724 } 00:20:01.724 ], 00:20:01.724 "driver_specific": { 00:20:01.724 "raid": { 00:20:01.724 "uuid": "a06a7560-11e7-4c03-965b-3d62a04824e3", 00:20:01.724 "strip_size_kb": 64, 00:20:01.724 "state": "online", 00:20:01.724 "raid_level": "concat", 00:20:01.724 "superblock": true, 00:20:01.724 "num_base_bdevs": 3, 00:20:01.724 "num_base_bdevs_discovered": 3, 00:20:01.724 "num_base_bdevs_operational": 3, 00:20:01.724 "base_bdevs_list": [ 00:20:01.724 { 00:20:01.724 "name": "pt1", 00:20:01.724 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:01.724 "is_configured": true, 00:20:01.724 "data_offset": 2048, 00:20:01.724 "data_size": 63488 00:20:01.724 }, 00:20:01.724 { 00:20:01.724 "name": "pt2", 00:20:01.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:01.724 "is_configured": true, 00:20:01.724 "data_offset": 2048, 00:20:01.724 "data_size": 63488 00:20:01.724 }, 00:20:01.724 { 00:20:01.724 "name": "pt3", 00:20:01.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:01.724 "is_configured": true, 00:20:01.724 "data_offset": 2048, 00:20:01.724 "data_size": 63488 00:20:01.724 } 00:20:01.724 ] 00:20:01.724 } 00:20:01.724 } 00:20:01.724 }' 00:20:01.724 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:01.724 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:01.724 pt2 00:20:01.724 pt3' 00:20:01.724 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:01.724 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:01.724 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:01.982 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:01.982 "name": "pt1", 00:20:01.982 "aliases": [ 00:20:01.982 "00000000-0000-0000-0000-000000000001" 00:20:01.982 ], 00:20:01.982 "product_name": "passthru", 00:20:01.982 "block_size": 512, 00:20:01.982 "num_blocks": 65536, 00:20:01.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:01.982 "assigned_rate_limits": { 00:20:01.982 "rw_ios_per_sec": 0, 00:20:01.982 "rw_mbytes_per_sec": 0, 00:20:01.982 "r_mbytes_per_sec": 0, 00:20:01.982 "w_mbytes_per_sec": 0 00:20:01.982 }, 00:20:01.982 "claimed": true, 00:20:01.982 "claim_type": "exclusive_write", 00:20:01.982 "zoned": false, 00:20:01.982 "supported_io_types": { 
00:20:01.982 "read": true, 00:20:01.982 "write": true, 00:20:01.982 "unmap": true, 00:20:01.982 "flush": true, 00:20:01.982 "reset": true, 00:20:01.982 "nvme_admin": false, 00:20:01.982 "nvme_io": false, 00:20:01.982 "nvme_io_md": false, 00:20:01.983 "write_zeroes": true, 00:20:01.983 "zcopy": true, 00:20:01.983 "get_zone_info": false, 00:20:01.983 "zone_management": false, 00:20:01.983 "zone_append": false, 00:20:01.983 "compare": false, 00:20:01.983 "compare_and_write": false, 00:20:01.983 "abort": true, 00:20:01.983 "seek_hole": false, 00:20:01.983 "seek_data": false, 00:20:01.983 "copy": true, 00:20:01.983 "nvme_iov_md": false 00:20:01.983 }, 00:20:01.983 "memory_domains": [ 00:20:01.983 { 00:20:01.983 "dma_device_id": "system", 00:20:01.983 "dma_device_type": 1 00:20:01.983 }, 00:20:01.983 { 00:20:01.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.983 "dma_device_type": 2 00:20:01.983 } 00:20:01.983 ], 00:20:01.983 "driver_specific": { 00:20:01.983 "passthru": { 00:20:01.983 "name": "pt1", 00:20:01.983 "base_bdev_name": "malloc1" 00:20:01.983 } 00:20:01.983 } 00:20:01.983 }' 00:20:01.983 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:01.983 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.241 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:02.241 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.241 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.241 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:02.241 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.241 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.241 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.241 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.500 11:31:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.500 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.500 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:02.500 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:02.500 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:02.500 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:02.500 "name": "pt2", 00:20:02.500 "aliases": [ 00:20:02.500 "00000000-0000-0000-0000-000000000002" 00:20:02.500 ], 00:20:02.500 "product_name": "passthru", 00:20:02.500 "block_size": 512, 00:20:02.500 "num_blocks": 65536, 00:20:02.500 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:02.500 "assigned_rate_limits": { 00:20:02.500 "rw_ios_per_sec": 0, 00:20:02.500 "rw_mbytes_per_sec": 0, 00:20:02.500 "r_mbytes_per_sec": 0, 00:20:02.500 "w_mbytes_per_sec": 0 00:20:02.500 }, 00:20:02.500 "claimed": true, 00:20:02.500 "claim_type": "exclusive_write", 00:20:02.500 "zoned": false, 00:20:02.500 "supported_io_types": { 00:20:02.500 "read": true, 00:20:02.500 "write": true, 00:20:02.500 "unmap": true, 00:20:02.500 "flush": true, 00:20:02.500 "reset": true, 00:20:02.500 
"nvme_admin": false, 00:20:02.500 "nvme_io": false, 00:20:02.500 "nvme_io_md": false, 00:20:02.500 "write_zeroes": true, 00:20:02.500 "zcopy": true, 00:20:02.500 "get_zone_info": false, 00:20:02.500 "zone_management": false, 00:20:02.500 "zone_append": false, 00:20:02.500 "compare": false, 00:20:02.500 "compare_and_write": false, 00:20:02.500 "abort": true, 00:20:02.500 "seek_hole": false, 00:20:02.500 "seek_data": false, 00:20:02.500 "copy": true, 00:20:02.500 "nvme_iov_md": false 00:20:02.500 }, 00:20:02.500 "memory_domains": [ 00:20:02.500 { 00:20:02.500 "dma_device_id": "system", 00:20:02.500 "dma_device_type": 1 00:20:02.500 }, 00:20:02.500 { 00:20:02.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.500 "dma_device_type": 2 00:20:02.500 } 00:20:02.500 ], 00:20:02.500 "driver_specific": { 00:20:02.500 "passthru": { 00:20:02.500 "name": "pt2", 00:20:02.500 "base_bdev_name": "malloc2" 00:20:02.500 } 00:20:02.500 } 00:20:02.500 }' 00:20:02.500 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.758 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.758 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:02.758 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.758 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.758 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:02.758 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.758 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.017 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.017 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.017 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.017 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.017 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.017 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:03.017 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.276 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.276 "name": "pt3", 00:20:03.276 "aliases": [ 00:20:03.276 "00000000-0000-0000-0000-000000000003" 00:20:03.276 ], 00:20:03.276 "product_name": "passthru", 00:20:03.276 "block_size": 512, 00:20:03.276 "num_blocks": 65536, 00:20:03.276 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:03.276 "assigned_rate_limits": { 00:20:03.276 "rw_ios_per_sec": 0, 00:20:03.276 "rw_mbytes_per_sec": 0, 00:20:03.276 "r_mbytes_per_sec": 0, 00:20:03.276 "w_mbytes_per_sec": 0 00:20:03.276 }, 00:20:03.276 "claimed": true, 00:20:03.276 "claim_type": "exclusive_write", 00:20:03.276 "zoned": false, 00:20:03.276 "supported_io_types": { 00:20:03.276 "read": true, 00:20:03.276 "write": true, 00:20:03.276 "unmap": true, 00:20:03.276 "flush": true, 00:20:03.276 "reset": true, 00:20:03.276 "nvme_admin": false, 00:20:03.276 "nvme_io": false, 00:20:03.276 "nvme_io_md": false, 00:20:03.276 "write_zeroes": true, 00:20:03.276 "zcopy": true, 
00:20:03.276 "get_zone_info": false, 00:20:03.276 "zone_management": false, 00:20:03.276 "zone_append": false, 00:20:03.276 "compare": false, 00:20:03.276 "compare_and_write": false, 00:20:03.276 "abort": true, 00:20:03.276 "seek_hole": false, 00:20:03.276 "seek_data": false, 00:20:03.276 "copy": true, 00:20:03.276 "nvme_iov_md": false 00:20:03.276 }, 00:20:03.276 "memory_domains": [ 00:20:03.276 { 00:20:03.276 "dma_device_id": "system", 00:20:03.276 "dma_device_type": 1 00:20:03.276 }, 00:20:03.276 { 00:20:03.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.276 "dma_device_type": 2 00:20:03.276 } 00:20:03.276 ], 00:20:03.276 "driver_specific": { 00:20:03.276 "passthru": { 00:20:03.276 "name": "pt3", 00:20:03.276 "base_bdev_name": "malloc3" 00:20:03.276 } 00:20:03.276 } 00:20:03.276 }' 00:20:03.276 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.276 11:31:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.535 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.535 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.535 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.535 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.535 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.535 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.535 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.535 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.535 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.795 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.795 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:03.795 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:03.795 [2024-07-13 11:31:38.492279] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:03.795 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a06a7560-11e7-4c03-965b-3d62a04824e3 00:20:03.795 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a06a7560-11e7-4c03-965b-3d62a04824e3 ']' 00:20:03.795 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:04.070 [2024-07-13 11:31:38.760104] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:04.070 [2024-07-13 11:31:38.760127] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:04.070 [2024-07-13 11:31:38.760217] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:04.070 [2024-07-13 11:31:38.760281] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:04.070 [2024-07-13 11:31:38.760292] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:20:04.070 11:31:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.070 11:31:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:04.339 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:04.339 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:04.339 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:04.339 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:04.598 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:04.598 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:04.857 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:04.857 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:05.115 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:05.115 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:05.374 11:31:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:05.374 [2024-07-13 11:31:40.055255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:05.374 [2024-07-13 11:31:40.057028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:05.374 [2024-07-13 11:31:40.057100] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:05.374 [2024-07-13 11:31:40.057165] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:05.374 [2024-07-13 11:31:40.057252] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:05.374 [2024-07-13 11:31:40.057290] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:05.374 [2024-07-13 11:31:40.057319] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:05.374 [2024-07-13 11:31:40.057331] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:20:05.374 request: 00:20:05.374 { 00:20:05.374 "name": "raid_bdev1", 00:20:05.374 "raid_level": "concat", 00:20:05.374 "base_bdevs": [ 00:20:05.374 "malloc1", 00:20:05.374 "malloc2", 00:20:05.374 "malloc3" 00:20:05.374 ], 00:20:05.374 "strip_size_kb": 64, 00:20:05.374 "superblock": false, 00:20:05.374 "method": "bdev_raid_create", 00:20:05.374 "req_id": 1 00:20:05.374 } 00:20:05.374 Got JSON-RPC error response 00:20:05.374 response: 00:20:05.374 { 00:20:05.374 "code": -17, 00:20:05.374 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:05.374 } 00:20:05.374 11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:20:05.374 11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:05.374 11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:05.374 11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:05.374 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.374 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:05.633 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:05.633 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:05.633 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:05.891 [2024-07-13 11:31:40.443233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:05.891 [2024-07-13 11:31:40.443287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.891 [2024-07-13 11:31:40.443321] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:05.891 [2024-07-13 11:31:40.443339] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.891 [2024-07-13 11:31:40.445218] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.891 [2024-07-13 
11:31:40.445263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:05.891 [2024-07-13 11:31:40.445356] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:05.891 [2024-07-13 11:31:40.445414] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:05.891 pt1 00:20:05.891 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:05.891 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:05.891 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:05.891 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:05.891 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:05.891 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:05.891 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:05.891 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:05.892 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:05.892 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:05.892 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.892 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.150 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.150 "name": "raid_bdev1", 00:20:06.150 "uuid": "a06a7560-11e7-4c03-965b-3d62a04824e3", 00:20:06.150 "strip_size_kb": 64, 00:20:06.150 "state": "configuring", 00:20:06.150 "raid_level": "concat", 00:20:06.150 "superblock": true, 00:20:06.150 "num_base_bdevs": 3, 00:20:06.150 "num_base_bdevs_discovered": 1, 00:20:06.150 "num_base_bdevs_operational": 3, 00:20:06.150 "base_bdevs_list": [ 00:20:06.150 { 00:20:06.150 "name": "pt1", 00:20:06.150 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:06.150 "is_configured": true, 00:20:06.150 "data_offset": 2048, 00:20:06.150 "data_size": 63488 00:20:06.150 }, 00:20:06.150 { 00:20:06.150 "name": null, 00:20:06.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:06.150 "is_configured": false, 00:20:06.150 "data_offset": 2048, 00:20:06.150 "data_size": 63488 00:20:06.150 }, 00:20:06.150 { 00:20:06.150 "name": null, 00:20:06.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:06.150 "is_configured": false, 00:20:06.150 "data_offset": 2048, 00:20:06.150 "data_size": 63488 00:20:06.150 } 00:20:06.150 ] 00:20:06.150 }' 00:20:06.150 11:31:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:06.150 11:31:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.717 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:20:06.717 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:06.976 [2024-07-13 11:31:41.575813] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:06.976 [2024-07-13 11:31:41.575886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:06.976 [2024-07-13 11:31:41.575923] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:06.976 [2024-07-13 11:31:41.575943] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:06.976 [2024-07-13 11:31:41.576423] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:06.976 [2024-07-13 11:31:41.576461] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:06.976 [2024-07-13 11:31:41.576569] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:06.976 [2024-07-13 11:31:41.576621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:06.976 pt2 00:20:06.976 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:07.235 [2024-07-13 11:31:41.771846] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.235 11:31:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.493 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:07.493 "name": "raid_bdev1", 00:20:07.493 "uuid": "a06a7560-11e7-4c03-965b-3d62a04824e3", 00:20:07.493 "strip_size_kb": 64, 00:20:07.493 "state": "configuring", 00:20:07.493 "raid_level": "concat", 00:20:07.493 "superblock": true, 00:20:07.493 "num_base_bdevs": 3, 00:20:07.493 "num_base_bdevs_discovered": 1, 00:20:07.493 "num_base_bdevs_operational": 3, 00:20:07.493 "base_bdevs_list": [ 00:20:07.493 { 00:20:07.493 "name": "pt1", 00:20:07.493 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:07.493 "is_configured": true, 00:20:07.493 "data_offset": 2048, 00:20:07.493 "data_size": 63488 00:20:07.493 }, 00:20:07.493 { 00:20:07.493 "name": null, 00:20:07.493 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:07.493 "is_configured": false, 00:20:07.493 "data_offset": 2048, 00:20:07.493 
"data_size": 63488 00:20:07.493 }, 00:20:07.493 { 00:20:07.493 "name": null, 00:20:07.493 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:07.493 "is_configured": false, 00:20:07.493 "data_offset": 2048, 00:20:07.493 "data_size": 63488 00:20:07.493 } 00:20:07.493 ] 00:20:07.493 }' 00:20:07.493 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:07.493 11:31:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.060 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:08.060 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:08.060 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:08.317 [2024-07-13 11:31:42.879999] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:08.317 [2024-07-13 11:31:42.880090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.317 [2024-07-13 11:31:42.880121] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:08.317 [2024-07-13 11:31:42.880143] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.317 [2024-07-13 11:31:42.880607] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.317 [2024-07-13 11:31:42.880673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:08.317 [2024-07-13 11:31:42.880778] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:08.317 [2024-07-13 11:31:42.880804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:08.317 pt2 00:20:08.317 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:08.317 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:08.317 11:31:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:08.573 [2024-07-13 11:31:43.144044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:08.573 [2024-07-13 11:31:43.144120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:08.573 [2024-07-13 11:31:43.144147] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:08.573 [2024-07-13 11:31:43.144172] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:08.573 [2024-07-13 11:31:43.144627] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:08.573 [2024-07-13 11:31:43.144677] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:08.573 [2024-07-13 11:31:43.144779] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:08.573 [2024-07-13 11:31:43.144804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:08.573 [2024-07-13 11:31:43.144933] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:20:08.573 [2024-07-13 11:31:43.144962] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 
512 00:20:08.573 [2024-07-13 11:31:43.145055] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:08.573 [2024-07-13 11:31:43.145386] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:20:08.573 [2024-07-13 11:31:43.145411] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:20:08.573 [2024-07-13 11:31:43.145541] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:08.573 pt3 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.573 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:08.831 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.831 "name": "raid_bdev1", 00:20:08.831 "uuid": "a06a7560-11e7-4c03-965b-3d62a04824e3", 00:20:08.831 "strip_size_kb": 64, 00:20:08.831 "state": "online", 00:20:08.831 "raid_level": "concat", 00:20:08.831 "superblock": true, 00:20:08.831 "num_base_bdevs": 3, 00:20:08.831 "num_base_bdevs_discovered": 3, 00:20:08.831 "num_base_bdevs_operational": 3, 00:20:08.831 "base_bdevs_list": [ 00:20:08.831 { 00:20:08.831 "name": "pt1", 00:20:08.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:08.831 "is_configured": true, 00:20:08.831 "data_offset": 2048, 00:20:08.831 "data_size": 63488 00:20:08.831 }, 00:20:08.831 { 00:20:08.831 "name": "pt2", 00:20:08.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:08.831 "is_configured": true, 00:20:08.831 "data_offset": 2048, 00:20:08.831 "data_size": 63488 00:20:08.831 }, 00:20:08.831 { 00:20:08.831 "name": "pt3", 00:20:08.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:08.831 "is_configured": true, 00:20:08.831 "data_offset": 2048, 00:20:08.831 "data_size": 63488 00:20:08.831 } 00:20:08.831 ] 00:20:08.831 }' 00:20:08.831 11:31:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.831 11:31:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.397 
11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:09.397 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:09.397 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:09.397 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:09.397 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:09.397 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:09.397 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:09.397 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:09.656 [2024-07-13 11:31:44.271516] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:09.656 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:09.656 "name": "raid_bdev1", 00:20:09.656 "aliases": [ 00:20:09.656 "a06a7560-11e7-4c03-965b-3d62a04824e3" 00:20:09.656 ], 00:20:09.656 "product_name": "Raid Volume", 00:20:09.656 "block_size": 512, 00:20:09.656 "num_blocks": 190464, 00:20:09.656 "uuid": "a06a7560-11e7-4c03-965b-3d62a04824e3", 00:20:09.656 "assigned_rate_limits": { 00:20:09.656 "rw_ios_per_sec": 0, 00:20:09.656 "rw_mbytes_per_sec": 0, 00:20:09.656 "r_mbytes_per_sec": 0, 00:20:09.656 "w_mbytes_per_sec": 0 00:20:09.656 }, 00:20:09.656 "claimed": false, 00:20:09.656 "zoned": false, 00:20:09.656 "supported_io_types": { 00:20:09.656 "read": true, 00:20:09.656 "write": true, 00:20:09.656 "unmap": true, 00:20:09.656 "flush": true, 00:20:09.656 "reset": true, 00:20:09.656 "nvme_admin": false, 00:20:09.656 "nvme_io": false, 00:20:09.656 "nvme_io_md": false, 00:20:09.656 "write_zeroes": true, 00:20:09.656 "zcopy": false, 00:20:09.656 "get_zone_info": false, 00:20:09.656 "zone_management": false, 00:20:09.656 "zone_append": false, 00:20:09.656 "compare": false, 00:20:09.656 "compare_and_write": false, 00:20:09.656 "abort": false, 00:20:09.656 "seek_hole": false, 00:20:09.656 "seek_data": false, 00:20:09.656 "copy": false, 00:20:09.656 "nvme_iov_md": false 00:20:09.656 }, 00:20:09.656 "memory_domains": [ 00:20:09.656 { 00:20:09.656 "dma_device_id": "system", 00:20:09.656 "dma_device_type": 1 00:20:09.656 }, 00:20:09.656 { 00:20:09.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.656 "dma_device_type": 2 00:20:09.656 }, 00:20:09.656 { 00:20:09.656 "dma_device_id": "system", 00:20:09.656 "dma_device_type": 1 00:20:09.656 }, 00:20:09.656 { 00:20:09.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.656 "dma_device_type": 2 00:20:09.656 }, 00:20:09.656 { 00:20:09.656 "dma_device_id": "system", 00:20:09.656 "dma_device_type": 1 00:20:09.656 }, 00:20:09.656 { 00:20:09.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.656 "dma_device_type": 2 00:20:09.656 } 00:20:09.656 ], 00:20:09.656 "driver_specific": { 00:20:09.656 "raid": { 00:20:09.656 "uuid": "a06a7560-11e7-4c03-965b-3d62a04824e3", 00:20:09.656 "strip_size_kb": 64, 00:20:09.656 "state": "online", 00:20:09.656 "raid_level": "concat", 00:20:09.656 "superblock": true, 00:20:09.656 "num_base_bdevs": 3, 00:20:09.656 "num_base_bdevs_discovered": 3, 00:20:09.656 "num_base_bdevs_operational": 3, 00:20:09.656 "base_bdevs_list": [ 00:20:09.656 { 00:20:09.656 "name": 
"pt1", 00:20:09.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.656 "is_configured": true, 00:20:09.656 "data_offset": 2048, 00:20:09.656 "data_size": 63488 00:20:09.656 }, 00:20:09.656 { 00:20:09.656 "name": "pt2", 00:20:09.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:09.656 "is_configured": true, 00:20:09.656 "data_offset": 2048, 00:20:09.656 "data_size": 63488 00:20:09.656 }, 00:20:09.656 { 00:20:09.656 "name": "pt3", 00:20:09.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:09.656 "is_configured": true, 00:20:09.656 "data_offset": 2048, 00:20:09.656 "data_size": 63488 00:20:09.656 } 00:20:09.656 ] 00:20:09.656 } 00:20:09.656 } 00:20:09.656 }' 00:20:09.656 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:09.656 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:09.656 pt2 00:20:09.656 pt3' 00:20:09.656 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:09.656 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:09.656 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:09.915 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:09.915 "name": "pt1", 00:20:09.915 "aliases": [ 00:20:09.915 "00000000-0000-0000-0000-000000000001" 00:20:09.915 ], 00:20:09.915 "product_name": "passthru", 00:20:09.915 "block_size": 512, 00:20:09.915 "num_blocks": 65536, 00:20:09.915 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:09.915 "assigned_rate_limits": { 00:20:09.915 "rw_ios_per_sec": 0, 00:20:09.915 "rw_mbytes_per_sec": 0, 00:20:09.915 "r_mbytes_per_sec": 0, 00:20:09.915 "w_mbytes_per_sec": 0 00:20:09.915 }, 00:20:09.915 "claimed": true, 00:20:09.915 "claim_type": "exclusive_write", 00:20:09.915 "zoned": false, 00:20:09.915 "supported_io_types": { 00:20:09.915 "read": true, 00:20:09.915 "write": true, 00:20:09.915 "unmap": true, 00:20:09.915 "flush": true, 00:20:09.915 "reset": true, 00:20:09.915 "nvme_admin": false, 00:20:09.915 "nvme_io": false, 00:20:09.915 "nvme_io_md": false, 00:20:09.915 "write_zeroes": true, 00:20:09.915 "zcopy": true, 00:20:09.915 "get_zone_info": false, 00:20:09.915 "zone_management": false, 00:20:09.915 "zone_append": false, 00:20:09.915 "compare": false, 00:20:09.915 "compare_and_write": false, 00:20:09.915 "abort": true, 00:20:09.915 "seek_hole": false, 00:20:09.915 "seek_data": false, 00:20:09.915 "copy": true, 00:20:09.915 "nvme_iov_md": false 00:20:09.915 }, 00:20:09.915 "memory_domains": [ 00:20:09.915 { 00:20:09.915 "dma_device_id": "system", 00:20:09.915 "dma_device_type": 1 00:20:09.915 }, 00:20:09.915 { 00:20:09.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.915 "dma_device_type": 2 00:20:09.915 } 00:20:09.915 ], 00:20:09.915 "driver_specific": { 00:20:09.915 "passthru": { 00:20:09.915 "name": "pt1", 00:20:09.915 "base_bdev_name": "malloc1" 00:20:09.915 } 00:20:09.915 } 00:20:09.915 }' 00:20:09.915 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.172 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.172 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.172 11:31:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.172 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.172 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.172 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.431 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.431 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:10.431 11:31:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.431 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:10.431 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:10.431 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:10.431 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:10.431 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:10.689 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:10.689 "name": "pt2", 00:20:10.689 "aliases": [ 00:20:10.689 "00000000-0000-0000-0000-000000000002" 00:20:10.689 ], 00:20:10.689 "product_name": "passthru", 00:20:10.689 "block_size": 512, 00:20:10.689 "num_blocks": 65536, 00:20:10.689 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:10.689 "assigned_rate_limits": { 00:20:10.689 "rw_ios_per_sec": 0, 00:20:10.689 "rw_mbytes_per_sec": 0, 00:20:10.689 "r_mbytes_per_sec": 0, 00:20:10.689 "w_mbytes_per_sec": 0 00:20:10.689 }, 00:20:10.689 "claimed": true, 00:20:10.689 "claim_type": "exclusive_write", 00:20:10.689 "zoned": false, 00:20:10.689 "supported_io_types": { 00:20:10.689 "read": true, 00:20:10.689 "write": true, 00:20:10.689 "unmap": true, 00:20:10.689 "flush": true, 00:20:10.689 "reset": true, 00:20:10.689 "nvme_admin": false, 00:20:10.689 "nvme_io": false, 00:20:10.689 "nvme_io_md": false, 00:20:10.689 "write_zeroes": true, 00:20:10.689 "zcopy": true, 00:20:10.689 "get_zone_info": false, 00:20:10.690 "zone_management": false, 00:20:10.690 "zone_append": false, 00:20:10.690 "compare": false, 00:20:10.690 "compare_and_write": false, 00:20:10.690 "abort": true, 00:20:10.690 "seek_hole": false, 00:20:10.690 "seek_data": false, 00:20:10.690 "copy": true, 00:20:10.690 "nvme_iov_md": false 00:20:10.690 }, 00:20:10.690 "memory_domains": [ 00:20:10.690 { 00:20:10.690 "dma_device_id": "system", 00:20:10.690 "dma_device_type": 1 00:20:10.690 }, 00:20:10.690 { 00:20:10.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.690 "dma_device_type": 2 00:20:10.690 } 00:20:10.690 ], 00:20:10.690 "driver_specific": { 00:20:10.690 "passthru": { 00:20:10.690 "name": "pt2", 00:20:10.690 "base_bdev_name": "malloc2" 00:20:10.690 } 00:20:10.690 } 00:20:10.690 }' 00:20:10.690 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.690 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:10.948 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:10.948 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:10.948 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# jq .md_size 00:20:10.948 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:10.948 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:10.948 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.206 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.206 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.206 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.206 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.206 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:11.206 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:11.206 11:31:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:11.464 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:11.464 "name": "pt3", 00:20:11.464 "aliases": [ 00:20:11.464 "00000000-0000-0000-0000-000000000003" 00:20:11.464 ], 00:20:11.464 "product_name": "passthru", 00:20:11.464 "block_size": 512, 00:20:11.464 "num_blocks": 65536, 00:20:11.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:11.464 "assigned_rate_limits": { 00:20:11.464 "rw_ios_per_sec": 0, 00:20:11.464 "rw_mbytes_per_sec": 0, 00:20:11.464 "r_mbytes_per_sec": 0, 00:20:11.464 "w_mbytes_per_sec": 0 00:20:11.464 }, 00:20:11.464 "claimed": true, 00:20:11.464 "claim_type": "exclusive_write", 00:20:11.464 "zoned": false, 00:20:11.464 "supported_io_types": { 00:20:11.464 "read": true, 00:20:11.464 "write": true, 00:20:11.464 "unmap": true, 00:20:11.464 "flush": true, 00:20:11.464 "reset": true, 00:20:11.464 "nvme_admin": false, 00:20:11.464 "nvme_io": false, 00:20:11.464 "nvme_io_md": false, 00:20:11.464 "write_zeroes": true, 00:20:11.464 "zcopy": true, 00:20:11.464 "get_zone_info": false, 00:20:11.464 "zone_management": false, 00:20:11.464 "zone_append": false, 00:20:11.464 "compare": false, 00:20:11.464 "compare_and_write": false, 00:20:11.464 "abort": true, 00:20:11.464 "seek_hole": false, 00:20:11.464 "seek_data": false, 00:20:11.464 "copy": true, 00:20:11.464 "nvme_iov_md": false 00:20:11.464 }, 00:20:11.464 "memory_domains": [ 00:20:11.464 { 00:20:11.464 "dma_device_id": "system", 00:20:11.464 "dma_device_type": 1 00:20:11.464 }, 00:20:11.464 { 00:20:11.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.464 "dma_device_type": 2 00:20:11.464 } 00:20:11.464 ], 00:20:11.464 "driver_specific": { 00:20:11.464 "passthru": { 00:20:11.464 "name": "pt3", 00:20:11.464 "base_bdev_name": "malloc3" 00:20:11.464 } 00:20:11.464 } 00:20:11.464 }' 00:20:11.464 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.464 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:11.464 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:11.464 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.464 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:11.722 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:11.722 11:31:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.722 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:11.722 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:11.722 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.722 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:11.722 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:11.722 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:11.722 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:11.980 [2024-07-13 11:31:46.719930] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.238 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a06a7560-11e7-4c03-965b-3d62a04824e3 '!=' a06a7560-11e7-4c03-965b-3d62a04824e3 ']' 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 130284 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 130284 ']' 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 130284 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130284 00:20:12.239 killing process with pid 130284 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130284' 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 130284 00:20:12.239 11:31:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 130284 00:20:12.239 [2024-07-13 11:31:46.756740] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.239 [2024-07-13 11:31:46.756806] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:12.239 [2024-07-13 11:31:46.756854] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.239 [2024-07-13 11:31:46.756864] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:20:12.239 [2024-07-13 11:31:46.948063] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:13.171 ************************************ 00:20:13.171 END TEST raid_superblock_test 00:20:13.171 ************************************ 00:20:13.171 11:31:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@564 -- # return 0 00:20:13.171 00:20:13.171 real 0m15.168s 00:20:13.171 user 0m27.623s 00:20:13.171 sys 0m1.693s 00:20:13.171 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.171 11:31:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.171 11:31:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:13.171 11:31:47 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:20:13.171 11:31:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:13.171 11:31:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.172 11:31:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:13.172 ************************************ 00:20:13.172 START TEST raid_read_error_test 00:20:13.172 ************************************ 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # 
create_arg+=' -z 64' 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dH7kJu62Kr 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=130798 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 130798 /var/tmp/spdk-raid.sock 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 130798 ']' 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:13.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.172 11:31:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.430 [2024-07-13 11:31:47.982979] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:13.430 [2024-07-13 11:31:47.983360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130798 ] 00:20:13.430 [2024-07-13 11:31:48.158173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.687 [2024-07-13 11:31:48.384343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.945 [2024-07-13 11:31:48.550000] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.510 11:31:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.510 11:31:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:14.510 11:31:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:14.510 11:31:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:14.510 BaseBdev1_malloc 00:20:14.510 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:14.769 true 00:20:14.769 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:15.028 [2024-07-13 11:31:49.703598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:15.028 [2024-07-13 11:31:49.703710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.028 [2024-07-13 
11:31:49.703748] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:15.028 [2024-07-13 11:31:49.703769] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.028 [2024-07-13 11:31:49.705989] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.028 [2024-07-13 11:31:49.706037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:15.028 BaseBdev1 00:20:15.028 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:15.028 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:15.286 BaseBdev2_malloc 00:20:15.286 11:31:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:15.545 true 00:20:15.545 11:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:15.804 [2024-07-13 11:31:50.434677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:15.804 [2024-07-13 11:31:50.434766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.804 [2024-07-13 11:31:50.434801] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:15.804 [2024-07-13 11:31:50.434820] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.804 [2024-07-13 11:31:50.436775] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.804 [2024-07-13 11:31:50.436825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:15.804 BaseBdev2 00:20:15.804 11:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:15.804 11:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:16.063 BaseBdev3_malloc 00:20:16.063 11:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:16.322 true 00:20:16.322 11:31:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:16.322 [2024-07-13 11:31:51.039660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:16.322 [2024-07-13 11:31:51.039744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.322 [2024-07-13 11:31:51.039780] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:16.322 [2024-07-13 11:31:51.039808] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.322 [2024-07-13 11:31:51.041920] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.322 [2024-07-13 11:31:51.041973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:16.322 BaseBdev3 
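For reference, the per-base-bdev stack that raid_read_error_test has just built above (a malloc bdev, an error bdev wrapping it, and a passthru bdev named BaseBdevN on top) can be reproduced against a standalone SPDK target with the same rpc.py calls shown in the trace. A minimal sketch, assuming a target is already listening on /var/tmp/spdk-raid.sock and using the paths from this run; the bdev_raid_create call that follows in the trace then assembles the three passthru bdevs into raid_bdev1:

# sketch only: rebuild the malloc -> error -> passthru stack traced above
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
    # 32 MiB malloc bdev with 512-byte blocks (bdev_raid.sh@813)
    $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
    # error-injection bdev wrapping the malloc bdev; per the trace it is exposed as EE_BaseBdev${i}_malloc (bdev_raid.sh@814)
    $RPC bdev_error_create BaseBdev${i}_malloc
    # passthru bdev on top, claimed under the name BaseBdev${i} (bdev_raid.sh@815)
    $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done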
00:20:16.322 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:16.581 [2024-07-13 11:31:51.223749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:16.581 [2024-07-13 11:31:51.225557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:16.581 [2024-07-13 11:31:51.225641] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.581 [2024-07-13 11:31:51.225851] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:16.581 [2024-07-13 11:31:51.225872] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:16.581 [2024-07-13 11:31:51.226018] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:16.581 [2024-07-13 11:31:51.226352] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:16.581 [2024-07-13 11:31:51.226372] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:16.581 [2024-07-13 11:31:51.226492] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.581 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.839 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.839 "name": "raid_bdev1", 00:20:16.839 "uuid": "5d4de460-41fd-47d6-a91b-73c719123ab8", 00:20:16.839 "strip_size_kb": 64, 00:20:16.839 "state": "online", 00:20:16.839 "raid_level": "concat", 00:20:16.839 "superblock": true, 00:20:16.839 "num_base_bdevs": 3, 00:20:16.839 "num_base_bdevs_discovered": 3, 00:20:16.839 "num_base_bdevs_operational": 3, 00:20:16.839 "base_bdevs_list": [ 00:20:16.839 { 00:20:16.839 "name": "BaseBdev1", 00:20:16.839 "uuid": "8a5b4bc4-5eb2-56dd-8afa-b1286746c6b6", 00:20:16.839 "is_configured": true, 00:20:16.839 "data_offset": 2048, 00:20:16.839 "data_size": 63488 00:20:16.839 }, 00:20:16.839 { 00:20:16.839 
"name": "BaseBdev2", 00:20:16.839 "uuid": "dee181f4-89b1-5b8e-8196-827c0a6c1b63", 00:20:16.839 "is_configured": true, 00:20:16.839 "data_offset": 2048, 00:20:16.839 "data_size": 63488 00:20:16.839 }, 00:20:16.839 { 00:20:16.839 "name": "BaseBdev3", 00:20:16.839 "uuid": "604be872-2f94-5cf2-ab5d-5877fd66c9b5", 00:20:16.839 "is_configured": true, 00:20:16.839 "data_offset": 2048, 00:20:16.839 "data_size": 63488 00:20:16.839 } 00:20:16.839 ] 00:20:16.839 }' 00:20:16.839 11:31:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.839 11:31:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.406 11:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:17.406 11:31:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:17.664 [2024-07-13 11:31:52.196872] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:20:18.596 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.854 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.111 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.111 "name": "raid_bdev1", 00:20:19.111 "uuid": "5d4de460-41fd-47d6-a91b-73c719123ab8", 00:20:19.111 "strip_size_kb": 64, 00:20:19.111 "state": "online", 00:20:19.111 "raid_level": "concat", 00:20:19.111 "superblock": true, 00:20:19.111 "num_base_bdevs": 3, 00:20:19.111 "num_base_bdevs_discovered": 3, 00:20:19.111 "num_base_bdevs_operational": 3, 00:20:19.111 "base_bdevs_list": [ 00:20:19.111 { 00:20:19.111 "name": "BaseBdev1", 
00:20:19.111 "uuid": "8a5b4bc4-5eb2-56dd-8afa-b1286746c6b6", 00:20:19.111 "is_configured": true, 00:20:19.111 "data_offset": 2048, 00:20:19.111 "data_size": 63488 00:20:19.111 }, 00:20:19.111 { 00:20:19.111 "name": "BaseBdev2", 00:20:19.111 "uuid": "dee181f4-89b1-5b8e-8196-827c0a6c1b63", 00:20:19.111 "is_configured": true, 00:20:19.111 "data_offset": 2048, 00:20:19.111 "data_size": 63488 00:20:19.111 }, 00:20:19.111 { 00:20:19.111 "name": "BaseBdev3", 00:20:19.111 "uuid": "604be872-2f94-5cf2-ab5d-5877fd66c9b5", 00:20:19.111 "is_configured": true, 00:20:19.111 "data_offset": 2048, 00:20:19.111 "data_size": 63488 00:20:19.111 } 00:20:19.111 ] 00:20:19.111 }' 00:20:19.111 11:31:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.111 11:31:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.677 11:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:19.677 [2024-07-13 11:31:54.418819] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:19.677 [2024-07-13 11:31:54.418883] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.677 [2024-07-13 11:31:54.421346] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.677 [2024-07-13 11:31:54.421395] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.677 [2024-07-13 11:31:54.421429] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.677 [2024-07-13 11:31:54.421439] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:19.934 0 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 130798 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 130798 ']' 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 130798 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130798 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130798' 00:20:19.934 killing process with pid 130798 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 130798 00:20:19.934 [2024-07-13 11:31:54.453689] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.934 11:31:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 130798 00:20:19.934 [2024-07-13 11:31:54.602811] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.870 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dH7kJu62Kr 00:20:20.870 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:20.870 11:31:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:20.870 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:20:20.870 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:20:20.870 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:20.870 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:20.870 11:31:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:20:20.870 00:20:20.870 real 0m7.672s 00:20:20.870 user 0m11.740s 00:20:20.870 sys 0m0.975s 00:20:20.870 11:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:20.870 11:31:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:20.870 ************************************ 00:20:20.870 END TEST raid_read_error_test 00:20:20.870 ************************************ 00:20:21.129 11:31:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:21.129 11:31:55 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:20:21.129 11:31:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:21.129 11:31:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:21.129 11:31:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.129 ************************************ 00:20:21.129 START TEST raid_write_error_test 00:20:21.129 ************************************ 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local 
raid_bdev_name=raid_bdev1 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.yW5opPzV3J 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=130994 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 130994 /var/tmp/spdk-raid.sock 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 130994 ']' 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:21.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.129 11:31:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.129 [2024-07-13 11:31:55.714184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
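The bdevperf process started here was launched with -z, so it sits idle until the test triggers the queued job; the write-failure phase that follows mirrors the read-error flow traced earlier in this log. A minimal sketch combining the calls that appear in the trace (socket, log path and bdev names taken from this run):

# sketch only: arm a write failure and drive the queued bdevperf job
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
# inject write failures on the error bdev underneath BaseBdev1 (bdev_raid.sh@827)
$RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure
# kick off the 60 s randrw workload that bdevperf queued at startup (bdev_raid.sh@823)
$BDEVPERF_PY -s /var/tmp/spdk-raid.sock perform_tests
# the test then pulls the failure rate for raid_bdev1 out of the bdevperf log (bdev_raid.sh@843)
fail_per_s=$(grep -v Job /raidtest/tmp.yW5opPzV3J | grep raid_bdev1 | awk '{print $6}')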
00:20:21.129 [2024-07-13 11:31:55.715061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130994 ] 00:20:21.388 [2024-07-13 11:31:55.885870] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.388 [2024-07-13 11:31:56.050298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.646 [2024-07-13 11:31:56.213857] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:22.213 11:31:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.213 11:31:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:20:22.213 11:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:22.213 11:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:22.213 BaseBdev1_malloc 00:20:22.213 11:31:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:22.471 true 00:20:22.471 11:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:22.730 [2024-07-13 11:31:57.451445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:22.730 [2024-07-13 11:31:57.451550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:22.730 [2024-07-13 11:31:57.451585] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:22.730 [2024-07-13 11:31:57.451605] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:22.730 [2024-07-13 11:31:57.453892] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:22.730 [2024-07-13 11:31:57.453942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:22.730 BaseBdev1 00:20:22.730 11:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:22.730 11:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:22.991 BaseBdev2_malloc 00:20:22.991 11:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:23.248 true 00:20:23.248 11:31:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:23.506 [2024-07-13 11:31:58.061637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:23.506 [2024-07-13 11:31:58.061744] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:23.506 [2024-07-13 11:31:58.061787] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:23.506 [2024-07-13 
11:31:58.061807] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:23.506 [2024-07-13 11:31:58.064091] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:23.506 [2024-07-13 11:31:58.064139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:23.506 BaseBdev2 00:20:23.506 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:23.507 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:23.765 BaseBdev3_malloc 00:20:23.765 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:23.765 true 00:20:23.765 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:24.023 [2024-07-13 11:31:58.646413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:24.023 [2024-07-13 11:31:58.646526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.023 [2024-07-13 11:31:58.646559] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:24.023 [2024-07-13 11:31:58.646584] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.023 [2024-07-13 11:31:58.648862] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.023 [2024-07-13 11:31:58.648920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:24.023 BaseBdev3 00:20:24.023 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:24.281 [2024-07-13 11:31:58.886525] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.281 [2024-07-13 11:31:58.888459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.281 [2024-07-13 11:31:58.888545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:24.281 [2024-07-13 11:31:58.888803] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:20:24.281 [2024-07-13 11:31:58.888829] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:24.281 [2024-07-13 11:31:58.888952] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:24.281 [2024-07-13 11:31:58.889321] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:20:24.281 [2024-07-13 11:31:58.889346] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:20:24.281 [2024-07-13 11:31:58.889501] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:24.281 
11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.281 11:31:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:24.539 11:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.539 "name": "raid_bdev1", 00:20:24.539 "uuid": "a3f6b96e-f9f8-4130-9b1b-2b9ea775cb2b", 00:20:24.539 "strip_size_kb": 64, 00:20:24.539 "state": "online", 00:20:24.539 "raid_level": "concat", 00:20:24.539 "superblock": true, 00:20:24.540 "num_base_bdevs": 3, 00:20:24.540 "num_base_bdevs_discovered": 3, 00:20:24.540 "num_base_bdevs_operational": 3, 00:20:24.540 "base_bdevs_list": [ 00:20:24.540 { 00:20:24.540 "name": "BaseBdev1", 00:20:24.540 "uuid": "df9ebd16-8f8d-550d-8a50-333e511fce35", 00:20:24.540 "is_configured": true, 00:20:24.540 "data_offset": 2048, 00:20:24.540 "data_size": 63488 00:20:24.540 }, 00:20:24.540 { 00:20:24.540 "name": "BaseBdev2", 00:20:24.540 "uuid": "b9e7fa70-979b-5b29-95b7-662b34dd4481", 00:20:24.540 "is_configured": true, 00:20:24.540 "data_offset": 2048, 00:20:24.540 "data_size": 63488 00:20:24.540 }, 00:20:24.540 { 00:20:24.540 "name": "BaseBdev3", 00:20:24.540 "uuid": "55b2ee54-7d20-5f18-9fd1-bf390cfd887a", 00:20:24.540 "is_configured": true, 00:20:24.540 "data_offset": 2048, 00:20:24.540 "data_size": 63488 00:20:24.540 } 00:20:24.540 ] 00:20:24.540 }' 00:20:24.540 11:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.540 11:31:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.108 11:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:25.108 11:31:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:25.367 [2024-07-13 11:31:59.859629] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:20:26.303 11:32:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:26.562 11:32:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.562 "name": "raid_bdev1", 00:20:26.562 "uuid": "a3f6b96e-f9f8-4130-9b1b-2b9ea775cb2b", 00:20:26.562 "strip_size_kb": 64, 00:20:26.562 "state": "online", 00:20:26.562 "raid_level": "concat", 00:20:26.562 "superblock": true, 00:20:26.562 "num_base_bdevs": 3, 00:20:26.562 "num_base_bdevs_discovered": 3, 00:20:26.562 "num_base_bdevs_operational": 3, 00:20:26.562 "base_bdevs_list": [ 00:20:26.562 { 00:20:26.562 "name": "BaseBdev1", 00:20:26.562 "uuid": "df9ebd16-8f8d-550d-8a50-333e511fce35", 00:20:26.562 "is_configured": true, 00:20:26.562 "data_offset": 2048, 00:20:26.562 "data_size": 63488 00:20:26.562 }, 00:20:26.562 { 00:20:26.562 "name": "BaseBdev2", 00:20:26.562 "uuid": "b9e7fa70-979b-5b29-95b7-662b34dd4481", 00:20:26.562 "is_configured": true, 00:20:26.562 "data_offset": 2048, 00:20:26.562 "data_size": 63488 00:20:26.562 }, 00:20:26.562 { 00:20:26.562 "name": "BaseBdev3", 00:20:26.562 "uuid": "55b2ee54-7d20-5f18-9fd1-bf390cfd887a", 00:20:26.562 "is_configured": true, 00:20:26.562 "data_offset": 2048, 00:20:26.562 "data_size": 63488 00:20:26.562 } 00:20:26.562 ] 00:20:26.562 }' 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.562 11:32:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.498 11:32:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:27.498 [2024-07-13 11:32:02.216742] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:27.498 [2024-07-13 11:32:02.216810] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:27.498 [2024-07-13 11:32:02.219298] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:27.498 [2024-07-13 11:32:02.219362] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:27.498 [2024-07-13 11:32:02.219405] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:27.498 [2024-07-13 11:32:02.219416] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:20:27.498 0 00:20:27.498 11:32:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 130994 00:20:27.498 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 130994 ']' 00:20:27.498 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 130994 00:20:27.498 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:20:27.498 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.498 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 130994 00:20:27.757 killing process with pid 130994 00:20:27.757 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.757 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.757 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 130994' 00:20:27.757 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 130994 00:20:27.757 11:32:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 130994 00:20:27.757 [2024-07-13 11:32:02.250520] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:27.757 [2024-07-13 11:32:02.406796] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:29.149 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.yW5opPzV3J 00:20:29.149 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:29.149 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:29.149 ************************************ 00:20:29.149 END TEST raid_write_error_test 00:20:29.149 ************************************ 00:20:29.149 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:20:29.149 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:20:29.149 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:29.149 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:29.149 11:32:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:20:29.149 00:20:29.149 real 0m7.840s 00:20:29.150 user 0m12.078s 00:20:29.150 sys 0m0.834s 00:20:29.150 11:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:29.150 11:32:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.150 11:32:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:29.150 11:32:03 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:20:29.150 11:32:03 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:20:29.150 11:32:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:29.150 11:32:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:29.150 11:32:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:29.150 
************************************ 00:20:29.150 START TEST raid_state_function_test 00:20:29.150 ************************************ 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=131210 00:20:29.150 Process raid pid: 131210 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 131210' 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 131210 /var/tmp/spdk-raid.sock 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 131210 ']' 00:20:29.150 11:32:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:29.150 11:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:29.150 [2024-07-13 11:32:03.596166] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:29.150 [2024-07-13 11:32:03.596524] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.150 [2024-07-13 11:32:03.748719] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.408 [2024-07-13 11:32:03.932578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.408 [2024-07-13 11:32:04.120133] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:29.975 11:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.975 11:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:20:29.975 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:30.234 [2024-07-13 11:32:04.737351] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:30.234 [2024-07-13 11:32:04.737558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:30.234 [2024-07-13 11:32:04.737661] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:30.234 [2024-07-13 11:32:04.737723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:30.234 [2024-07-13 11:32:04.737808] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:30.234 [2024-07-13 11:32:04.737921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:30.234 11:32:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.234 "name": "Existed_Raid", 00:20:30.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.234 "strip_size_kb": 0, 00:20:30.234 "state": "configuring", 00:20:30.234 "raid_level": "raid1", 00:20:30.234 "superblock": false, 00:20:30.234 "num_base_bdevs": 3, 00:20:30.234 "num_base_bdevs_discovered": 0, 00:20:30.234 "num_base_bdevs_operational": 3, 00:20:30.234 "base_bdevs_list": [ 00:20:30.234 { 00:20:30.234 "name": "BaseBdev1", 00:20:30.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.234 "is_configured": false, 00:20:30.234 "data_offset": 0, 00:20:30.234 "data_size": 0 00:20:30.234 }, 00:20:30.234 { 00:20:30.234 "name": "BaseBdev2", 00:20:30.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.234 "is_configured": false, 00:20:30.234 "data_offset": 0, 00:20:30.234 "data_size": 0 00:20:30.234 }, 00:20:30.234 { 00:20:30.234 "name": "BaseBdev3", 00:20:30.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.234 "is_configured": false, 00:20:30.234 "data_offset": 0, 00:20:30.234 "data_size": 0 00:20:30.234 } 00:20:30.234 ] 00:20:30.234 }' 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.234 11:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.169 11:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:31.169 [2024-07-13 11:32:05.809408] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:31.169 [2024-07-13 11:32:05.809546] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:31.169 11:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:31.428 [2024-07-13 11:32:05.993442] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:31.428 [2024-07-13 11:32:05.993607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:31.428 [2024-07-13 11:32:05.993699] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:31.428 [2024-07-13 11:32:05.993853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:31.428 [2024-07-13 11:32:05.993945] bdev.c:8157:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev3 00:20:31.428 [2024-07-13 11:32:05.994002] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:31.428 11:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:31.686 [2024-07-13 11:32:06.215274] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:31.686 BaseBdev1 00:20:31.686 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:31.686 11:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:31.686 11:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:31.686 11:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:31.686 11:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:31.686 11:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:31.686 11:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:31.686 11:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:31.944 [ 00:20:31.944 { 00:20:31.944 "name": "BaseBdev1", 00:20:31.944 "aliases": [ 00:20:31.944 "eab2a14d-13ad-40ba-b712-016a5d674894" 00:20:31.944 ], 00:20:31.944 "product_name": "Malloc disk", 00:20:31.944 "block_size": 512, 00:20:31.944 "num_blocks": 65536, 00:20:31.944 "uuid": "eab2a14d-13ad-40ba-b712-016a5d674894", 00:20:31.944 "assigned_rate_limits": { 00:20:31.944 "rw_ios_per_sec": 0, 00:20:31.944 "rw_mbytes_per_sec": 0, 00:20:31.944 "r_mbytes_per_sec": 0, 00:20:31.944 "w_mbytes_per_sec": 0 00:20:31.944 }, 00:20:31.944 "claimed": true, 00:20:31.944 "claim_type": "exclusive_write", 00:20:31.944 "zoned": false, 00:20:31.944 "supported_io_types": { 00:20:31.944 "read": true, 00:20:31.944 "write": true, 00:20:31.944 "unmap": true, 00:20:31.944 "flush": true, 00:20:31.944 "reset": true, 00:20:31.944 "nvme_admin": false, 00:20:31.944 "nvme_io": false, 00:20:31.944 "nvme_io_md": false, 00:20:31.944 "write_zeroes": true, 00:20:31.944 "zcopy": true, 00:20:31.944 "get_zone_info": false, 00:20:31.944 "zone_management": false, 00:20:31.944 "zone_append": false, 00:20:31.944 "compare": false, 00:20:31.944 "compare_and_write": false, 00:20:31.944 "abort": true, 00:20:31.944 "seek_hole": false, 00:20:31.944 "seek_data": false, 00:20:31.944 "copy": true, 00:20:31.944 "nvme_iov_md": false 00:20:31.944 }, 00:20:31.944 "memory_domains": [ 00:20:31.944 { 00:20:31.944 "dma_device_id": "system", 00:20:31.944 "dma_device_type": 1 00:20:31.944 }, 00:20:31.944 { 00:20:31.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.944 "dma_device_type": 2 00:20:31.944 } 00:20:31.944 ], 00:20:31.944 "driver_specific": {} 00:20:31.944 } 00:20:31.944 ] 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.944 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.202 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.202 "name": "Existed_Raid", 00:20:32.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.202 "strip_size_kb": 0, 00:20:32.202 "state": "configuring", 00:20:32.202 "raid_level": "raid1", 00:20:32.202 "superblock": false, 00:20:32.202 "num_base_bdevs": 3, 00:20:32.202 "num_base_bdevs_discovered": 1, 00:20:32.202 "num_base_bdevs_operational": 3, 00:20:32.202 "base_bdevs_list": [ 00:20:32.202 { 00:20:32.202 "name": "BaseBdev1", 00:20:32.202 "uuid": "eab2a14d-13ad-40ba-b712-016a5d674894", 00:20:32.202 "is_configured": true, 00:20:32.202 "data_offset": 0, 00:20:32.202 "data_size": 65536 00:20:32.202 }, 00:20:32.202 { 00:20:32.202 "name": "BaseBdev2", 00:20:32.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.202 "is_configured": false, 00:20:32.202 "data_offset": 0, 00:20:32.202 "data_size": 0 00:20:32.202 }, 00:20:32.202 { 00:20:32.202 "name": "BaseBdev3", 00:20:32.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.202 "is_configured": false, 00:20:32.202 "data_offset": 0, 00:20:32.202 "data_size": 0 00:20:32.202 } 00:20:32.202 ] 00:20:32.202 }' 00:20:32.202 11:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.202 11:32:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:33.137 11:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:33.137 [2024-07-13 11:32:07.779731] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:33.137 [2024-07-13 11:32:07.779785] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:20:33.137 11:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:33.395 [2024-07-13 11:32:08.012016] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:33.395 [2024-07-13 11:32:08.013511] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:33.395 [2024-07-13 11:32:08.013576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:33.395 [2024-07-13 11:32:08.013588] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:33.395 [2024-07-13 11:32:08.013621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:33.395 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:33.395 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:33.395 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:33.395 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.395 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:33.396 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.396 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.396 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.396 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.396 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.396 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.396 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.396 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.396 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.653 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.653 "name": "Existed_Raid", 00:20:33.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.653 "strip_size_kb": 0, 00:20:33.653 "state": "configuring", 00:20:33.654 "raid_level": "raid1", 00:20:33.654 "superblock": false, 00:20:33.654 "num_base_bdevs": 3, 00:20:33.654 "num_base_bdevs_discovered": 1, 00:20:33.654 "num_base_bdevs_operational": 3, 00:20:33.654 "base_bdevs_list": [ 00:20:33.654 { 00:20:33.654 "name": "BaseBdev1", 00:20:33.654 "uuid": "eab2a14d-13ad-40ba-b712-016a5d674894", 00:20:33.654 "is_configured": true, 00:20:33.654 "data_offset": 0, 00:20:33.654 "data_size": 65536 00:20:33.654 }, 00:20:33.654 { 00:20:33.654 "name": "BaseBdev2", 00:20:33.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.654 "is_configured": false, 00:20:33.654 "data_offset": 0, 00:20:33.654 "data_size": 0 00:20:33.654 }, 00:20:33.654 { 00:20:33.654 "name": "BaseBdev3", 00:20:33.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.654 "is_configured": false, 00:20:33.654 "data_offset": 0, 00:20:33.654 "data_size": 0 00:20:33.654 } 00:20:33.654 ] 00:20:33.654 }' 00:20:33.654 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.654 11:32:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:34.219 11:32:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:34.477 [2024-07-13 11:32:09.133953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:34.477 BaseBdev2 00:20:34.477 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:34.477 11:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:34.477 11:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:34.477 11:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:34.477 11:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:34.477 11:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:34.477 11:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:34.734 11:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:34.992 [ 00:20:34.992 { 00:20:34.992 "name": "BaseBdev2", 00:20:34.992 "aliases": [ 00:20:34.992 "d72e6b2b-667e-4e0d-acab-0bd9d52e0f8e" 00:20:34.992 ], 00:20:34.992 "product_name": "Malloc disk", 00:20:34.992 "block_size": 512, 00:20:34.992 "num_blocks": 65536, 00:20:34.992 "uuid": "d72e6b2b-667e-4e0d-acab-0bd9d52e0f8e", 00:20:34.992 "assigned_rate_limits": { 00:20:34.992 "rw_ios_per_sec": 0, 00:20:34.992 "rw_mbytes_per_sec": 0, 00:20:34.992 "r_mbytes_per_sec": 0, 00:20:34.992 "w_mbytes_per_sec": 0 00:20:34.992 }, 00:20:34.992 "claimed": true, 00:20:34.992 "claim_type": "exclusive_write", 00:20:34.992 "zoned": false, 00:20:34.992 "supported_io_types": { 00:20:34.992 "read": true, 00:20:34.992 "write": true, 00:20:34.992 "unmap": true, 00:20:34.992 "flush": true, 00:20:34.992 "reset": true, 00:20:34.992 "nvme_admin": false, 00:20:34.992 "nvme_io": false, 00:20:34.992 "nvme_io_md": false, 00:20:34.992 "write_zeroes": true, 00:20:34.992 "zcopy": true, 00:20:34.992 "get_zone_info": false, 00:20:34.992 "zone_management": false, 00:20:34.992 "zone_append": false, 00:20:34.992 "compare": false, 00:20:34.992 "compare_and_write": false, 00:20:34.992 "abort": true, 00:20:34.992 "seek_hole": false, 00:20:34.992 "seek_data": false, 00:20:34.992 "copy": true, 00:20:34.992 "nvme_iov_md": false 00:20:34.992 }, 00:20:34.992 "memory_domains": [ 00:20:34.992 { 00:20:34.992 "dma_device_id": "system", 00:20:34.992 "dma_device_type": 1 00:20:34.992 }, 00:20:34.992 { 00:20:34.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.992 "dma_device_type": 2 00:20:34.992 } 00:20:34.992 ], 00:20:34.992 "driver_specific": {} 00:20:34.992 } 00:20:34.992 ] 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:34.992 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.250 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:35.250 "name": "Existed_Raid", 00:20:35.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.250 "strip_size_kb": 0, 00:20:35.250 "state": "configuring", 00:20:35.250 "raid_level": "raid1", 00:20:35.250 "superblock": false, 00:20:35.250 "num_base_bdevs": 3, 00:20:35.250 "num_base_bdevs_discovered": 2, 00:20:35.250 "num_base_bdevs_operational": 3, 00:20:35.250 "base_bdevs_list": [ 00:20:35.250 { 00:20:35.250 "name": "BaseBdev1", 00:20:35.250 "uuid": "eab2a14d-13ad-40ba-b712-016a5d674894", 00:20:35.250 "is_configured": true, 00:20:35.250 "data_offset": 0, 00:20:35.250 "data_size": 65536 00:20:35.250 }, 00:20:35.250 { 00:20:35.250 "name": "BaseBdev2", 00:20:35.250 "uuid": "d72e6b2b-667e-4e0d-acab-0bd9d52e0f8e", 00:20:35.250 "is_configured": true, 00:20:35.250 "data_offset": 0, 00:20:35.250 "data_size": 65536 00:20:35.250 }, 00:20:35.250 { 00:20:35.250 "name": "BaseBdev3", 00:20:35.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.250 "is_configured": false, 00:20:35.250 "data_offset": 0, 00:20:35.250 "data_size": 0 00:20:35.250 } 00:20:35.250 ] 00:20:35.250 }' 00:20:35.250 11:32:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:35.250 11:32:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.815 11:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:36.072 [2024-07-13 11:32:10.741430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:36.072 [2024-07-13 11:32:10.741503] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:20:36.072 [2024-07-13 11:32:10.741513] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:36.072 [2024-07-13 11:32:10.741623] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:36.072 [2024-07-13 11:32:10.741981] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000007580 00:20:36.072 [2024-07-13 11:32:10.742001] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:20:36.072 [2024-07-13 11:32:10.742248] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.072 BaseBdev3 00:20:36.072 11:32:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:36.072 11:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:36.072 11:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:36.072 11:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:36.072 11:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:36.072 11:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:36.072 11:32:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:36.330 11:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:36.589 [ 00:20:36.589 { 00:20:36.589 "name": "BaseBdev3", 00:20:36.589 "aliases": [ 00:20:36.589 "13bfa85b-9fb8-4873-8b56-b25496367657" 00:20:36.589 ], 00:20:36.589 "product_name": "Malloc disk", 00:20:36.589 "block_size": 512, 00:20:36.589 "num_blocks": 65536, 00:20:36.589 "uuid": "13bfa85b-9fb8-4873-8b56-b25496367657", 00:20:36.589 "assigned_rate_limits": { 00:20:36.589 "rw_ios_per_sec": 0, 00:20:36.589 "rw_mbytes_per_sec": 0, 00:20:36.589 "r_mbytes_per_sec": 0, 00:20:36.589 "w_mbytes_per_sec": 0 00:20:36.589 }, 00:20:36.589 "claimed": true, 00:20:36.589 "claim_type": "exclusive_write", 00:20:36.589 "zoned": false, 00:20:36.589 "supported_io_types": { 00:20:36.589 "read": true, 00:20:36.589 "write": true, 00:20:36.589 "unmap": true, 00:20:36.589 "flush": true, 00:20:36.589 "reset": true, 00:20:36.589 "nvme_admin": false, 00:20:36.589 "nvme_io": false, 00:20:36.589 "nvme_io_md": false, 00:20:36.589 "write_zeroes": true, 00:20:36.589 "zcopy": true, 00:20:36.589 "get_zone_info": false, 00:20:36.589 "zone_management": false, 00:20:36.589 "zone_append": false, 00:20:36.589 "compare": false, 00:20:36.589 "compare_and_write": false, 00:20:36.589 "abort": true, 00:20:36.589 "seek_hole": false, 00:20:36.589 "seek_data": false, 00:20:36.589 "copy": true, 00:20:36.589 "nvme_iov_md": false 00:20:36.589 }, 00:20:36.589 "memory_domains": [ 00:20:36.589 { 00:20:36.589 "dma_device_id": "system", 00:20:36.589 "dma_device_type": 1 00:20:36.589 }, 00:20:36.589 { 00:20:36.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.589 "dma_device_type": 2 00:20:36.589 } 00:20:36.589 ], 00:20:36.589 "driver_specific": {} 00:20:36.589 } 00:20:36.589 ] 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.589 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:36.848 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:36.848 "name": "Existed_Raid", 00:20:36.848 "uuid": "7945e5f1-d2a1-4188-98cb-761d95064564", 00:20:36.848 "strip_size_kb": 0, 00:20:36.848 "state": "online", 00:20:36.848 "raid_level": "raid1", 00:20:36.848 "superblock": false, 00:20:36.848 "num_base_bdevs": 3, 00:20:36.848 "num_base_bdevs_discovered": 3, 00:20:36.848 "num_base_bdevs_operational": 3, 00:20:36.848 "base_bdevs_list": [ 00:20:36.848 { 00:20:36.848 "name": "BaseBdev1", 00:20:36.848 "uuid": "eab2a14d-13ad-40ba-b712-016a5d674894", 00:20:36.848 "is_configured": true, 00:20:36.848 "data_offset": 0, 00:20:36.848 "data_size": 65536 00:20:36.848 }, 00:20:36.848 { 00:20:36.848 "name": "BaseBdev2", 00:20:36.848 "uuid": "d72e6b2b-667e-4e0d-acab-0bd9d52e0f8e", 00:20:36.848 "is_configured": true, 00:20:36.848 "data_offset": 0, 00:20:36.848 "data_size": 65536 00:20:36.848 }, 00:20:36.848 { 00:20:36.848 "name": "BaseBdev3", 00:20:36.848 "uuid": "13bfa85b-9fb8-4873-8b56-b25496367657", 00:20:36.848 "is_configured": true, 00:20:36.848 "data_offset": 0, 00:20:36.848 "data_size": 65536 00:20:36.848 } 00:20:36.848 ] 00:20:36.848 }' 00:20:36.848 11:32:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:36.848 11:32:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.415 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:37.415 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:37.415 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:37.415 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:37.415 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:37.415 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:37.415 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:37.415 11:32:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:37.674 [2024-07-13 11:32:12.309955] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:37.674 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:37.674 "name": "Existed_Raid", 00:20:37.674 "aliases": [ 00:20:37.674 "7945e5f1-d2a1-4188-98cb-761d95064564" 00:20:37.674 ], 00:20:37.674 "product_name": "Raid Volume", 00:20:37.674 "block_size": 512, 00:20:37.674 "num_blocks": 65536, 00:20:37.674 "uuid": "7945e5f1-d2a1-4188-98cb-761d95064564", 00:20:37.674 "assigned_rate_limits": { 00:20:37.674 "rw_ios_per_sec": 0, 00:20:37.674 "rw_mbytes_per_sec": 0, 00:20:37.674 "r_mbytes_per_sec": 0, 00:20:37.674 "w_mbytes_per_sec": 0 00:20:37.674 }, 00:20:37.674 "claimed": false, 00:20:37.674 "zoned": false, 00:20:37.674 "supported_io_types": { 00:20:37.674 "read": true, 00:20:37.674 "write": true, 00:20:37.674 "unmap": false, 00:20:37.674 "flush": false, 00:20:37.674 "reset": true, 00:20:37.674 "nvme_admin": false, 00:20:37.674 "nvme_io": false, 00:20:37.674 "nvme_io_md": false, 00:20:37.674 "write_zeroes": true, 00:20:37.674 "zcopy": false, 00:20:37.674 "get_zone_info": false, 00:20:37.674 "zone_management": false, 00:20:37.674 "zone_append": false, 00:20:37.674 "compare": false, 00:20:37.674 "compare_and_write": false, 00:20:37.674 "abort": false, 00:20:37.674 "seek_hole": false, 00:20:37.674 "seek_data": false, 00:20:37.674 "copy": false, 00:20:37.674 "nvme_iov_md": false 00:20:37.674 }, 00:20:37.674 "memory_domains": [ 00:20:37.674 { 00:20:37.674 "dma_device_id": "system", 00:20:37.674 "dma_device_type": 1 00:20:37.674 }, 00:20:37.674 { 00:20:37.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.674 "dma_device_type": 2 00:20:37.674 }, 00:20:37.674 { 00:20:37.674 "dma_device_id": "system", 00:20:37.674 "dma_device_type": 1 00:20:37.674 }, 00:20:37.674 { 00:20:37.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.674 "dma_device_type": 2 00:20:37.674 }, 00:20:37.674 { 00:20:37.674 "dma_device_id": "system", 00:20:37.674 "dma_device_type": 1 00:20:37.674 }, 00:20:37.674 { 00:20:37.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.674 "dma_device_type": 2 00:20:37.674 } 00:20:37.674 ], 00:20:37.674 "driver_specific": { 00:20:37.674 "raid": { 00:20:37.674 "uuid": "7945e5f1-d2a1-4188-98cb-761d95064564", 00:20:37.674 "strip_size_kb": 0, 00:20:37.674 "state": "online", 00:20:37.674 "raid_level": "raid1", 00:20:37.674 "superblock": false, 00:20:37.674 "num_base_bdevs": 3, 00:20:37.674 "num_base_bdevs_discovered": 3, 00:20:37.674 "num_base_bdevs_operational": 3, 00:20:37.674 "base_bdevs_list": [ 00:20:37.674 { 00:20:37.674 "name": "BaseBdev1", 00:20:37.674 "uuid": "eab2a14d-13ad-40ba-b712-016a5d674894", 00:20:37.674 "is_configured": true, 00:20:37.674 "data_offset": 0, 00:20:37.674 "data_size": 65536 00:20:37.674 }, 00:20:37.674 { 00:20:37.674 "name": "BaseBdev2", 00:20:37.674 "uuid": "d72e6b2b-667e-4e0d-acab-0bd9d52e0f8e", 00:20:37.674 "is_configured": true, 00:20:37.674 "data_offset": 0, 00:20:37.674 "data_size": 65536 00:20:37.674 }, 00:20:37.674 { 00:20:37.674 "name": "BaseBdev3", 00:20:37.674 "uuid": "13bfa85b-9fb8-4873-8b56-b25496367657", 00:20:37.674 "is_configured": true, 00:20:37.674 "data_offset": 0, 00:20:37.674 "data_size": 65536 00:20:37.674 } 00:20:37.674 ] 00:20:37.674 } 00:20:37.674 } 00:20:37.674 }' 00:20:37.674 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:37.674 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:37.674 BaseBdev2 00:20:37.674 BaseBdev3' 00:20:37.674 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:37.674 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:37.674 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:37.933 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:37.933 "name": "BaseBdev1", 00:20:37.933 "aliases": [ 00:20:37.933 "eab2a14d-13ad-40ba-b712-016a5d674894" 00:20:37.933 ], 00:20:37.933 "product_name": "Malloc disk", 00:20:37.933 "block_size": 512, 00:20:37.933 "num_blocks": 65536, 00:20:37.933 "uuid": "eab2a14d-13ad-40ba-b712-016a5d674894", 00:20:37.933 "assigned_rate_limits": { 00:20:37.933 "rw_ios_per_sec": 0, 00:20:37.933 "rw_mbytes_per_sec": 0, 00:20:37.933 "r_mbytes_per_sec": 0, 00:20:37.933 "w_mbytes_per_sec": 0 00:20:37.933 }, 00:20:37.933 "claimed": true, 00:20:37.933 "claim_type": "exclusive_write", 00:20:37.933 "zoned": false, 00:20:37.933 "supported_io_types": { 00:20:37.933 "read": true, 00:20:37.933 "write": true, 00:20:37.933 "unmap": true, 00:20:37.933 "flush": true, 00:20:37.933 "reset": true, 00:20:37.933 "nvme_admin": false, 00:20:37.933 "nvme_io": false, 00:20:37.933 "nvme_io_md": false, 00:20:37.933 "write_zeroes": true, 00:20:37.933 "zcopy": true, 00:20:37.933 "get_zone_info": false, 00:20:37.933 "zone_management": false, 00:20:37.933 "zone_append": false, 00:20:37.933 "compare": false, 00:20:37.933 "compare_and_write": false, 00:20:37.933 "abort": true, 00:20:37.933 "seek_hole": false, 00:20:37.933 "seek_data": false, 00:20:37.933 "copy": true, 00:20:37.933 "nvme_iov_md": false 00:20:37.933 }, 00:20:37.933 "memory_domains": [ 00:20:37.933 { 00:20:37.933 "dma_device_id": "system", 00:20:37.933 "dma_device_type": 1 00:20:37.933 }, 00:20:37.933 { 00:20:37.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.933 "dma_device_type": 2 00:20:37.933 } 00:20:37.933 ], 00:20:37.933 "driver_specific": {} 00:20:37.933 }' 00:20:37.933 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.933 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:38.192 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:38.192 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:38.192 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:38.192 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:38.192 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:38.192 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:38.192 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:38.192 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:38.449 11:32:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:38.449 11:32:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:38.449 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:38.449 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:38.449 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:38.707 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:38.707 "name": "BaseBdev2", 00:20:38.707 "aliases": [ 00:20:38.707 "d72e6b2b-667e-4e0d-acab-0bd9d52e0f8e" 00:20:38.707 ], 00:20:38.707 "product_name": "Malloc disk", 00:20:38.707 "block_size": 512, 00:20:38.707 "num_blocks": 65536, 00:20:38.707 "uuid": "d72e6b2b-667e-4e0d-acab-0bd9d52e0f8e", 00:20:38.707 "assigned_rate_limits": { 00:20:38.707 "rw_ios_per_sec": 0, 00:20:38.707 "rw_mbytes_per_sec": 0, 00:20:38.707 "r_mbytes_per_sec": 0, 00:20:38.707 "w_mbytes_per_sec": 0 00:20:38.707 }, 00:20:38.707 "claimed": true, 00:20:38.707 "claim_type": "exclusive_write", 00:20:38.707 "zoned": false, 00:20:38.707 "supported_io_types": { 00:20:38.707 "read": true, 00:20:38.707 "write": true, 00:20:38.707 "unmap": true, 00:20:38.707 "flush": true, 00:20:38.707 "reset": true, 00:20:38.707 "nvme_admin": false, 00:20:38.707 "nvme_io": false, 00:20:38.707 "nvme_io_md": false, 00:20:38.707 "write_zeroes": true, 00:20:38.707 "zcopy": true, 00:20:38.707 "get_zone_info": false, 00:20:38.707 "zone_management": false, 00:20:38.707 "zone_append": false, 00:20:38.707 "compare": false, 00:20:38.707 "compare_and_write": false, 00:20:38.707 "abort": true, 00:20:38.707 "seek_hole": false, 00:20:38.707 "seek_data": false, 00:20:38.707 "copy": true, 00:20:38.707 "nvme_iov_md": false 00:20:38.707 }, 00:20:38.707 "memory_domains": [ 00:20:38.707 { 00:20:38.707 "dma_device_id": "system", 00:20:38.707 "dma_device_type": 1 00:20:38.707 }, 00:20:38.707 { 00:20:38.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:38.707 "dma_device_type": 2 00:20:38.707 } 00:20:38.707 ], 00:20:38.707 "driver_specific": {} 00:20:38.707 }' 00:20:38.707 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:38.707 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:38.707 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:38.707 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:38.707 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:38.707 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:38.707 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:38.964 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:38.964 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:38.964 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:38.964 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:38.964 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:38.964 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:38.964 11:32:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:38.964 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:39.222 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:39.222 "name": "BaseBdev3", 00:20:39.222 "aliases": [ 00:20:39.222 "13bfa85b-9fb8-4873-8b56-b25496367657" 00:20:39.222 ], 00:20:39.222 "product_name": "Malloc disk", 00:20:39.222 "block_size": 512, 00:20:39.222 "num_blocks": 65536, 00:20:39.222 "uuid": "13bfa85b-9fb8-4873-8b56-b25496367657", 00:20:39.222 "assigned_rate_limits": { 00:20:39.222 "rw_ios_per_sec": 0, 00:20:39.222 "rw_mbytes_per_sec": 0, 00:20:39.222 "r_mbytes_per_sec": 0, 00:20:39.222 "w_mbytes_per_sec": 0 00:20:39.222 }, 00:20:39.222 "claimed": true, 00:20:39.222 "claim_type": "exclusive_write", 00:20:39.222 "zoned": false, 00:20:39.222 "supported_io_types": { 00:20:39.222 "read": true, 00:20:39.222 "write": true, 00:20:39.222 "unmap": true, 00:20:39.222 "flush": true, 00:20:39.222 "reset": true, 00:20:39.222 "nvme_admin": false, 00:20:39.222 "nvme_io": false, 00:20:39.222 "nvme_io_md": false, 00:20:39.222 "write_zeroes": true, 00:20:39.222 "zcopy": true, 00:20:39.222 "get_zone_info": false, 00:20:39.222 "zone_management": false, 00:20:39.222 "zone_append": false, 00:20:39.222 "compare": false, 00:20:39.222 "compare_and_write": false, 00:20:39.222 "abort": true, 00:20:39.222 "seek_hole": false, 00:20:39.222 "seek_data": false, 00:20:39.222 "copy": true, 00:20:39.222 "nvme_iov_md": false 00:20:39.222 }, 00:20:39.222 "memory_domains": [ 00:20:39.222 { 00:20:39.222 "dma_device_id": "system", 00:20:39.222 "dma_device_type": 1 00:20:39.222 }, 00:20:39.222 { 00:20:39.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:39.222 "dma_device_type": 2 00:20:39.222 } 00:20:39.222 ], 00:20:39.222 "driver_specific": {} 00:20:39.222 }' 00:20:39.222 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.222 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:39.480 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:39.480 11:32:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.480 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:39.480 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:39.480 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.480 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:39.480 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:39.480 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.738 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:39.738 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:39.738 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:39.995 [2024-07-13 11:32:14.578121] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:39.995 11:32:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:39.995 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:39.996 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:39.996 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:39.996 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:39.996 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:39.996 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.253 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.253 "name": "Existed_Raid", 00:20:40.253 "uuid": "7945e5f1-d2a1-4188-98cb-761d95064564", 00:20:40.253 "strip_size_kb": 0, 00:20:40.253 "state": "online", 00:20:40.253 "raid_level": "raid1", 00:20:40.253 "superblock": false, 00:20:40.253 "num_base_bdevs": 3, 00:20:40.253 "num_base_bdevs_discovered": 2, 00:20:40.253 "num_base_bdevs_operational": 2, 00:20:40.253 "base_bdevs_list": [ 00:20:40.253 { 00:20:40.253 "name": null, 00:20:40.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.253 "is_configured": false, 00:20:40.253 "data_offset": 0, 00:20:40.253 "data_size": 65536 00:20:40.253 }, 00:20:40.253 { 00:20:40.253 "name": "BaseBdev2", 00:20:40.253 "uuid": "d72e6b2b-667e-4e0d-acab-0bd9d52e0f8e", 00:20:40.253 "is_configured": true, 00:20:40.253 "data_offset": 0, 00:20:40.253 "data_size": 65536 00:20:40.253 }, 00:20:40.253 { 00:20:40.253 "name": "BaseBdev3", 00:20:40.253 "uuid": "13bfa85b-9fb8-4873-8b56-b25496367657", 00:20:40.253 "is_configured": true, 00:20:40.253 "data_offset": 0, 00:20:40.253 "data_size": 65536 00:20:40.253 } 00:20:40.253 ] 00:20:40.253 }' 00:20:40.253 11:32:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.253 11:32:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.820 11:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:40.820 11:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < 
num_base_bdevs )) 00:20:40.820 11:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.820 11:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:41.078 11:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:41.078 11:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:41.078 11:32:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:41.336 [2024-07-13 11:32:15.945522] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:41.336 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:41.336 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:41.336 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.336 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:41.595 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:41.595 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:41.595 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:41.854 [2024-07-13 11:32:16.444487] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:41.854 [2024-07-13 11:32:16.444600] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:41.854 [2024-07-13 11:32:16.508589] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.854 [2024-07-13 11:32:16.508639] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.854 [2024-07-13 11:32:16.508650] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:20:41.854 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:41.854 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:41.854 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.854 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:42.112 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:42.112 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:42.112 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:42.112 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:42.112 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:42.112 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:42.370 BaseBdev2 00:20:42.370 11:32:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:42.370 11:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:42.370 11:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:42.370 11:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:42.370 11:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:42.370 11:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:42.370 11:32:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:42.629 11:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:42.887 [ 00:20:42.887 { 00:20:42.887 "name": "BaseBdev2", 00:20:42.887 "aliases": [ 00:20:42.887 "07c95681-7896-490c-8ac1-0637431c5f26" 00:20:42.887 ], 00:20:42.887 "product_name": "Malloc disk", 00:20:42.887 "block_size": 512, 00:20:42.887 "num_blocks": 65536, 00:20:42.887 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:42.887 "assigned_rate_limits": { 00:20:42.887 "rw_ios_per_sec": 0, 00:20:42.887 "rw_mbytes_per_sec": 0, 00:20:42.887 "r_mbytes_per_sec": 0, 00:20:42.887 "w_mbytes_per_sec": 0 00:20:42.887 }, 00:20:42.887 "claimed": false, 00:20:42.887 "zoned": false, 00:20:42.887 "supported_io_types": { 00:20:42.887 "read": true, 00:20:42.887 "write": true, 00:20:42.887 "unmap": true, 00:20:42.887 "flush": true, 00:20:42.887 "reset": true, 00:20:42.887 "nvme_admin": false, 00:20:42.887 "nvme_io": false, 00:20:42.887 "nvme_io_md": false, 00:20:42.887 "write_zeroes": true, 00:20:42.887 "zcopy": true, 00:20:42.887 "get_zone_info": false, 00:20:42.887 "zone_management": false, 00:20:42.887 "zone_append": false, 00:20:42.887 "compare": false, 00:20:42.887 "compare_and_write": false, 00:20:42.887 "abort": true, 00:20:42.887 "seek_hole": false, 00:20:42.887 "seek_data": false, 00:20:42.887 "copy": true, 00:20:42.887 "nvme_iov_md": false 00:20:42.887 }, 00:20:42.888 "memory_domains": [ 00:20:42.888 { 00:20:42.888 "dma_device_id": "system", 00:20:42.888 "dma_device_type": 1 00:20:42.888 }, 00:20:42.888 { 00:20:42.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.888 "dma_device_type": 2 00:20:42.888 } 00:20:42.888 ], 00:20:42.888 "driver_specific": {} 00:20:42.888 } 00:20:42.888 ] 00:20:42.888 11:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:42.888 11:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:42.888 11:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:42.888 11:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:42.888 BaseBdev3 00:20:43.146 11:32:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:43.146 11:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev3 00:20:43.146 11:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:43.146 11:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:43.146 11:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:43.146 11:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:43.146 11:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:43.146 11:32:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:43.404 [ 00:20:43.404 { 00:20:43.404 "name": "BaseBdev3", 00:20:43.404 "aliases": [ 00:20:43.404 "f8ed65b0-a31c-4842-a547-866243787ffd" 00:20:43.404 ], 00:20:43.404 "product_name": "Malloc disk", 00:20:43.404 "block_size": 512, 00:20:43.404 "num_blocks": 65536, 00:20:43.404 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:43.404 "assigned_rate_limits": { 00:20:43.404 "rw_ios_per_sec": 0, 00:20:43.404 "rw_mbytes_per_sec": 0, 00:20:43.404 "r_mbytes_per_sec": 0, 00:20:43.404 "w_mbytes_per_sec": 0 00:20:43.404 }, 00:20:43.404 "claimed": false, 00:20:43.404 "zoned": false, 00:20:43.404 "supported_io_types": { 00:20:43.404 "read": true, 00:20:43.404 "write": true, 00:20:43.404 "unmap": true, 00:20:43.404 "flush": true, 00:20:43.404 "reset": true, 00:20:43.404 "nvme_admin": false, 00:20:43.404 "nvme_io": false, 00:20:43.404 "nvme_io_md": false, 00:20:43.404 "write_zeroes": true, 00:20:43.404 "zcopy": true, 00:20:43.404 "get_zone_info": false, 00:20:43.404 "zone_management": false, 00:20:43.404 "zone_append": false, 00:20:43.404 "compare": false, 00:20:43.404 "compare_and_write": false, 00:20:43.404 "abort": true, 00:20:43.404 "seek_hole": false, 00:20:43.404 "seek_data": false, 00:20:43.404 "copy": true, 00:20:43.404 "nvme_iov_md": false 00:20:43.404 }, 00:20:43.404 "memory_domains": [ 00:20:43.404 { 00:20:43.404 "dma_device_id": "system", 00:20:43.404 "dma_device_type": 1 00:20:43.404 }, 00:20:43.404 { 00:20:43.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.404 "dma_device_type": 2 00:20:43.404 } 00:20:43.404 ], 00:20:43.404 "driver_specific": {} 00:20:43.404 } 00:20:43.404 ] 00:20:43.404 11:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:43.404 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:43.404 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:43.404 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:43.663 [2024-07-13 11:32:18.288681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:43.663 [2024-07-13 11:32:18.288753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:43.663 [2024-07-13 11:32:18.288775] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.663 [2024-07-13 11:32:18.290757] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:43.663 11:32:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.663 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.920 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.920 "name": "Existed_Raid", 00:20:43.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.920 "strip_size_kb": 0, 00:20:43.920 "state": "configuring", 00:20:43.920 "raid_level": "raid1", 00:20:43.920 "superblock": false, 00:20:43.920 "num_base_bdevs": 3, 00:20:43.920 "num_base_bdevs_discovered": 2, 00:20:43.920 "num_base_bdevs_operational": 3, 00:20:43.920 "base_bdevs_list": [ 00:20:43.920 { 00:20:43.920 "name": "BaseBdev1", 00:20:43.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.920 "is_configured": false, 00:20:43.920 "data_offset": 0, 00:20:43.920 "data_size": 0 00:20:43.920 }, 00:20:43.920 { 00:20:43.920 "name": "BaseBdev2", 00:20:43.920 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:43.920 "is_configured": true, 00:20:43.920 "data_offset": 0, 00:20:43.920 "data_size": 65536 00:20:43.920 }, 00:20:43.920 { 00:20:43.920 "name": "BaseBdev3", 00:20:43.920 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:43.920 "is_configured": true, 00:20:43.920 "data_offset": 0, 00:20:43.920 "data_size": 65536 00:20:43.920 } 00:20:43.920 ] 00:20:43.920 }' 00:20:43.920 11:32:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.920 11:32:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.486 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:44.744 [2024-07-13 11:32:19.408884] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
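The trace above shows the moving parts of this state check: two malloc base bdevs are created, a raid1 bdev is assembled over three named base bdevs while BaseBdev1 is still missing, and the resulting state is read back over the RPC socket. A minimal standalone sketch of that flow is shown below; it is an illustration rather than the test script itself, it assumes an SPDK application is already running with its RPC socket at /var/tmp/spdk-raid.sock, and it uses only the RPC calls and jq filters visible in the trace.
# Sketch of the "configuring" state check (assumes a running SPDK app serving RPCs on /var/tmp/spdk-raid.sock)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2   # 65536 blocks of 512 bytes, as in the trace
$rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev3
# BaseBdev1 does not exist yet, so the raid bdev is created but stays in the "configuring" state
$rpc -s $sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect: configuring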
00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.744 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.002 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.002 "name": "Existed_Raid", 00:20:45.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.002 "strip_size_kb": 0, 00:20:45.002 "state": "configuring", 00:20:45.002 "raid_level": "raid1", 00:20:45.002 "superblock": false, 00:20:45.002 "num_base_bdevs": 3, 00:20:45.002 "num_base_bdevs_discovered": 1, 00:20:45.002 "num_base_bdevs_operational": 3, 00:20:45.002 "base_bdevs_list": [ 00:20:45.002 { 00:20:45.002 "name": "BaseBdev1", 00:20:45.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.002 "is_configured": false, 00:20:45.002 "data_offset": 0, 00:20:45.002 "data_size": 0 00:20:45.002 }, 00:20:45.002 { 00:20:45.002 "name": null, 00:20:45.002 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:45.002 "is_configured": false, 00:20:45.002 "data_offset": 0, 00:20:45.002 "data_size": 65536 00:20:45.002 }, 00:20:45.002 { 00:20:45.002 "name": "BaseBdev3", 00:20:45.002 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:45.002 "is_configured": true, 00:20:45.002 "data_offset": 0, 00:20:45.002 "data_size": 65536 00:20:45.002 } 00:20:45.002 ] 00:20:45.002 }' 00:20:45.002 11:32:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.002 11:32:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.938 11:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.938 11:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:45.938 11:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:45.938 11:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:46.197 [2024-07-13 11:32:20.870009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:46.197 BaseBdev1 00:20:46.197 11:32:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:46.197 11:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:46.197 11:32:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:46.197 11:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:46.197 11:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:46.197 11:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:46.197 11:32:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:46.456 11:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:46.714 [ 00:20:46.714 { 00:20:46.714 "name": "BaseBdev1", 00:20:46.714 "aliases": [ 00:20:46.714 "46469af5-c619-4d10-86d4-8ff90552a6b6" 00:20:46.714 ], 00:20:46.714 "product_name": "Malloc disk", 00:20:46.714 "block_size": 512, 00:20:46.714 "num_blocks": 65536, 00:20:46.714 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:46.714 "assigned_rate_limits": { 00:20:46.714 "rw_ios_per_sec": 0, 00:20:46.714 "rw_mbytes_per_sec": 0, 00:20:46.714 "r_mbytes_per_sec": 0, 00:20:46.714 "w_mbytes_per_sec": 0 00:20:46.714 }, 00:20:46.714 "claimed": true, 00:20:46.714 "claim_type": "exclusive_write", 00:20:46.714 "zoned": false, 00:20:46.715 "supported_io_types": { 00:20:46.715 "read": true, 00:20:46.715 "write": true, 00:20:46.715 "unmap": true, 00:20:46.715 "flush": true, 00:20:46.715 "reset": true, 00:20:46.715 "nvme_admin": false, 00:20:46.715 "nvme_io": false, 00:20:46.715 "nvme_io_md": false, 00:20:46.715 "write_zeroes": true, 00:20:46.715 "zcopy": true, 00:20:46.715 "get_zone_info": false, 00:20:46.715 "zone_management": false, 00:20:46.715 "zone_append": false, 00:20:46.715 "compare": false, 00:20:46.715 "compare_and_write": false, 00:20:46.715 "abort": true, 00:20:46.715 "seek_hole": false, 00:20:46.715 "seek_data": false, 00:20:46.715 "copy": true, 00:20:46.715 "nvme_iov_md": false 00:20:46.715 }, 00:20:46.715 "memory_domains": [ 00:20:46.715 { 00:20:46.715 "dma_device_id": "system", 00:20:46.715 "dma_device_type": 1 00:20:46.715 }, 00:20:46.715 { 00:20:46.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.715 "dma_device_type": 2 00:20:46.715 } 00:20:46.715 ], 00:20:46.715 "driver_specific": {} 00:20:46.715 } 00:20:46.715 ] 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.715 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.974 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.974 "name": "Existed_Raid", 00:20:46.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:46.974 "strip_size_kb": 0, 00:20:46.974 "state": "configuring", 00:20:46.974 "raid_level": "raid1", 00:20:46.974 "superblock": false, 00:20:46.974 "num_base_bdevs": 3, 00:20:46.974 "num_base_bdevs_discovered": 2, 00:20:46.974 "num_base_bdevs_operational": 3, 00:20:46.974 "base_bdevs_list": [ 00:20:46.974 { 00:20:46.974 "name": "BaseBdev1", 00:20:46.974 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:46.974 "is_configured": true, 00:20:46.974 "data_offset": 0, 00:20:46.974 "data_size": 65536 00:20:46.974 }, 00:20:46.974 { 00:20:46.974 "name": null, 00:20:46.974 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:46.974 "is_configured": false, 00:20:46.974 "data_offset": 0, 00:20:46.974 "data_size": 65536 00:20:46.974 }, 00:20:46.974 { 00:20:46.974 "name": "BaseBdev3", 00:20:46.974 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:46.974 "is_configured": true, 00:20:46.974 "data_offset": 0, 00:20:46.974 "data_size": 65536 00:20:46.974 } 00:20:46.974 ] 00:20:46.974 }' 00:20:46.974 11:32:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.974 11:32:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.541 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.541 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:47.799 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:47.799 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:48.058 [2024-07-13 11:32:22.606459] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:48.058 11:32:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:48.058 "name": "Existed_Raid", 00:20:48.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:48.058 "strip_size_kb": 0, 00:20:48.058 "state": "configuring", 00:20:48.058 "raid_level": "raid1", 00:20:48.058 "superblock": false, 00:20:48.058 "num_base_bdevs": 3, 00:20:48.058 "num_base_bdevs_discovered": 1, 00:20:48.058 "num_base_bdevs_operational": 3, 00:20:48.058 "base_bdevs_list": [ 00:20:48.058 { 00:20:48.058 "name": "BaseBdev1", 00:20:48.058 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:48.058 "is_configured": true, 00:20:48.058 "data_offset": 0, 00:20:48.058 "data_size": 65536 00:20:48.058 }, 00:20:48.058 { 00:20:48.058 "name": null, 00:20:48.058 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:48.058 "is_configured": false, 00:20:48.058 "data_offset": 0, 00:20:48.058 "data_size": 65536 00:20:48.058 }, 00:20:48.058 { 00:20:48.058 "name": null, 00:20:48.058 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:48.058 "is_configured": false, 00:20:48.058 "data_offset": 0, 00:20:48.058 "data_size": 65536 00:20:48.058 } 00:20:48.058 ] 00:20:48.058 }' 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:48.058 11:32:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.002 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:49.002 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.002 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:49.002 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:49.259 [2024-07-13 11:32:23.918598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.259 11:32:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.516 11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.516 "name": "Existed_Raid", 00:20:49.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.516 "strip_size_kb": 0, 00:20:49.516 "state": "configuring", 00:20:49.516 "raid_level": "raid1", 00:20:49.516 "superblock": false, 00:20:49.516 "num_base_bdevs": 3, 00:20:49.516 "num_base_bdevs_discovered": 2, 00:20:49.516 "num_base_bdevs_operational": 3, 00:20:49.516 "base_bdevs_list": [ 00:20:49.516 { 00:20:49.516 "name": "BaseBdev1", 00:20:49.516 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:49.516 "is_configured": true, 00:20:49.516 "data_offset": 0, 00:20:49.516 "data_size": 65536 00:20:49.516 }, 00:20:49.516 { 00:20:49.516 "name": null, 00:20:49.516 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:49.516 "is_configured": false, 00:20:49.516 "data_offset": 0, 00:20:49.516 "data_size": 65536 00:20:49.516 }, 00:20:49.516 { 00:20:49.516 "name": "BaseBdev3", 00:20:49.516 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:49.516 "is_configured": true, 00:20:49.516 "data_offset": 0, 00:20:49.516 "data_size": 65536 00:20:49.516 } 00:20:49.516 ] 00:20:49.516 }' 00:20:49.516 11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.516 11:32:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.451 11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.451 11:32:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:50.451 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:50.451 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:50.709 [2024-07-13 11:32:25.330899] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.709 11:32:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.709 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:50.968 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.968 "name": "Existed_Raid", 00:20:50.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:50.968 "strip_size_kb": 0, 00:20:50.968 "state": "configuring", 00:20:50.968 "raid_level": "raid1", 00:20:50.968 "superblock": false, 00:20:50.968 "num_base_bdevs": 3, 00:20:50.968 "num_base_bdevs_discovered": 1, 00:20:50.968 "num_base_bdevs_operational": 3, 00:20:50.968 "base_bdevs_list": [ 00:20:50.968 { 00:20:50.968 "name": null, 00:20:50.968 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:50.968 "is_configured": false, 00:20:50.968 "data_offset": 0, 00:20:50.968 "data_size": 65536 00:20:50.968 }, 00:20:50.968 { 00:20:50.968 "name": null, 00:20:50.968 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:50.968 "is_configured": false, 00:20:50.968 "data_offset": 0, 00:20:50.968 "data_size": 65536 00:20:50.968 }, 00:20:50.968 { 00:20:50.968 "name": "BaseBdev3", 00:20:50.968 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:50.968 "is_configured": true, 00:20:50.968 "data_offset": 0, 00:20:50.968 "data_size": 65536 00:20:50.968 } 00:20:50.968 ] 00:20:50.968 }' 00:20:50.968 11:32:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.968 11:32:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.535 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.535 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:51.793 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:51.793 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:52.054 [2024-07-13 11:32:26.697461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.054 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:52.325 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.325 "name": "Existed_Raid", 00:20:52.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.325 "strip_size_kb": 0, 00:20:52.325 "state": "configuring", 00:20:52.325 "raid_level": "raid1", 00:20:52.325 "superblock": false, 00:20:52.325 "num_base_bdevs": 3, 00:20:52.325 "num_base_bdevs_discovered": 2, 00:20:52.325 "num_base_bdevs_operational": 3, 00:20:52.325 "base_bdevs_list": [ 00:20:52.325 { 00:20:52.325 "name": null, 00:20:52.325 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:52.325 "is_configured": false, 00:20:52.325 "data_offset": 0, 00:20:52.325 "data_size": 65536 00:20:52.325 }, 00:20:52.325 { 00:20:52.325 "name": "BaseBdev2", 00:20:52.325 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:52.325 "is_configured": true, 00:20:52.325 "data_offset": 0, 00:20:52.325 "data_size": 65536 00:20:52.325 }, 00:20:52.325 { 00:20:52.325 "name": "BaseBdev3", 00:20:52.325 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:52.326 "is_configured": true, 00:20:52.326 "data_offset": 0, 00:20:52.326 "data_size": 65536 00:20:52.326 } 00:20:52.326 ] 00:20:52.326 }' 00:20:52.326 11:32:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.326 11:32:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.276 11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.276 11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:53.276 11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:53.276 11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.276 11:32:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:53.534 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 46469af5-c619-4d10-86d4-8ff90552a6b6 00:20:53.792 [2024-07-13 11:32:28.373866] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:53.792 [2024-07-13 11:32:28.373918] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:20:53.792 [2024-07-13 11:32:28.373927] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:53.792 [2024-07-13 11:32:28.374053] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005c70 00:20:53.792 [2024-07-13 11:32:28.374379] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:20:53.792 [2024-07-13 11:32:28.374402] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:20:53.792 NewBaseBdev 00:20:53.792 [2024-07-13 11:32:28.374620] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.792 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:53.792 11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:53.792 11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:53.792 11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:53.792 11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:53.793 11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:53.793 11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.050 11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:54.050 [ 00:20:54.050 { 00:20:54.050 "name": "NewBaseBdev", 00:20:54.050 "aliases": [ 00:20:54.050 "46469af5-c619-4d10-86d4-8ff90552a6b6" 00:20:54.050 ], 00:20:54.050 "product_name": "Malloc disk", 00:20:54.050 "block_size": 512, 00:20:54.050 "num_blocks": 65536, 00:20:54.050 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:54.050 "assigned_rate_limits": { 00:20:54.050 "rw_ios_per_sec": 0, 00:20:54.050 "rw_mbytes_per_sec": 0, 00:20:54.050 "r_mbytes_per_sec": 0, 00:20:54.050 "w_mbytes_per_sec": 0 00:20:54.050 }, 00:20:54.050 "claimed": true, 00:20:54.050 "claim_type": "exclusive_write", 00:20:54.050 "zoned": false, 00:20:54.050 "supported_io_types": { 00:20:54.050 "read": true, 00:20:54.050 "write": true, 00:20:54.050 "unmap": true, 00:20:54.050 "flush": true, 00:20:54.050 "reset": true, 00:20:54.050 "nvme_admin": false, 00:20:54.050 "nvme_io": false, 00:20:54.050 "nvme_io_md": false, 00:20:54.050 "write_zeroes": true, 00:20:54.050 "zcopy": true, 00:20:54.050 "get_zone_info": false, 00:20:54.050 "zone_management": false, 00:20:54.050 "zone_append": false, 00:20:54.050 "compare": false, 00:20:54.050 "compare_and_write": false, 00:20:54.050 "abort": true, 00:20:54.050 "seek_hole": false, 00:20:54.050 "seek_data": false, 00:20:54.050 "copy": true, 00:20:54.050 "nvme_iov_md": false 00:20:54.050 }, 00:20:54.050 "memory_domains": [ 00:20:54.050 { 00:20:54.050 "dma_device_id": "system", 00:20:54.050 "dma_device_type": 1 00:20:54.050 }, 00:20:54.050 { 00:20:54.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.050 "dma_device_type": 2 00:20:54.050 } 00:20:54.050 ], 00:20:54.050 "driver_specific": {} 00:20:54.050 } 00:20:54.050 ] 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:54.309 
11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:54.309 "name": "Existed_Raid", 00:20:54.309 "uuid": "64bc99f4-8040-4d46-9545-3274af6da800", 00:20:54.309 "strip_size_kb": 0, 00:20:54.309 "state": "online", 00:20:54.309 "raid_level": "raid1", 00:20:54.309 "superblock": false, 00:20:54.309 "num_base_bdevs": 3, 00:20:54.309 "num_base_bdevs_discovered": 3, 00:20:54.309 "num_base_bdevs_operational": 3, 00:20:54.309 "base_bdevs_list": [ 00:20:54.309 { 00:20:54.309 "name": "NewBaseBdev", 00:20:54.309 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:54.309 "is_configured": true, 00:20:54.309 "data_offset": 0, 00:20:54.309 "data_size": 65536 00:20:54.309 }, 00:20:54.309 { 00:20:54.309 "name": "BaseBdev2", 00:20:54.309 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:54.309 "is_configured": true, 00:20:54.309 "data_offset": 0, 00:20:54.309 "data_size": 65536 00:20:54.309 }, 00:20:54.309 { 00:20:54.309 "name": "BaseBdev3", 00:20:54.309 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:54.309 "is_configured": true, 00:20:54.309 "data_offset": 0, 00:20:54.309 "data_size": 65536 00:20:54.309 } 00:20:54.309 ] 00:20:54.309 }' 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:54.309 11:32:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.873 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:54.873 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:54.873 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:54.873 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:54.873 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:54.873 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:54.873 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:54.873 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
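The property verification that follows fetches the raid volume descriptor once, pulls the configured base bdev names out of driver_specific.raid.base_bdevs_list, and checks that block_size, md_size, md_interleave and dif_type of each base bdev match the volume. A condensed sketch of that loop, assuming the same RPC socket as above, could look like this:
# Sketch of the per-base-bdev property comparison (assumption: SPDK RPC socket at /var/tmp/spdk-raid.sock)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
raid_info=$($rpc -s $sock bdev_get_bdevs -b Existed_Raid | jq '.[]')
names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")
for name in $names; do
    base_info=$($rpc -s $sock bdev_get_bdevs -b "$name" | jq '.[]')
    for field in .block_size .md_size .md_interleave .dif_type; do
        # each configured base bdev must report the same value as the raid volume for these fields
        [[ "$(jq "$field" <<< "$base_info")" == "$(jq "$field" <<< "$raid_info")" ]] || echo "mismatch: $name $field"
    done
done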
00:20:55.131 [2024-07-13 11:32:29.790573] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:55.131 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:55.131 "name": "Existed_Raid", 00:20:55.131 "aliases": [ 00:20:55.131 "64bc99f4-8040-4d46-9545-3274af6da800" 00:20:55.131 ], 00:20:55.131 "product_name": "Raid Volume", 00:20:55.131 "block_size": 512, 00:20:55.131 "num_blocks": 65536, 00:20:55.131 "uuid": "64bc99f4-8040-4d46-9545-3274af6da800", 00:20:55.131 "assigned_rate_limits": { 00:20:55.131 "rw_ios_per_sec": 0, 00:20:55.131 "rw_mbytes_per_sec": 0, 00:20:55.131 "r_mbytes_per_sec": 0, 00:20:55.131 "w_mbytes_per_sec": 0 00:20:55.131 }, 00:20:55.131 "claimed": false, 00:20:55.131 "zoned": false, 00:20:55.131 "supported_io_types": { 00:20:55.131 "read": true, 00:20:55.131 "write": true, 00:20:55.131 "unmap": false, 00:20:55.131 "flush": false, 00:20:55.131 "reset": true, 00:20:55.131 "nvme_admin": false, 00:20:55.131 "nvme_io": false, 00:20:55.131 "nvme_io_md": false, 00:20:55.131 "write_zeroes": true, 00:20:55.131 "zcopy": false, 00:20:55.131 "get_zone_info": false, 00:20:55.131 "zone_management": false, 00:20:55.131 "zone_append": false, 00:20:55.131 "compare": false, 00:20:55.131 "compare_and_write": false, 00:20:55.131 "abort": false, 00:20:55.131 "seek_hole": false, 00:20:55.131 "seek_data": false, 00:20:55.131 "copy": false, 00:20:55.131 "nvme_iov_md": false 00:20:55.131 }, 00:20:55.131 "memory_domains": [ 00:20:55.131 { 00:20:55.131 "dma_device_id": "system", 00:20:55.131 "dma_device_type": 1 00:20:55.131 }, 00:20:55.131 { 00:20:55.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.131 "dma_device_type": 2 00:20:55.131 }, 00:20:55.131 { 00:20:55.131 "dma_device_id": "system", 00:20:55.131 "dma_device_type": 1 00:20:55.131 }, 00:20:55.131 { 00:20:55.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.131 "dma_device_type": 2 00:20:55.131 }, 00:20:55.131 { 00:20:55.131 "dma_device_id": "system", 00:20:55.131 "dma_device_type": 1 00:20:55.131 }, 00:20:55.131 { 00:20:55.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.131 "dma_device_type": 2 00:20:55.131 } 00:20:55.131 ], 00:20:55.131 "driver_specific": { 00:20:55.131 "raid": { 00:20:55.131 "uuid": "64bc99f4-8040-4d46-9545-3274af6da800", 00:20:55.131 "strip_size_kb": 0, 00:20:55.131 "state": "online", 00:20:55.131 "raid_level": "raid1", 00:20:55.131 "superblock": false, 00:20:55.131 "num_base_bdevs": 3, 00:20:55.131 "num_base_bdevs_discovered": 3, 00:20:55.131 "num_base_bdevs_operational": 3, 00:20:55.131 "base_bdevs_list": [ 00:20:55.131 { 00:20:55.131 "name": "NewBaseBdev", 00:20:55.131 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:55.131 "is_configured": true, 00:20:55.131 "data_offset": 0, 00:20:55.131 "data_size": 65536 00:20:55.131 }, 00:20:55.131 { 00:20:55.131 "name": "BaseBdev2", 00:20:55.131 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:55.131 "is_configured": true, 00:20:55.131 "data_offset": 0, 00:20:55.131 "data_size": 65536 00:20:55.131 }, 00:20:55.131 { 00:20:55.131 "name": "BaseBdev3", 00:20:55.131 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:55.131 "is_configured": true, 00:20:55.131 "data_offset": 0, 00:20:55.131 "data_size": 65536 00:20:55.131 } 00:20:55.131 ] 00:20:55.131 } 00:20:55.131 } 00:20:55.131 }' 00:20:55.131 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:55.131 11:32:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:55.131 BaseBdev2 00:20:55.131 BaseBdev3' 00:20:55.131 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.131 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:55.131 11:32:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:55.389 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:55.389 "name": "NewBaseBdev", 00:20:55.389 "aliases": [ 00:20:55.389 "46469af5-c619-4d10-86d4-8ff90552a6b6" 00:20:55.389 ], 00:20:55.389 "product_name": "Malloc disk", 00:20:55.389 "block_size": 512, 00:20:55.389 "num_blocks": 65536, 00:20:55.389 "uuid": "46469af5-c619-4d10-86d4-8ff90552a6b6", 00:20:55.389 "assigned_rate_limits": { 00:20:55.389 "rw_ios_per_sec": 0, 00:20:55.389 "rw_mbytes_per_sec": 0, 00:20:55.389 "r_mbytes_per_sec": 0, 00:20:55.389 "w_mbytes_per_sec": 0 00:20:55.389 }, 00:20:55.389 "claimed": true, 00:20:55.389 "claim_type": "exclusive_write", 00:20:55.389 "zoned": false, 00:20:55.389 "supported_io_types": { 00:20:55.389 "read": true, 00:20:55.389 "write": true, 00:20:55.389 "unmap": true, 00:20:55.389 "flush": true, 00:20:55.389 "reset": true, 00:20:55.389 "nvme_admin": false, 00:20:55.389 "nvme_io": false, 00:20:55.389 "nvme_io_md": false, 00:20:55.389 "write_zeroes": true, 00:20:55.389 "zcopy": true, 00:20:55.389 "get_zone_info": false, 00:20:55.389 "zone_management": false, 00:20:55.389 "zone_append": false, 00:20:55.389 "compare": false, 00:20:55.389 "compare_and_write": false, 00:20:55.389 "abort": true, 00:20:55.389 "seek_hole": false, 00:20:55.389 "seek_data": false, 00:20:55.389 "copy": true, 00:20:55.389 "nvme_iov_md": false 00:20:55.389 }, 00:20:55.389 "memory_domains": [ 00:20:55.389 { 00:20:55.389 "dma_device_id": "system", 00:20:55.389 "dma_device_type": 1 00:20:55.389 }, 00:20:55.389 { 00:20:55.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.389 "dma_device_type": 2 00:20:55.389 } 00:20:55.389 ], 00:20:55.389 "driver_specific": {} 00:20:55.389 }' 00:20:55.389 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.647 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.647 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:55.647 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.647 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.647 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:55.647 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.647 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.904 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:55.904 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.904 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.904 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:55.904 11:32:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.904 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:55.904 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.162 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.162 "name": "BaseBdev2", 00:20:56.162 "aliases": [ 00:20:56.162 "07c95681-7896-490c-8ac1-0637431c5f26" 00:20:56.162 ], 00:20:56.162 "product_name": "Malloc disk", 00:20:56.162 "block_size": 512, 00:20:56.162 "num_blocks": 65536, 00:20:56.162 "uuid": "07c95681-7896-490c-8ac1-0637431c5f26", 00:20:56.162 "assigned_rate_limits": { 00:20:56.162 "rw_ios_per_sec": 0, 00:20:56.162 "rw_mbytes_per_sec": 0, 00:20:56.162 "r_mbytes_per_sec": 0, 00:20:56.162 "w_mbytes_per_sec": 0 00:20:56.162 }, 00:20:56.162 "claimed": true, 00:20:56.162 "claim_type": "exclusive_write", 00:20:56.162 "zoned": false, 00:20:56.162 "supported_io_types": { 00:20:56.162 "read": true, 00:20:56.162 "write": true, 00:20:56.162 "unmap": true, 00:20:56.162 "flush": true, 00:20:56.162 "reset": true, 00:20:56.162 "nvme_admin": false, 00:20:56.162 "nvme_io": false, 00:20:56.162 "nvme_io_md": false, 00:20:56.162 "write_zeroes": true, 00:20:56.162 "zcopy": true, 00:20:56.162 "get_zone_info": false, 00:20:56.162 "zone_management": false, 00:20:56.162 "zone_append": false, 00:20:56.162 "compare": false, 00:20:56.162 "compare_and_write": false, 00:20:56.162 "abort": true, 00:20:56.162 "seek_hole": false, 00:20:56.162 "seek_data": false, 00:20:56.162 "copy": true, 00:20:56.162 "nvme_iov_md": false 00:20:56.162 }, 00:20:56.162 "memory_domains": [ 00:20:56.162 { 00:20:56.162 "dma_device_id": "system", 00:20:56.162 "dma_device_type": 1 00:20:56.162 }, 00:20:56.162 { 00:20:56.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.162 "dma_device_type": 2 00:20:56.162 } 00:20:56.162 ], 00:20:56.162 "driver_specific": {} 00:20:56.162 }' 00:20:56.162 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.162 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.162 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.162 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.419 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.419 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.419 11:32:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.419 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.419 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.419 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.419 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.676 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.676 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:56.676 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:56.676 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:56.933 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:56.933 "name": "BaseBdev3", 00:20:56.933 "aliases": [ 00:20:56.933 "f8ed65b0-a31c-4842-a547-866243787ffd" 00:20:56.933 ], 00:20:56.933 "product_name": "Malloc disk", 00:20:56.933 "block_size": 512, 00:20:56.933 "num_blocks": 65536, 00:20:56.933 "uuid": "f8ed65b0-a31c-4842-a547-866243787ffd", 00:20:56.933 "assigned_rate_limits": { 00:20:56.933 "rw_ios_per_sec": 0, 00:20:56.933 "rw_mbytes_per_sec": 0, 00:20:56.933 "r_mbytes_per_sec": 0, 00:20:56.933 "w_mbytes_per_sec": 0 00:20:56.933 }, 00:20:56.933 "claimed": true, 00:20:56.933 "claim_type": "exclusive_write", 00:20:56.933 "zoned": false, 00:20:56.933 "supported_io_types": { 00:20:56.933 "read": true, 00:20:56.933 "write": true, 00:20:56.933 "unmap": true, 00:20:56.933 "flush": true, 00:20:56.933 "reset": true, 00:20:56.933 "nvme_admin": false, 00:20:56.933 "nvme_io": false, 00:20:56.933 "nvme_io_md": false, 00:20:56.933 "write_zeroes": true, 00:20:56.933 "zcopy": true, 00:20:56.933 "get_zone_info": false, 00:20:56.933 "zone_management": false, 00:20:56.933 "zone_append": false, 00:20:56.933 "compare": false, 00:20:56.933 "compare_and_write": false, 00:20:56.933 "abort": true, 00:20:56.933 "seek_hole": false, 00:20:56.933 "seek_data": false, 00:20:56.933 "copy": true, 00:20:56.933 "nvme_iov_md": false 00:20:56.933 }, 00:20:56.933 "memory_domains": [ 00:20:56.933 { 00:20:56.933 "dma_device_id": "system", 00:20:56.933 "dma_device_type": 1 00:20:56.933 }, 00:20:56.933 { 00:20:56.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:56.933 "dma_device_type": 2 00:20:56.933 } 00:20:56.933 ], 00:20:56.933 "driver_specific": {} 00:20:56.933 }' 00:20:56.933 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.933 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.933 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.933 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.933 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:57.190 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:57.190 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.190 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:57.190 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:57.190 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.190 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:57.448 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:57.448 11:32:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:57.705 [2024-07-13 11:32:32.202690] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:57.705 [2024-07-13 11:32:32.202720] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:20:57.705 [2024-07-13 11:32:32.202787] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:57.705 [2024-07-13 11:32:32.203074] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:57.705 [2024-07-13 11:32:32.203094] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:20:57.705 11:32:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 131210 00:20:57.705 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 131210 ']' 00:20:57.705 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 131210 00:20:57.705 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:20:57.705 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.705 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 131210 00:20:57.705 killing process with pid 131210 00:20:57.705 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:57.705 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:57.706 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 131210' 00:20:57.706 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 131210 00:20:57.706 11:32:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 131210 00:20:57.706 [2024-07-13 11:32:32.240225] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:57.706 [2024-07-13 11:32:32.427562] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.637 ************************************ 00:20:58.637 END TEST raid_state_function_test 00:20:58.637 ************************************ 00:20:58.637 11:32:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:58.637 00:20:58.637 real 0m29.809s 00:20:58.637 user 0m56.482s 00:20:58.637 sys 0m2.912s 00:20:58.637 11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:58.637 11:32:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.896 11:32:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:58.896 11:32:33 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:20:58.896 11:32:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:58.896 11:32:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.896 11:32:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.896 ************************************ 00:20:58.896 START TEST raid_state_function_test_sb 00:20:58.896 ************************************ 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:58.896 11:32:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=132231 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132231' 00:20:58.896 Process raid pid: 132231 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 132231 /var/tmp/spdk-raid.sock 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 132231 ']' 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:58.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
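The trace above launches bdev_svc with -r /var/tmp/spdk-raid.sock -L bdev_raid and waits for the RPC socket before the superblock RAID1 test proceeds. The following is a minimal standalone sketch of that flow, not the test harness itself: it reuses only rpc.py commands that appear verbatim in this log and assumes a bdev_svc instance is already listening on the same socket.

```bash
#!/usr/bin/env bash
# Minimal sketch of the superblock RAID1 create/delete flow traced in this log.
# Assumes bdev_svc is already running and listening on /var/tmp/spdk-raid.sock
# (e.g. started as bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid).
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used throughout this log
SOCK=/var/tmp/spdk-raid.sock

# Three 32 MiB malloc base bdevs with 512-byte blocks (num_blocks 65536 in the dumps)
for i in 1 2 3; do
    "$RPC" -s "$SOCK" bdev_malloc_create 32 512 -b "BaseBdev$i"
done

# Superblock (-s) RAID1 volume over the three base bdevs
"$RPC" -s "$SOCK" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# ... state and property checks as performed by verify_raid_bdev_state below ...

# Tear the raid volume down when done
"$RPC" -s "$SOCK" bdev_raid_delete Existed_Raid
```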
00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.896 11:32:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.896 [2024-07-13 11:32:33.483919] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:20:58.896 [2024-07-13 11:32:33.484153] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.160 [2024-07-13 11:32:33.651483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.161 [2024-07-13 11:32:33.836624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.422 [2024-07-13 11:32:34.024499] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:59.988 [2024-07-13 11:32:34.606380] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:59.988 [2024-07-13 11:32:34.606472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:59.988 [2024-07-13 11:32:34.606487] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:59.988 [2024-07-13 11:32:34.606515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:59.988 [2024-07-13 11:32:34.606523] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:59.988 [2024-07-13 11:32:34.606539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.988 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.989 11:32:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.989 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.989 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.989 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.248 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.248 "name": "Existed_Raid", 00:21:00.248 "uuid": "2b41ae3e-1502-4a01-86d8-486f59924d24", 00:21:00.248 "strip_size_kb": 0, 00:21:00.248 "state": "configuring", 00:21:00.248 "raid_level": "raid1", 00:21:00.248 "superblock": true, 00:21:00.248 "num_base_bdevs": 3, 00:21:00.248 "num_base_bdevs_discovered": 0, 00:21:00.248 "num_base_bdevs_operational": 3, 00:21:00.248 "base_bdevs_list": [ 00:21:00.248 { 00:21:00.248 "name": "BaseBdev1", 00:21:00.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.248 "is_configured": false, 00:21:00.248 "data_offset": 0, 00:21:00.248 "data_size": 0 00:21:00.248 }, 00:21:00.248 { 00:21:00.248 "name": "BaseBdev2", 00:21:00.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.248 "is_configured": false, 00:21:00.248 "data_offset": 0, 00:21:00.248 "data_size": 0 00:21:00.248 }, 00:21:00.248 { 00:21:00.248 "name": "BaseBdev3", 00:21:00.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.248 "is_configured": false, 00:21:00.248 "data_offset": 0, 00:21:00.248 "data_size": 0 00:21:00.248 } 00:21:00.248 ] 00:21:00.248 }' 00:21:00.248 11:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.248 11:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.815 11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:01.073 [2024-07-13 11:32:35.666401] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:01.073 [2024-07-13 11:32:35.666437] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:01.073 11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:01.331 [2024-07-13 11:32:35.922465] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:01.331 [2024-07-13 11:32:35.922518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:01.331 [2024-07-13 11:32:35.922531] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:01.331 [2024-07-13 11:32:35.922547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:01.331 [2024-07-13 11:32:35.922554] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:01.331 [2024-07-13 11:32:35.922573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:01.331 11:32:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev1 00:21:01.590 [2024-07-13 11:32:36.208011] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:01.590 BaseBdev1 00:21:01.590 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:01.590 11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:01.590 11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:01.590 11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:01.590 11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:01.590 11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:01.590 11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:01.848 11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:01.848 [ 00:21:01.848 { 00:21:01.848 "name": "BaseBdev1", 00:21:01.848 "aliases": [ 00:21:01.848 "05ae9e60-7986-4834-b315-7fbe373d7371" 00:21:01.848 ], 00:21:01.848 "product_name": "Malloc disk", 00:21:01.848 "block_size": 512, 00:21:01.848 "num_blocks": 65536, 00:21:01.848 "uuid": "05ae9e60-7986-4834-b315-7fbe373d7371", 00:21:01.848 "assigned_rate_limits": { 00:21:01.848 "rw_ios_per_sec": 0, 00:21:01.848 "rw_mbytes_per_sec": 0, 00:21:01.848 "r_mbytes_per_sec": 0, 00:21:01.848 "w_mbytes_per_sec": 0 00:21:01.848 }, 00:21:01.848 "claimed": true, 00:21:01.848 "claim_type": "exclusive_write", 00:21:01.848 "zoned": false, 00:21:01.848 "supported_io_types": { 00:21:01.848 "read": true, 00:21:01.848 "write": true, 00:21:01.848 "unmap": true, 00:21:01.848 "flush": true, 00:21:01.848 "reset": true, 00:21:01.848 "nvme_admin": false, 00:21:01.848 "nvme_io": false, 00:21:01.848 "nvme_io_md": false, 00:21:01.848 "write_zeroes": true, 00:21:01.848 "zcopy": true, 00:21:01.848 "get_zone_info": false, 00:21:01.848 "zone_management": false, 00:21:01.848 "zone_append": false, 00:21:01.848 "compare": false, 00:21:01.848 "compare_and_write": false, 00:21:01.848 "abort": true, 00:21:01.848 "seek_hole": false, 00:21:01.848 "seek_data": false, 00:21:01.848 "copy": true, 00:21:01.848 "nvme_iov_md": false 00:21:01.848 }, 00:21:01.848 "memory_domains": [ 00:21:01.848 { 00:21:01.848 "dma_device_id": "system", 00:21:01.848 "dma_device_type": 1 00:21:01.848 }, 00:21:01.848 { 00:21:01.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:01.848 "dma_device_type": 2 00:21:01.848 } 00:21:01.848 ], 00:21:01.848 "driver_specific": {} 00:21:01.848 } 00:21:01.848 ] 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
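The surrounding trace runs verify_raid_bdev_state: it pulls bdev_raid_get_bdevs all, selects the Existed_Raid entry with jq, and compares fields such as state and num_base_bdevs_discovered against the expected values. A small hedged sketch of that check follows; it uses only calls visible in this log, and the expected values in the comment are taken from the JSON dumps printed here.

```bash
# Sketch of the state check that verify_raid_bdev_state performs in this trace.
# Field names (.state, .num_base_bdevs_discovered) come from the JSON dumps above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

info=$("$RPC" -s "$SOCK" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')

state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")

# With no base bdevs claimed the log reports "configuring" with 0 discovered;
# once BaseBdev1-3 are claimed it reports "online" with 3 discovered.
echo "Existed_Raid: state=$state, base bdevs discovered=$discovered"
```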
00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.107 "name": "Existed_Raid", 00:21:02.107 "uuid": "78651dd8-d525-4028-a5fd-e88c43063516", 00:21:02.107 "strip_size_kb": 0, 00:21:02.107 "state": "configuring", 00:21:02.107 "raid_level": "raid1", 00:21:02.107 "superblock": true, 00:21:02.107 "num_base_bdevs": 3, 00:21:02.107 "num_base_bdevs_discovered": 1, 00:21:02.107 "num_base_bdevs_operational": 3, 00:21:02.107 "base_bdevs_list": [ 00:21:02.107 { 00:21:02.107 "name": "BaseBdev1", 00:21:02.107 "uuid": "05ae9e60-7986-4834-b315-7fbe373d7371", 00:21:02.107 "is_configured": true, 00:21:02.107 "data_offset": 2048, 00:21:02.107 "data_size": 63488 00:21:02.107 }, 00:21:02.107 { 00:21:02.107 "name": "BaseBdev2", 00:21:02.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.107 "is_configured": false, 00:21:02.107 "data_offset": 0, 00:21:02.107 "data_size": 0 00:21:02.107 }, 00:21:02.107 { 00:21:02.107 "name": "BaseBdev3", 00:21:02.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.107 "is_configured": false, 00:21:02.107 "data_offset": 0, 00:21:02.107 "data_size": 0 00:21:02.107 } 00:21:02.107 ] 00:21:02.107 }' 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.107 11:32:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.674 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:02.933 [2024-07-13 11:32:37.636311] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:02.933 [2024-07-13 11:32:37.636348] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:21:02.933 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:03.192 [2024-07-13 11:32:37.820372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:03.192 [2024-07-13 11:32:37.822169] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:03.192 [2024-07-13 11:32:37.822245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:03.192 [2024-07-13 11:32:37.822258] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:03.192 [2024-07-13 11:32:37.822302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.192 11:32:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.451 11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.451 "name": "Existed_Raid", 00:21:03.451 "uuid": "846db8c5-4738-4217-b801-ca2d445c0985", 00:21:03.451 "strip_size_kb": 0, 00:21:03.451 "state": "configuring", 00:21:03.451 "raid_level": "raid1", 00:21:03.451 "superblock": true, 00:21:03.451 "num_base_bdevs": 3, 00:21:03.451 "num_base_bdevs_discovered": 1, 00:21:03.451 "num_base_bdevs_operational": 3, 00:21:03.451 "base_bdevs_list": [ 00:21:03.451 { 00:21:03.451 "name": "BaseBdev1", 00:21:03.451 "uuid": "05ae9e60-7986-4834-b315-7fbe373d7371", 00:21:03.451 "is_configured": true, 00:21:03.451 "data_offset": 2048, 00:21:03.451 "data_size": 63488 00:21:03.451 }, 00:21:03.451 { 00:21:03.451 "name": "BaseBdev2", 00:21:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.451 "is_configured": false, 00:21:03.451 "data_offset": 0, 00:21:03.451 "data_size": 0 00:21:03.451 }, 00:21:03.451 { 00:21:03.451 "name": "BaseBdev3", 00:21:03.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.451 "is_configured": false, 00:21:03.451 "data_offset": 0, 00:21:03.451 "data_size": 0 00:21:03.451 } 00:21:03.451 ] 00:21:03.451 }' 00:21:03.451 11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.451 11:32:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.019 11:32:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b 
BaseBdev2 00:21:04.277 [2024-07-13 11:32:38.993696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:04.277 BaseBdev2 00:21:04.277 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:04.277 11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:04.277 11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:04.277 11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:04.277 11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:04.277 11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:04.277 11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.535 11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:04.794 [ 00:21:04.794 { 00:21:04.794 "name": "BaseBdev2", 00:21:04.794 "aliases": [ 00:21:04.794 "af2415e1-4b8c-43e8-8295-384db9aa3afe" 00:21:04.794 ], 00:21:04.794 "product_name": "Malloc disk", 00:21:04.794 "block_size": 512, 00:21:04.794 "num_blocks": 65536, 00:21:04.794 "uuid": "af2415e1-4b8c-43e8-8295-384db9aa3afe", 00:21:04.794 "assigned_rate_limits": { 00:21:04.794 "rw_ios_per_sec": 0, 00:21:04.794 "rw_mbytes_per_sec": 0, 00:21:04.794 "r_mbytes_per_sec": 0, 00:21:04.794 "w_mbytes_per_sec": 0 00:21:04.794 }, 00:21:04.794 "claimed": true, 00:21:04.794 "claim_type": "exclusive_write", 00:21:04.794 "zoned": false, 00:21:04.794 "supported_io_types": { 00:21:04.794 "read": true, 00:21:04.794 "write": true, 00:21:04.794 "unmap": true, 00:21:04.794 "flush": true, 00:21:04.794 "reset": true, 00:21:04.794 "nvme_admin": false, 00:21:04.794 "nvme_io": false, 00:21:04.794 "nvme_io_md": false, 00:21:04.794 "write_zeroes": true, 00:21:04.794 "zcopy": true, 00:21:04.794 "get_zone_info": false, 00:21:04.794 "zone_management": false, 00:21:04.794 "zone_append": false, 00:21:04.794 "compare": false, 00:21:04.794 "compare_and_write": false, 00:21:04.794 "abort": true, 00:21:04.794 "seek_hole": false, 00:21:04.794 "seek_data": false, 00:21:04.794 "copy": true, 00:21:04.794 "nvme_iov_md": false 00:21:04.794 }, 00:21:04.794 "memory_domains": [ 00:21:04.794 { 00:21:04.794 "dma_device_id": "system", 00:21:04.794 "dma_device_type": 1 00:21:04.794 }, 00:21:04.794 { 00:21:04.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.794 "dma_device_type": 2 00:21:04.794 } 00:21:04.794 ], 00:21:04.794 "driver_specific": {} 00:21:04.794 } 00:21:04.794 ] 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.794 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.795 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.053 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.053 "name": "Existed_Raid", 00:21:05.053 "uuid": "846db8c5-4738-4217-b801-ca2d445c0985", 00:21:05.053 "strip_size_kb": 0, 00:21:05.053 "state": "configuring", 00:21:05.053 "raid_level": "raid1", 00:21:05.053 "superblock": true, 00:21:05.053 "num_base_bdevs": 3, 00:21:05.053 "num_base_bdevs_discovered": 2, 00:21:05.053 "num_base_bdevs_operational": 3, 00:21:05.053 "base_bdevs_list": [ 00:21:05.053 { 00:21:05.053 "name": "BaseBdev1", 00:21:05.053 "uuid": "05ae9e60-7986-4834-b315-7fbe373d7371", 00:21:05.053 "is_configured": true, 00:21:05.053 "data_offset": 2048, 00:21:05.053 "data_size": 63488 00:21:05.053 }, 00:21:05.053 { 00:21:05.053 "name": "BaseBdev2", 00:21:05.053 "uuid": "af2415e1-4b8c-43e8-8295-384db9aa3afe", 00:21:05.053 "is_configured": true, 00:21:05.053 "data_offset": 2048, 00:21:05.053 "data_size": 63488 00:21:05.053 }, 00:21:05.053 { 00:21:05.053 "name": "BaseBdev3", 00:21:05.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.053 "is_configured": false, 00:21:05.053 "data_offset": 0, 00:21:05.053 "data_size": 0 00:21:05.053 } 00:21:05.053 ] 00:21:05.053 }' 00:21:05.054 11:32:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.054 11:32:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.621 11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:05.880 [2024-07-13 11:32:40.613429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:05.880 [2024-07-13 11:32:40.613678] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:21:05.880 [2024-07-13 11:32:40.613693] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:05.880 [2024-07-13 11:32:40.613817] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:05.880 BaseBdev3 00:21:05.880 [2024-07-13 11:32:40.614170] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:21:05.880 [2024-07-13 11:32:40.614191] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name Existed_Raid, raid_bdev 0x616000007580 00:21:05.880 [2024-07-13 11:32:40.614350] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.880 11:32:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:05.880 11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:05.880 11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:05.880 11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:05.880 11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:05.880 11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:05.880 11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:06.139 11:32:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:06.397 [ 00:21:06.397 { 00:21:06.397 "name": "BaseBdev3", 00:21:06.397 "aliases": [ 00:21:06.397 "2e72a996-54b4-4720-b3aa-d3b193cdc08a" 00:21:06.397 ], 00:21:06.397 "product_name": "Malloc disk", 00:21:06.397 "block_size": 512, 00:21:06.397 "num_blocks": 65536, 00:21:06.397 "uuid": "2e72a996-54b4-4720-b3aa-d3b193cdc08a", 00:21:06.397 "assigned_rate_limits": { 00:21:06.397 "rw_ios_per_sec": 0, 00:21:06.397 "rw_mbytes_per_sec": 0, 00:21:06.397 "r_mbytes_per_sec": 0, 00:21:06.397 "w_mbytes_per_sec": 0 00:21:06.397 }, 00:21:06.397 "claimed": true, 00:21:06.397 "claim_type": "exclusive_write", 00:21:06.397 "zoned": false, 00:21:06.397 "supported_io_types": { 00:21:06.397 "read": true, 00:21:06.398 "write": true, 00:21:06.398 "unmap": true, 00:21:06.398 "flush": true, 00:21:06.398 "reset": true, 00:21:06.398 "nvme_admin": false, 00:21:06.398 "nvme_io": false, 00:21:06.398 "nvme_io_md": false, 00:21:06.398 "write_zeroes": true, 00:21:06.398 "zcopy": true, 00:21:06.398 "get_zone_info": false, 00:21:06.398 "zone_management": false, 00:21:06.398 "zone_append": false, 00:21:06.398 "compare": false, 00:21:06.398 "compare_and_write": false, 00:21:06.398 "abort": true, 00:21:06.398 "seek_hole": false, 00:21:06.398 "seek_data": false, 00:21:06.398 "copy": true, 00:21:06.398 "nvme_iov_md": false 00:21:06.398 }, 00:21:06.398 "memory_domains": [ 00:21:06.398 { 00:21:06.398 "dma_device_id": "system", 00:21:06.398 "dma_device_type": 1 00:21:06.398 }, 00:21:06.398 { 00:21:06.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.398 "dma_device_type": 2 00:21:06.398 } 00:21:06.398 ], 00:21:06.398 "driver_specific": {} 00:21:06.398 } 00:21:06.398 ] 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.398 11:32:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.398 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.656 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.656 "name": "Existed_Raid", 00:21:06.656 "uuid": "846db8c5-4738-4217-b801-ca2d445c0985", 00:21:06.656 "strip_size_kb": 0, 00:21:06.656 "state": "online", 00:21:06.656 "raid_level": "raid1", 00:21:06.656 "superblock": true, 00:21:06.656 "num_base_bdevs": 3, 00:21:06.656 "num_base_bdevs_discovered": 3, 00:21:06.656 "num_base_bdevs_operational": 3, 00:21:06.656 "base_bdevs_list": [ 00:21:06.656 { 00:21:06.656 "name": "BaseBdev1", 00:21:06.656 "uuid": "05ae9e60-7986-4834-b315-7fbe373d7371", 00:21:06.656 "is_configured": true, 00:21:06.656 "data_offset": 2048, 00:21:06.656 "data_size": 63488 00:21:06.656 }, 00:21:06.656 { 00:21:06.656 "name": "BaseBdev2", 00:21:06.656 "uuid": "af2415e1-4b8c-43e8-8295-384db9aa3afe", 00:21:06.656 "is_configured": true, 00:21:06.656 "data_offset": 2048, 00:21:06.656 "data_size": 63488 00:21:06.656 }, 00:21:06.656 { 00:21:06.656 "name": "BaseBdev3", 00:21:06.656 "uuid": "2e72a996-54b4-4720-b3aa-d3b193cdc08a", 00:21:06.656 "is_configured": true, 00:21:06.656 "data_offset": 2048, 00:21:06.656 "data_size": 63488 00:21:06.656 } 00:21:06.656 ] 00:21:06.656 }' 00:21:06.656 11:32:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:06.656 11:32:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:07.592 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:07.592 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:07.592 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:07.592 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:07.592 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:07.592 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:07.592 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:07.592 11:32:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:07.592 [2024-07-13 11:32:42.185928] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:07.592 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:07.592 "name": "Existed_Raid", 00:21:07.592 "aliases": [ 00:21:07.592 "846db8c5-4738-4217-b801-ca2d445c0985" 00:21:07.592 ], 00:21:07.592 "product_name": "Raid Volume", 00:21:07.592 "block_size": 512, 00:21:07.592 "num_blocks": 63488, 00:21:07.592 "uuid": "846db8c5-4738-4217-b801-ca2d445c0985", 00:21:07.592 "assigned_rate_limits": { 00:21:07.592 "rw_ios_per_sec": 0, 00:21:07.592 "rw_mbytes_per_sec": 0, 00:21:07.592 "r_mbytes_per_sec": 0, 00:21:07.592 "w_mbytes_per_sec": 0 00:21:07.592 }, 00:21:07.592 "claimed": false, 00:21:07.592 "zoned": false, 00:21:07.592 "supported_io_types": { 00:21:07.592 "read": true, 00:21:07.593 "write": true, 00:21:07.593 "unmap": false, 00:21:07.593 "flush": false, 00:21:07.593 "reset": true, 00:21:07.593 "nvme_admin": false, 00:21:07.593 "nvme_io": false, 00:21:07.593 "nvme_io_md": false, 00:21:07.593 "write_zeroes": true, 00:21:07.593 "zcopy": false, 00:21:07.593 "get_zone_info": false, 00:21:07.593 "zone_management": false, 00:21:07.593 "zone_append": false, 00:21:07.593 "compare": false, 00:21:07.593 "compare_and_write": false, 00:21:07.593 "abort": false, 00:21:07.593 "seek_hole": false, 00:21:07.593 "seek_data": false, 00:21:07.593 "copy": false, 00:21:07.593 "nvme_iov_md": false 00:21:07.593 }, 00:21:07.593 "memory_domains": [ 00:21:07.593 { 00:21:07.593 "dma_device_id": "system", 00:21:07.593 "dma_device_type": 1 00:21:07.593 }, 00:21:07.593 { 00:21:07.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.593 "dma_device_type": 2 00:21:07.593 }, 00:21:07.593 { 00:21:07.593 "dma_device_id": "system", 00:21:07.593 "dma_device_type": 1 00:21:07.593 }, 00:21:07.593 { 00:21:07.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.593 "dma_device_type": 2 00:21:07.593 }, 00:21:07.593 { 00:21:07.593 "dma_device_id": "system", 00:21:07.593 "dma_device_type": 1 00:21:07.593 }, 00:21:07.593 { 00:21:07.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.593 "dma_device_type": 2 00:21:07.593 } 00:21:07.593 ], 00:21:07.593 "driver_specific": { 00:21:07.593 "raid": { 00:21:07.593 "uuid": "846db8c5-4738-4217-b801-ca2d445c0985", 00:21:07.593 "strip_size_kb": 0, 00:21:07.593 "state": "online", 00:21:07.593 "raid_level": "raid1", 00:21:07.593 "superblock": true, 00:21:07.593 "num_base_bdevs": 3, 00:21:07.593 "num_base_bdevs_discovered": 3, 00:21:07.593 "num_base_bdevs_operational": 3, 00:21:07.593 "base_bdevs_list": [ 00:21:07.593 { 00:21:07.593 "name": "BaseBdev1", 00:21:07.593 "uuid": "05ae9e60-7986-4834-b315-7fbe373d7371", 00:21:07.593 "is_configured": true, 00:21:07.593 "data_offset": 2048, 00:21:07.593 "data_size": 63488 00:21:07.593 }, 00:21:07.593 { 00:21:07.593 "name": "BaseBdev2", 00:21:07.593 "uuid": "af2415e1-4b8c-43e8-8295-384db9aa3afe", 00:21:07.593 "is_configured": true, 00:21:07.593 "data_offset": 2048, 00:21:07.593 "data_size": 63488 00:21:07.593 }, 00:21:07.593 { 00:21:07.593 "name": "BaseBdev3", 00:21:07.593 "uuid": "2e72a996-54b4-4720-b3aa-d3b193cdc08a", 00:21:07.593 "is_configured": true, 00:21:07.593 "data_offset": 2048, 00:21:07.593 "data_size": 63488 00:21:07.593 } 00:21:07.593 ] 00:21:07.593 } 00:21:07.593 } 00:21:07.593 }' 00:21:07.593 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:07.593 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:07.593 BaseBdev2 00:21:07.593 BaseBdev3' 00:21:07.593 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:07.593 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:07.593 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:07.850 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:07.850 "name": "BaseBdev1", 00:21:07.850 "aliases": [ 00:21:07.850 "05ae9e60-7986-4834-b315-7fbe373d7371" 00:21:07.850 ], 00:21:07.850 "product_name": "Malloc disk", 00:21:07.850 "block_size": 512, 00:21:07.850 "num_blocks": 65536, 00:21:07.850 "uuid": "05ae9e60-7986-4834-b315-7fbe373d7371", 00:21:07.850 "assigned_rate_limits": { 00:21:07.850 "rw_ios_per_sec": 0, 00:21:07.850 "rw_mbytes_per_sec": 0, 00:21:07.850 "r_mbytes_per_sec": 0, 00:21:07.850 "w_mbytes_per_sec": 0 00:21:07.850 }, 00:21:07.850 "claimed": true, 00:21:07.850 "claim_type": "exclusive_write", 00:21:07.850 "zoned": false, 00:21:07.850 "supported_io_types": { 00:21:07.850 "read": true, 00:21:07.850 "write": true, 00:21:07.850 "unmap": true, 00:21:07.850 "flush": true, 00:21:07.850 "reset": true, 00:21:07.850 "nvme_admin": false, 00:21:07.850 "nvme_io": false, 00:21:07.850 "nvme_io_md": false, 00:21:07.850 "write_zeroes": true, 00:21:07.850 "zcopy": true, 00:21:07.850 "get_zone_info": false, 00:21:07.850 "zone_management": false, 00:21:07.850 "zone_append": false, 00:21:07.850 "compare": false, 00:21:07.850 "compare_and_write": false, 00:21:07.850 "abort": true, 00:21:07.850 "seek_hole": false, 00:21:07.850 "seek_data": false, 00:21:07.850 "copy": true, 00:21:07.850 "nvme_iov_md": false 00:21:07.850 }, 00:21:07.850 "memory_domains": [ 00:21:07.850 { 00:21:07.850 "dma_device_id": "system", 00:21:07.850 "dma_device_type": 1 00:21:07.850 }, 00:21:07.850 { 00:21:07.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.850 "dma_device_type": 2 00:21:07.850 } 00:21:07.850 ], 00:21:07.850 "driver_specific": {} 00:21:07.850 }' 00:21:07.850 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:07.850 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:07.850 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:07.850 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:07.850 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.108 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:08.108 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.108 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.108 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:08.108 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:08.108 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:08.108 
11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:08.108 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:08.108 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:08.108 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:08.365 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:08.365 "name": "BaseBdev2", 00:21:08.365 "aliases": [ 00:21:08.365 "af2415e1-4b8c-43e8-8295-384db9aa3afe" 00:21:08.365 ], 00:21:08.365 "product_name": "Malloc disk", 00:21:08.366 "block_size": 512, 00:21:08.366 "num_blocks": 65536, 00:21:08.366 "uuid": "af2415e1-4b8c-43e8-8295-384db9aa3afe", 00:21:08.366 "assigned_rate_limits": { 00:21:08.366 "rw_ios_per_sec": 0, 00:21:08.366 "rw_mbytes_per_sec": 0, 00:21:08.366 "r_mbytes_per_sec": 0, 00:21:08.366 "w_mbytes_per_sec": 0 00:21:08.366 }, 00:21:08.366 "claimed": true, 00:21:08.366 "claim_type": "exclusive_write", 00:21:08.366 "zoned": false, 00:21:08.366 "supported_io_types": { 00:21:08.366 "read": true, 00:21:08.366 "write": true, 00:21:08.366 "unmap": true, 00:21:08.366 "flush": true, 00:21:08.366 "reset": true, 00:21:08.366 "nvme_admin": false, 00:21:08.366 "nvme_io": false, 00:21:08.366 "nvme_io_md": false, 00:21:08.366 "write_zeroes": true, 00:21:08.366 "zcopy": true, 00:21:08.366 "get_zone_info": false, 00:21:08.366 "zone_management": false, 00:21:08.366 "zone_append": false, 00:21:08.366 "compare": false, 00:21:08.366 "compare_and_write": false, 00:21:08.366 "abort": true, 00:21:08.366 "seek_hole": false, 00:21:08.366 "seek_data": false, 00:21:08.366 "copy": true, 00:21:08.366 "nvme_iov_md": false 00:21:08.366 }, 00:21:08.366 "memory_domains": [ 00:21:08.366 { 00:21:08.366 "dma_device_id": "system", 00:21:08.366 "dma_device_type": 1 00:21:08.366 }, 00:21:08.366 { 00:21:08.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.366 "dma_device_type": 2 00:21:08.366 } 00:21:08.366 ], 00:21:08.366 "driver_specific": {} 00:21:08.366 }' 00:21:08.366 11:32:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.366 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.366 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:08.366 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.624 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:08.624 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:08.624 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.624 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:08.624 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:08.624 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:08.624 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:08.624 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:08.624 11:32:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:08.882 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:08.882 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:08.882 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:08.882 "name": "BaseBdev3", 00:21:08.882 "aliases": [ 00:21:08.882 "2e72a996-54b4-4720-b3aa-d3b193cdc08a" 00:21:08.882 ], 00:21:08.882 "product_name": "Malloc disk", 00:21:08.882 "block_size": 512, 00:21:08.882 "num_blocks": 65536, 00:21:08.882 "uuid": "2e72a996-54b4-4720-b3aa-d3b193cdc08a", 00:21:08.882 "assigned_rate_limits": { 00:21:08.882 "rw_ios_per_sec": 0, 00:21:08.882 "rw_mbytes_per_sec": 0, 00:21:08.882 "r_mbytes_per_sec": 0, 00:21:08.882 "w_mbytes_per_sec": 0 00:21:08.882 }, 00:21:08.882 "claimed": true, 00:21:08.882 "claim_type": "exclusive_write", 00:21:08.882 "zoned": false, 00:21:08.882 "supported_io_types": { 00:21:08.882 "read": true, 00:21:08.882 "write": true, 00:21:08.882 "unmap": true, 00:21:08.882 "flush": true, 00:21:08.882 "reset": true, 00:21:08.882 "nvme_admin": false, 00:21:08.882 "nvme_io": false, 00:21:08.882 "nvme_io_md": false, 00:21:08.882 "write_zeroes": true, 00:21:08.882 "zcopy": true, 00:21:08.882 "get_zone_info": false, 00:21:08.882 "zone_management": false, 00:21:08.882 "zone_append": false, 00:21:08.882 "compare": false, 00:21:08.882 "compare_and_write": false, 00:21:08.882 "abort": true, 00:21:08.882 "seek_hole": false, 00:21:08.882 "seek_data": false, 00:21:08.882 "copy": true, 00:21:08.882 "nvme_iov_md": false 00:21:08.882 }, 00:21:08.882 "memory_domains": [ 00:21:08.882 { 00:21:08.882 "dma_device_id": "system", 00:21:08.882 "dma_device_type": 1 00:21:08.882 }, 00:21:08.882 { 00:21:08.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:08.882 "dma_device_type": 2 00:21:08.882 } 00:21:08.882 ], 00:21:08.882 "driver_specific": {} 00:21:08.882 }' 00:21:08.882 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:08.882 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:09.139 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:09.139 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.139 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:09.139 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:09.139 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.139 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:09.139 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:09.139 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.397 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:09.397 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:09.397 11:32:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:21:09.655 [2024-07-13 11:32:44.190095] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:09.655 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:09.655 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:21:09.655 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:09.655 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:21:09.655 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:21:09.655 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.656 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.914 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.914 "name": "Existed_Raid", 00:21:09.914 "uuid": "846db8c5-4738-4217-b801-ca2d445c0985", 00:21:09.914 "strip_size_kb": 0, 00:21:09.914 "state": "online", 00:21:09.914 "raid_level": "raid1", 00:21:09.914 "superblock": true, 00:21:09.914 "num_base_bdevs": 3, 00:21:09.914 "num_base_bdevs_discovered": 2, 00:21:09.914 "num_base_bdevs_operational": 2, 00:21:09.914 "base_bdevs_list": [ 00:21:09.914 { 00:21:09.914 "name": null, 00:21:09.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.914 "is_configured": false, 00:21:09.914 "data_offset": 2048, 00:21:09.914 "data_size": 63488 00:21:09.914 }, 00:21:09.914 { 00:21:09.914 "name": "BaseBdev2", 00:21:09.914 "uuid": "af2415e1-4b8c-43e8-8295-384db9aa3afe", 00:21:09.914 "is_configured": true, 00:21:09.914 "data_offset": 2048, 00:21:09.914 "data_size": 63488 00:21:09.914 }, 00:21:09.914 { 00:21:09.914 "name": "BaseBdev3", 00:21:09.914 "uuid": "2e72a996-54b4-4720-b3aa-d3b193cdc08a", 00:21:09.914 "is_configured": true, 00:21:09.914 "data_offset": 2048, 00:21:09.914 "data_size": 63488 00:21:09.914 } 00:21:09.914 ] 00:21:09.914 }' 00:21:09.914 11:32:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.914 11:32:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:10.482 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:10.482 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:10.482 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.482 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:10.740 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:10.740 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:10.740 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:10.998 [2024-07-13 11:32:45.682551] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:11.257 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:11.257 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:11.257 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.257 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:11.257 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:11.257 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:11.257 11:32:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:11.515 [2024-07-13 11:32:46.169712] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:11.515 [2024-07-13 11:32:46.169836] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:11.515 [2024-07-13 11:32:46.233480] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:11.515 [2024-07-13 11:32:46.233543] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:11.515 [2024-07-13 11:32:46.233553] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:21:11.515 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:11.515 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:11.515 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.515 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:11.773 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:11.773 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:11.773 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:21:11.773 
11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:11.773 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:11.773 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:12.032 BaseBdev2 00:21:12.032 11:32:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:12.032 11:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:12.032 11:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:12.032 11:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:12.032 11:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:12.032 11:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:12.032 11:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:12.291 11:32:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:12.549 [ 00:21:12.549 { 00:21:12.549 "name": "BaseBdev2", 00:21:12.549 "aliases": [ 00:21:12.549 "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f" 00:21:12.549 ], 00:21:12.549 "product_name": "Malloc disk", 00:21:12.549 "block_size": 512, 00:21:12.549 "num_blocks": 65536, 00:21:12.549 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:12.549 "assigned_rate_limits": { 00:21:12.549 "rw_ios_per_sec": 0, 00:21:12.549 "rw_mbytes_per_sec": 0, 00:21:12.549 "r_mbytes_per_sec": 0, 00:21:12.549 "w_mbytes_per_sec": 0 00:21:12.549 }, 00:21:12.549 "claimed": false, 00:21:12.549 "zoned": false, 00:21:12.549 "supported_io_types": { 00:21:12.549 "read": true, 00:21:12.549 "write": true, 00:21:12.549 "unmap": true, 00:21:12.549 "flush": true, 00:21:12.549 "reset": true, 00:21:12.549 "nvme_admin": false, 00:21:12.549 "nvme_io": false, 00:21:12.549 "nvme_io_md": false, 00:21:12.549 "write_zeroes": true, 00:21:12.549 "zcopy": true, 00:21:12.549 "get_zone_info": false, 00:21:12.549 "zone_management": false, 00:21:12.549 "zone_append": false, 00:21:12.549 "compare": false, 00:21:12.549 "compare_and_write": false, 00:21:12.549 "abort": true, 00:21:12.549 "seek_hole": false, 00:21:12.549 "seek_data": false, 00:21:12.549 "copy": true, 00:21:12.549 "nvme_iov_md": false 00:21:12.549 }, 00:21:12.549 "memory_domains": [ 00:21:12.549 { 00:21:12.549 "dma_device_id": "system", 00:21:12.549 "dma_device_type": 1 00:21:12.549 }, 00:21:12.549 { 00:21:12.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.549 "dma_device_type": 2 00:21:12.549 } 00:21:12.549 ], 00:21:12.549 "driver_specific": {} 00:21:12.549 } 00:21:12.549 ] 00:21:12.549 11:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:12.549 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:12.549 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:12.549 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:12.807 BaseBdev3 00:21:12.807 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:12.807 11:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:12.807 11:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:12.807 11:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:12.807 11:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:12.807 11:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:12.807 11:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:13.064 11:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:13.064 [ 00:21:13.064 { 00:21:13.064 "name": "BaseBdev3", 00:21:13.064 "aliases": [ 00:21:13.064 "ca94aa8c-22b3-4ded-b2fc-8481760ca406" 00:21:13.064 ], 00:21:13.064 "product_name": "Malloc disk", 00:21:13.064 "block_size": 512, 00:21:13.064 "num_blocks": 65536, 00:21:13.064 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:13.064 "assigned_rate_limits": { 00:21:13.064 "rw_ios_per_sec": 0, 00:21:13.064 "rw_mbytes_per_sec": 0, 00:21:13.064 "r_mbytes_per_sec": 0, 00:21:13.064 "w_mbytes_per_sec": 0 00:21:13.064 }, 00:21:13.064 "claimed": false, 00:21:13.064 "zoned": false, 00:21:13.064 "supported_io_types": { 00:21:13.064 "read": true, 00:21:13.064 "write": true, 00:21:13.064 "unmap": true, 00:21:13.064 "flush": true, 00:21:13.064 "reset": true, 00:21:13.064 "nvme_admin": false, 00:21:13.064 "nvme_io": false, 00:21:13.064 "nvme_io_md": false, 00:21:13.064 "write_zeroes": true, 00:21:13.064 "zcopy": true, 00:21:13.064 "get_zone_info": false, 00:21:13.064 "zone_management": false, 00:21:13.064 "zone_append": false, 00:21:13.064 "compare": false, 00:21:13.064 "compare_and_write": false, 00:21:13.064 "abort": true, 00:21:13.064 "seek_hole": false, 00:21:13.064 "seek_data": false, 00:21:13.064 "copy": true, 00:21:13.064 "nvme_iov_md": false 00:21:13.064 }, 00:21:13.064 "memory_domains": [ 00:21:13.064 { 00:21:13.064 "dma_device_id": "system", 00:21:13.064 "dma_device_type": 1 00:21:13.064 }, 00:21:13.064 { 00:21:13.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.064 "dma_device_type": 2 00:21:13.064 } 00:21:13.064 ], 00:21:13.064 "driver_specific": {} 00:21:13.064 } 00:21:13.064 ] 00:21:13.064 11:32:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:13.064 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:13.064 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:13.064 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:13.322 [2024-07-13 11:32:47.955379] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:13.322 [2024-07-13 
11:32:47.955437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:13.322 [2024-07-13 11:32:47.955462] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:13.322 [2024-07-13 11:32:47.957353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.322 11:32:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:13.579 11:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.579 "name": "Existed_Raid", 00:21:13.579 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:13.579 "strip_size_kb": 0, 00:21:13.579 "state": "configuring", 00:21:13.579 "raid_level": "raid1", 00:21:13.579 "superblock": true, 00:21:13.579 "num_base_bdevs": 3, 00:21:13.579 "num_base_bdevs_discovered": 2, 00:21:13.579 "num_base_bdevs_operational": 3, 00:21:13.579 "base_bdevs_list": [ 00:21:13.579 { 00:21:13.579 "name": "BaseBdev1", 00:21:13.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.579 "is_configured": false, 00:21:13.579 "data_offset": 0, 00:21:13.579 "data_size": 0 00:21:13.579 }, 00:21:13.579 { 00:21:13.579 "name": "BaseBdev2", 00:21:13.579 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:13.579 "is_configured": true, 00:21:13.580 "data_offset": 2048, 00:21:13.580 "data_size": 63488 00:21:13.580 }, 00:21:13.580 { 00:21:13.580 "name": "BaseBdev3", 00:21:13.580 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:13.580 "is_configured": true, 00:21:13.580 "data_offset": 2048, 00:21:13.580 "data_size": 63488 00:21:13.580 } 00:21:13.580 ] 00:21:13.580 }' 00:21:13.580 11:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.580 11:32:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:14.147 11:32:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:14.405 [2024-07-13 11:32:49.055559] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:14.405 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:14.664 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.664 "name": "Existed_Raid", 00:21:14.664 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:14.664 "strip_size_kb": 0, 00:21:14.664 "state": "configuring", 00:21:14.664 "raid_level": "raid1", 00:21:14.664 "superblock": true, 00:21:14.664 "num_base_bdevs": 3, 00:21:14.664 "num_base_bdevs_discovered": 1, 00:21:14.664 "num_base_bdevs_operational": 3, 00:21:14.664 "base_bdevs_list": [ 00:21:14.664 { 00:21:14.664 "name": "BaseBdev1", 00:21:14.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.664 "is_configured": false, 00:21:14.664 "data_offset": 0, 00:21:14.664 "data_size": 0 00:21:14.664 }, 00:21:14.664 { 00:21:14.664 "name": null, 00:21:14.664 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:14.664 "is_configured": false, 00:21:14.664 "data_offset": 2048, 00:21:14.664 "data_size": 63488 00:21:14.664 }, 00:21:14.664 { 00:21:14.664 "name": "BaseBdev3", 00:21:14.664 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:14.664 "is_configured": true, 00:21:14.664 "data_offset": 2048, 00:21:14.664 "data_size": 63488 00:21:14.664 } 00:21:14.664 ] 00:21:14.664 }' 00:21:14.664 11:32:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.664 11:32:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.600 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.600 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:15.600 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:15.600 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:15.858 [2024-07-13 11:32:50.525308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:15.858 BaseBdev1 00:21:15.858 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:15.858 11:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:15.858 11:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:15.858 11:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:15.858 11:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:15.858 11:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:15.858 11:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:16.119 11:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:16.402 [ 00:21:16.402 { 00:21:16.402 "name": "BaseBdev1", 00:21:16.402 "aliases": [ 00:21:16.402 "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9" 00:21:16.402 ], 00:21:16.402 "product_name": "Malloc disk", 00:21:16.402 "block_size": 512, 00:21:16.402 "num_blocks": 65536, 00:21:16.402 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:16.402 "assigned_rate_limits": { 00:21:16.402 "rw_ios_per_sec": 0, 00:21:16.402 "rw_mbytes_per_sec": 0, 00:21:16.402 "r_mbytes_per_sec": 0, 00:21:16.402 "w_mbytes_per_sec": 0 00:21:16.402 }, 00:21:16.402 "claimed": true, 00:21:16.402 "claim_type": "exclusive_write", 00:21:16.402 "zoned": false, 00:21:16.402 "supported_io_types": { 00:21:16.402 "read": true, 00:21:16.402 "write": true, 00:21:16.402 "unmap": true, 00:21:16.402 "flush": true, 00:21:16.402 "reset": true, 00:21:16.402 "nvme_admin": false, 00:21:16.402 "nvme_io": false, 00:21:16.402 "nvme_io_md": false, 00:21:16.402 "write_zeroes": true, 00:21:16.402 "zcopy": true, 00:21:16.402 "get_zone_info": false, 00:21:16.402 "zone_management": false, 00:21:16.402 "zone_append": false, 00:21:16.402 "compare": false, 00:21:16.402 "compare_and_write": false, 00:21:16.402 "abort": true, 00:21:16.402 "seek_hole": false, 00:21:16.402 "seek_data": false, 00:21:16.402 "copy": true, 00:21:16.402 "nvme_iov_md": false 00:21:16.402 }, 00:21:16.402 "memory_domains": [ 00:21:16.402 { 00:21:16.402 "dma_device_id": "system", 00:21:16.402 "dma_device_type": 1 00:21:16.402 }, 00:21:16.402 { 00:21:16.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:16.402 "dma_device_type": 2 00:21:16.402 } 00:21:16.402 ], 00:21:16.402 "driver_specific": {} 00:21:16.402 } 00:21:16.402 ] 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.402 11:32:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:16.402 11:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.402 "name": "Existed_Raid", 00:21:16.402 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:16.402 "strip_size_kb": 0, 00:21:16.402 "state": "configuring", 00:21:16.402 "raid_level": "raid1", 00:21:16.402 "superblock": true, 00:21:16.402 "num_base_bdevs": 3, 00:21:16.402 "num_base_bdevs_discovered": 2, 00:21:16.402 "num_base_bdevs_operational": 3, 00:21:16.402 "base_bdevs_list": [ 00:21:16.402 { 00:21:16.402 "name": "BaseBdev1", 00:21:16.402 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:16.402 "is_configured": true, 00:21:16.402 "data_offset": 2048, 00:21:16.402 "data_size": 63488 00:21:16.402 }, 00:21:16.402 { 00:21:16.402 "name": null, 00:21:16.402 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:16.402 "is_configured": false, 00:21:16.402 "data_offset": 2048, 00:21:16.402 "data_size": 63488 00:21:16.402 }, 00:21:16.402 { 00:21:16.402 "name": "BaseBdev3", 00:21:16.402 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:16.402 "is_configured": true, 00:21:16.402 "data_offset": 2048, 00:21:16.402 "data_size": 63488 00:21:16.402 } 00:21:16.402 ] 00:21:16.402 }' 00:21:16.402 11:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.402 11:32:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:17.347 11:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.347 11:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:17.347 11:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:17.347 11:32:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:17.606 [2024-07-13 11:32:52.185635] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.606 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:17.864 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.864 "name": "Existed_Raid", 00:21:17.864 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:17.864 "strip_size_kb": 0, 00:21:17.864 "state": "configuring", 00:21:17.864 "raid_level": "raid1", 00:21:17.864 "superblock": true, 00:21:17.864 "num_base_bdevs": 3, 00:21:17.864 "num_base_bdevs_discovered": 1, 00:21:17.864 "num_base_bdevs_operational": 3, 00:21:17.864 "base_bdevs_list": [ 00:21:17.864 { 00:21:17.864 "name": "BaseBdev1", 00:21:17.864 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:17.864 "is_configured": true, 00:21:17.864 "data_offset": 2048, 00:21:17.864 "data_size": 63488 00:21:17.864 }, 00:21:17.864 { 00:21:17.864 "name": null, 00:21:17.864 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:17.864 "is_configured": false, 00:21:17.864 "data_offset": 2048, 00:21:17.864 "data_size": 63488 00:21:17.864 }, 00:21:17.864 { 00:21:17.864 "name": null, 00:21:17.864 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:17.864 "is_configured": false, 00:21:17.864 "data_offset": 2048, 00:21:17.864 "data_size": 63488 00:21:17.864 } 00:21:17.864 ] 00:21:17.864 }' 00:21:17.864 11:32:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.864 11:32:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.431 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:18.431 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.689 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:18.689 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:18.947 [2024-07-13 11:32:53.501878] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=Existed_Raid 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.947 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:19.204 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.204 "name": "Existed_Raid", 00:21:19.204 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:19.204 "strip_size_kb": 0, 00:21:19.204 "state": "configuring", 00:21:19.204 "raid_level": "raid1", 00:21:19.204 "superblock": true, 00:21:19.204 "num_base_bdevs": 3, 00:21:19.204 "num_base_bdevs_discovered": 2, 00:21:19.204 "num_base_bdevs_operational": 3, 00:21:19.204 "base_bdevs_list": [ 00:21:19.204 { 00:21:19.204 "name": "BaseBdev1", 00:21:19.204 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:19.204 "is_configured": true, 00:21:19.204 "data_offset": 2048, 00:21:19.204 "data_size": 63488 00:21:19.204 }, 00:21:19.204 { 00:21:19.204 "name": null, 00:21:19.204 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:19.204 "is_configured": false, 00:21:19.204 "data_offset": 2048, 00:21:19.204 "data_size": 63488 00:21:19.204 }, 00:21:19.204 { 00:21:19.204 "name": "BaseBdev3", 00:21:19.204 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:19.204 "is_configured": true, 00:21:19.204 "data_offset": 2048, 00:21:19.204 "data_size": 63488 00:21:19.204 } 00:21:19.204 ] 00:21:19.204 }' 00:21:19.204 11:32:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.204 11:32:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.768 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.768 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:20.026 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:20.026 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:20.284 [2024-07-13 11:32:54.842134] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid 
configuring raid1 0 3 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.284 11:32:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.542 11:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.542 "name": "Existed_Raid", 00:21:20.542 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:20.542 "strip_size_kb": 0, 00:21:20.542 "state": "configuring", 00:21:20.542 "raid_level": "raid1", 00:21:20.542 "superblock": true, 00:21:20.542 "num_base_bdevs": 3, 00:21:20.542 "num_base_bdevs_discovered": 1, 00:21:20.542 "num_base_bdevs_operational": 3, 00:21:20.542 "base_bdevs_list": [ 00:21:20.542 { 00:21:20.542 "name": null, 00:21:20.542 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:20.542 "is_configured": false, 00:21:20.542 "data_offset": 2048, 00:21:20.542 "data_size": 63488 00:21:20.542 }, 00:21:20.542 { 00:21:20.542 "name": null, 00:21:20.542 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:20.542 "is_configured": false, 00:21:20.542 "data_offset": 2048, 00:21:20.542 "data_size": 63488 00:21:20.542 }, 00:21:20.542 { 00:21:20.542 "name": "BaseBdev3", 00:21:20.542 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:20.542 "is_configured": true, 00:21:20.542 "data_offset": 2048, 00:21:20.542 "data_size": 63488 00:21:20.542 } 00:21:20.542 ] 00:21:20.542 }' 00:21:20.542 11:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.542 11:32:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.107 11:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.107 11:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:21.365 11:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:21.365 11:32:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:21.623 [2024-07-13 11:32:56.217029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.623 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.881 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.881 "name": "Existed_Raid", 00:21:21.881 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:21.881 "strip_size_kb": 0, 00:21:21.881 "state": "configuring", 00:21:21.881 "raid_level": "raid1", 00:21:21.881 "superblock": true, 00:21:21.881 "num_base_bdevs": 3, 00:21:21.881 "num_base_bdevs_discovered": 2, 00:21:21.881 "num_base_bdevs_operational": 3, 00:21:21.881 "base_bdevs_list": [ 00:21:21.881 { 00:21:21.881 "name": null, 00:21:21.881 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:21.881 "is_configured": false, 00:21:21.881 "data_offset": 2048, 00:21:21.881 "data_size": 63488 00:21:21.881 }, 00:21:21.881 { 00:21:21.881 "name": "BaseBdev2", 00:21:21.881 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:21.881 "is_configured": true, 00:21:21.881 "data_offset": 2048, 00:21:21.881 "data_size": 63488 00:21:21.881 }, 00:21:21.881 { 00:21:21.881 "name": "BaseBdev3", 00:21:21.881 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:21.881 "is_configured": true, 00:21:21.881 "data_offset": 2048, 00:21:21.881 "data_size": 63488 00:21:21.881 } 00:21:21.881 ] 00:21:21.881 }' 00:21:21.881 11:32:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.881 11:32:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:22.448 11:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.448 11:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:22.706 11:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:22.706 11:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
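The checks above exercise SPDK's raid bdev state machine entirely through scripts/rpc.py on the dedicated /var/tmp/spdk-raid.sock socket: malloc base bdevs are created and deleted, members are removed from and re-added to Existed_Raid, and each change is followed by a verify_raid_bdev_state pass that re-reads the JSON dump and filters it with jq. The following is a minimal sketch of that flow, assuming an SPDK target is already listening on the socket; the RPC and SOCK shorthands are introduced here for readability only and are not part of the test scripts.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-raid.sock

# three 32 MiB malloc disks with 512-byte blocks (65536 blocks, matching the dumps above)
$RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev1
$RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev2
$RPC -s $SOCK bdev_malloc_create 32 512 -b BaseBdev3

# assemble them into a raid1 bdev with an on-disk superblock (-s)
$RPC -s $SOCK bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# inspect the array and a single member
$RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
$RPC -s $SOCK bdev_get_bdevs -b BaseBdev3 | jq '.[]'

# degrade the array by dropping a member, then configure it again
$RPC -s $SOCK bdev_raid_remove_base_bdev BaseBdev2
$RPC -s $SOCK bdev_raid_add_base_bdev Existed_Raid BaseBdev2

# tear down
$RPC -s $SOCK bdev_raid_delete Existed_Raid
$RPC -s $SOCK bdev_malloc_delete BaseBdev1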
00:21:22.706 11:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:22.965 11:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9 00:21:23.224 [2024-07-13 11:32:57.769857] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:23.224 [2024-07-13 11:32:57.770068] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:23.224 [2024-07-13 11:32:57.770082] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:23.224 [2024-07-13 11:32:57.770239] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:23.224 NewBaseBdev 00:21:23.224 [2024-07-13 11:32:57.770567] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:23.224 [2024-07-13 11:32:57.770582] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:21:23.224 [2024-07-13 11:32:57.770712] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.224 11:32:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:23.224 11:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:21:23.224 11:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:23.224 11:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:21:23.224 11:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:23.224 11:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:23.224 11:32:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:23.482 11:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:23.740 [ 00:21:23.740 { 00:21:23.740 "name": "NewBaseBdev", 00:21:23.740 "aliases": [ 00:21:23.740 "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9" 00:21:23.740 ], 00:21:23.740 "product_name": "Malloc disk", 00:21:23.740 "block_size": 512, 00:21:23.740 "num_blocks": 65536, 00:21:23.740 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:23.740 "assigned_rate_limits": { 00:21:23.740 "rw_ios_per_sec": 0, 00:21:23.740 "rw_mbytes_per_sec": 0, 00:21:23.740 "r_mbytes_per_sec": 0, 00:21:23.740 "w_mbytes_per_sec": 0 00:21:23.740 }, 00:21:23.740 "claimed": true, 00:21:23.740 "claim_type": "exclusive_write", 00:21:23.740 "zoned": false, 00:21:23.740 "supported_io_types": { 00:21:23.740 "read": true, 00:21:23.740 "write": true, 00:21:23.740 "unmap": true, 00:21:23.740 "flush": true, 00:21:23.740 "reset": true, 00:21:23.740 "nvme_admin": false, 00:21:23.740 "nvme_io": false, 00:21:23.740 "nvme_io_md": false, 00:21:23.740 "write_zeroes": true, 00:21:23.740 "zcopy": true, 00:21:23.740 "get_zone_info": false, 00:21:23.740 "zone_management": false, 00:21:23.740 "zone_append": false, 00:21:23.740 "compare": false, 00:21:23.740 "compare_and_write": false, 00:21:23.740 
"abort": true, 00:21:23.740 "seek_hole": false, 00:21:23.740 "seek_data": false, 00:21:23.740 "copy": true, 00:21:23.740 "nvme_iov_md": false 00:21:23.740 }, 00:21:23.740 "memory_domains": [ 00:21:23.740 { 00:21:23.740 "dma_device_id": "system", 00:21:23.740 "dma_device_type": 1 00:21:23.740 }, 00:21:23.740 { 00:21:23.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.740 "dma_device_type": 2 00:21:23.740 } 00:21:23.740 ], 00:21:23.740 "driver_specific": {} 00:21:23.740 } 00:21:23.740 ] 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.740 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.998 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.998 "name": "Existed_Raid", 00:21:23.998 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:23.998 "strip_size_kb": 0, 00:21:23.998 "state": "online", 00:21:23.998 "raid_level": "raid1", 00:21:23.998 "superblock": true, 00:21:23.998 "num_base_bdevs": 3, 00:21:23.998 "num_base_bdevs_discovered": 3, 00:21:23.998 "num_base_bdevs_operational": 3, 00:21:23.998 "base_bdevs_list": [ 00:21:23.998 { 00:21:23.998 "name": "NewBaseBdev", 00:21:23.998 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:23.998 "is_configured": true, 00:21:23.998 "data_offset": 2048, 00:21:23.998 "data_size": 63488 00:21:23.998 }, 00:21:23.998 { 00:21:23.998 "name": "BaseBdev2", 00:21:23.998 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:23.998 "is_configured": true, 00:21:23.998 "data_offset": 2048, 00:21:23.998 "data_size": 63488 00:21:23.998 }, 00:21:23.998 { 00:21:23.998 "name": "BaseBdev3", 00:21:23.998 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:23.998 "is_configured": true, 00:21:23.998 "data_offset": 2048, 00:21:23.998 "data_size": 63488 00:21:23.998 } 00:21:23.998 ] 00:21:23.998 }' 00:21:23.998 11:32:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.998 11:32:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.565 11:32:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:24.565 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:24.565 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:24.565 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:24.565 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:24.565 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:24.565 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:24.565 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:24.823 [2024-07-13 11:32:59.406454] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.824 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:24.824 "name": "Existed_Raid", 00:21:24.824 "aliases": [ 00:21:24.824 "fe0601ee-4899-4df0-9ce8-e4d858242dbe" 00:21:24.824 ], 00:21:24.824 "product_name": "Raid Volume", 00:21:24.824 "block_size": 512, 00:21:24.824 "num_blocks": 63488, 00:21:24.824 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:24.824 "assigned_rate_limits": { 00:21:24.824 "rw_ios_per_sec": 0, 00:21:24.824 "rw_mbytes_per_sec": 0, 00:21:24.824 "r_mbytes_per_sec": 0, 00:21:24.824 "w_mbytes_per_sec": 0 00:21:24.824 }, 00:21:24.824 "claimed": false, 00:21:24.824 "zoned": false, 00:21:24.824 "supported_io_types": { 00:21:24.824 "read": true, 00:21:24.824 "write": true, 00:21:24.824 "unmap": false, 00:21:24.824 "flush": false, 00:21:24.824 "reset": true, 00:21:24.824 "nvme_admin": false, 00:21:24.824 "nvme_io": false, 00:21:24.824 "nvme_io_md": false, 00:21:24.824 "write_zeroes": true, 00:21:24.824 "zcopy": false, 00:21:24.824 "get_zone_info": false, 00:21:24.824 "zone_management": false, 00:21:24.824 "zone_append": false, 00:21:24.824 "compare": false, 00:21:24.824 "compare_and_write": false, 00:21:24.824 "abort": false, 00:21:24.824 "seek_hole": false, 00:21:24.824 "seek_data": false, 00:21:24.824 "copy": false, 00:21:24.824 "nvme_iov_md": false 00:21:24.824 }, 00:21:24.824 "memory_domains": [ 00:21:24.824 { 00:21:24.824 "dma_device_id": "system", 00:21:24.824 "dma_device_type": 1 00:21:24.824 }, 00:21:24.824 { 00:21:24.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.824 "dma_device_type": 2 00:21:24.824 }, 00:21:24.824 { 00:21:24.824 "dma_device_id": "system", 00:21:24.824 "dma_device_type": 1 00:21:24.824 }, 00:21:24.824 { 00:21:24.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.824 "dma_device_type": 2 00:21:24.824 }, 00:21:24.824 { 00:21:24.824 "dma_device_id": "system", 00:21:24.824 "dma_device_type": 1 00:21:24.824 }, 00:21:24.824 { 00:21:24.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:24.824 "dma_device_type": 2 00:21:24.824 } 00:21:24.824 ], 00:21:24.824 "driver_specific": { 00:21:24.824 "raid": { 00:21:24.824 "uuid": "fe0601ee-4899-4df0-9ce8-e4d858242dbe", 00:21:24.824 "strip_size_kb": 0, 00:21:24.824 "state": "online", 00:21:24.824 "raid_level": "raid1", 00:21:24.824 "superblock": true, 00:21:24.824 "num_base_bdevs": 3, 00:21:24.824 "num_base_bdevs_discovered": 3, 00:21:24.824 "num_base_bdevs_operational": 3, 
00:21:24.824 "base_bdevs_list": [ 00:21:24.824 { 00:21:24.824 "name": "NewBaseBdev", 00:21:24.824 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:24.824 "is_configured": true, 00:21:24.824 "data_offset": 2048, 00:21:24.824 "data_size": 63488 00:21:24.824 }, 00:21:24.824 { 00:21:24.824 "name": "BaseBdev2", 00:21:24.824 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:24.824 "is_configured": true, 00:21:24.824 "data_offset": 2048, 00:21:24.824 "data_size": 63488 00:21:24.824 }, 00:21:24.824 { 00:21:24.824 "name": "BaseBdev3", 00:21:24.824 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:24.824 "is_configured": true, 00:21:24.824 "data_offset": 2048, 00:21:24.824 "data_size": 63488 00:21:24.824 } 00:21:24.824 ] 00:21:24.824 } 00:21:24.824 } 00:21:24.824 }' 00:21:24.824 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:24.824 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:24.824 BaseBdev2 00:21:24.824 BaseBdev3' 00:21:24.824 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:24.824 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:24.824 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:25.083 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:25.083 "name": "NewBaseBdev", 00:21:25.083 "aliases": [ 00:21:25.083 "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9" 00:21:25.083 ], 00:21:25.083 "product_name": "Malloc disk", 00:21:25.083 "block_size": 512, 00:21:25.083 "num_blocks": 65536, 00:21:25.083 "uuid": "4ebe18ed-0aa9-44ad-b07d-c72a29ffa4e9", 00:21:25.083 "assigned_rate_limits": { 00:21:25.083 "rw_ios_per_sec": 0, 00:21:25.083 "rw_mbytes_per_sec": 0, 00:21:25.083 "r_mbytes_per_sec": 0, 00:21:25.083 "w_mbytes_per_sec": 0 00:21:25.083 }, 00:21:25.083 "claimed": true, 00:21:25.083 "claim_type": "exclusive_write", 00:21:25.083 "zoned": false, 00:21:25.083 "supported_io_types": { 00:21:25.083 "read": true, 00:21:25.083 "write": true, 00:21:25.083 "unmap": true, 00:21:25.083 "flush": true, 00:21:25.083 "reset": true, 00:21:25.083 "nvme_admin": false, 00:21:25.083 "nvme_io": false, 00:21:25.083 "nvme_io_md": false, 00:21:25.083 "write_zeroes": true, 00:21:25.083 "zcopy": true, 00:21:25.083 "get_zone_info": false, 00:21:25.083 "zone_management": false, 00:21:25.083 "zone_append": false, 00:21:25.083 "compare": false, 00:21:25.083 "compare_and_write": false, 00:21:25.083 "abort": true, 00:21:25.083 "seek_hole": false, 00:21:25.083 "seek_data": false, 00:21:25.083 "copy": true, 00:21:25.083 "nvme_iov_md": false 00:21:25.083 }, 00:21:25.083 "memory_domains": [ 00:21:25.083 { 00:21:25.083 "dma_device_id": "system", 00:21:25.083 "dma_device_type": 1 00:21:25.083 }, 00:21:25.083 { 00:21:25.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.083 "dma_device_type": 2 00:21:25.083 } 00:21:25.083 ], 00:21:25.083 "driver_specific": {} 00:21:25.083 }' 00:21:25.083 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.083 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.083 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:21:25.083 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.342 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.342 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:25.342 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.342 11:32:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:25.342 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:25.342 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.342 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:25.601 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:25.601 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:25.601 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:25.601 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:25.860 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:25.860 "name": "BaseBdev2", 00:21:25.860 "aliases": [ 00:21:25.860 "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f" 00:21:25.860 ], 00:21:25.860 "product_name": "Malloc disk", 00:21:25.860 "block_size": 512, 00:21:25.860 "num_blocks": 65536, 00:21:25.860 "uuid": "0f73507d-ebfb-4efd-84d6-31b79b4a0f7f", 00:21:25.860 "assigned_rate_limits": { 00:21:25.860 "rw_ios_per_sec": 0, 00:21:25.860 "rw_mbytes_per_sec": 0, 00:21:25.860 "r_mbytes_per_sec": 0, 00:21:25.860 "w_mbytes_per_sec": 0 00:21:25.860 }, 00:21:25.860 "claimed": true, 00:21:25.860 "claim_type": "exclusive_write", 00:21:25.860 "zoned": false, 00:21:25.860 "supported_io_types": { 00:21:25.860 "read": true, 00:21:25.860 "write": true, 00:21:25.860 "unmap": true, 00:21:25.860 "flush": true, 00:21:25.860 "reset": true, 00:21:25.860 "nvme_admin": false, 00:21:25.860 "nvme_io": false, 00:21:25.860 "nvme_io_md": false, 00:21:25.860 "write_zeroes": true, 00:21:25.860 "zcopy": true, 00:21:25.860 "get_zone_info": false, 00:21:25.860 "zone_management": false, 00:21:25.860 "zone_append": false, 00:21:25.860 "compare": false, 00:21:25.860 "compare_and_write": false, 00:21:25.860 "abort": true, 00:21:25.860 "seek_hole": false, 00:21:25.860 "seek_data": false, 00:21:25.860 "copy": true, 00:21:25.860 "nvme_iov_md": false 00:21:25.860 }, 00:21:25.860 "memory_domains": [ 00:21:25.860 { 00:21:25.860 "dma_device_id": "system", 00:21:25.860 "dma_device_type": 1 00:21:25.860 }, 00:21:25.860 { 00:21:25.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:25.860 "dma_device_type": 2 00:21:25.860 } 00:21:25.860 ], 00:21:25.860 "driver_specific": {} 00:21:25.860 }' 00:21:25.860 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.860 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:25.860 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:25.860 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:25.860 11:33:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:26.118 11:33:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:26.378 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:26.378 "name": "BaseBdev3", 00:21:26.378 "aliases": [ 00:21:26.378 "ca94aa8c-22b3-4ded-b2fc-8481760ca406" 00:21:26.378 ], 00:21:26.378 "product_name": "Malloc disk", 00:21:26.378 "block_size": 512, 00:21:26.378 "num_blocks": 65536, 00:21:26.378 "uuid": "ca94aa8c-22b3-4ded-b2fc-8481760ca406", 00:21:26.378 "assigned_rate_limits": { 00:21:26.378 "rw_ios_per_sec": 0, 00:21:26.378 "rw_mbytes_per_sec": 0, 00:21:26.378 "r_mbytes_per_sec": 0, 00:21:26.378 "w_mbytes_per_sec": 0 00:21:26.378 }, 00:21:26.378 "claimed": true, 00:21:26.378 "claim_type": "exclusive_write", 00:21:26.378 "zoned": false, 00:21:26.378 "supported_io_types": { 00:21:26.378 "read": true, 00:21:26.378 "write": true, 00:21:26.378 "unmap": true, 00:21:26.378 "flush": true, 00:21:26.378 "reset": true, 00:21:26.378 "nvme_admin": false, 00:21:26.378 "nvme_io": false, 00:21:26.378 "nvme_io_md": false, 00:21:26.378 "write_zeroes": true, 00:21:26.378 "zcopy": true, 00:21:26.378 "get_zone_info": false, 00:21:26.378 "zone_management": false, 00:21:26.378 "zone_append": false, 00:21:26.378 "compare": false, 00:21:26.378 "compare_and_write": false, 00:21:26.378 "abort": true, 00:21:26.378 "seek_hole": false, 00:21:26.378 "seek_data": false, 00:21:26.378 "copy": true, 00:21:26.378 "nvme_iov_md": false 00:21:26.378 }, 00:21:26.378 "memory_domains": [ 00:21:26.378 { 00:21:26.378 "dma_device_id": "system", 00:21:26.378 "dma_device_type": 1 00:21:26.378 }, 00:21:26.378 { 00:21:26.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:26.378 "dma_device_type": 2 00:21:26.378 } 00:21:26.378 ], 00:21:26.378 "driver_specific": {} 00:21:26.378 }' 00:21:26.378 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.378 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:26.637 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:26.637 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.637 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:26.637 11:33:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:26.637 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.637 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:26.637 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:26.637 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.895 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:26.895 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:26.896 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:27.154 [2024-07-13 11:33:01.722601] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:27.154 [2024-07-13 11:33:01.722631] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.154 [2024-07-13 11:33:01.722699] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.154 [2024-07-13 11:33:01.723016] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:27.154 [2024-07-13 11:33:01.723036] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 132231 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 132231 ']' 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 132231 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 132231 00:21:27.154 killing process with pid 132231 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 132231' 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 132231 00:21:27.154 11:33:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 132231 00:21:27.154 [2024-07-13 11:33:01.758179] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:27.413 [2024-07-13 11:33:01.946570] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:28.348 ************************************ 00:21:28.348 END TEST raid_state_function_test_sb 00:21:28.348 ************************************ 00:21:28.348 11:33:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:28.348 00:21:28.348 real 0m29.459s 00:21:28.348 user 0m55.836s 00:21:28.348 sys 0m2.984s 00:21:28.348 11:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:21:28.348 11:33:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.348 11:33:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:28.348 11:33:02 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:21:28.348 11:33:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:28.348 11:33:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.348 11:33:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:28.348 ************************************ 00:21:28.348 START TEST raid_superblock_test 00:21:28.348 ************************************ 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=133249 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 133249 /var/tmp/spdk-raid.sock 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 133249 ']' 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
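Note for reading the raid_superblock_test trace that starts here: the test drives everything through rpc.py against the /var/tmp/spdk-raid.sock socket opened by the bdev_svc app launched above. The setup it performs reduces to roughly the following RPC sequence (socket path, bdev names, sizes and options are copied from the trace below; this is a sketch to orient the reader, not part of the captured run, and it assumes bdev_svc is already listening on the socket):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # one malloc + passthru pair per base bdev; repeated for malloc2/pt2 and malloc3/pt3
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  # assemble a raid1 volume with an on-disk superblock (-s) and inspect its state
  $rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'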
00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.348 11:33:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:28.348 [2024-07-13 11:33:03.001545] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:28.348 [2024-07-13 11:33:03.001996] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133249 ] 00:21:28.606 [2024-07-13 11:33:03.171338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.864 [2024-07-13 11:33:03.398720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.864 [2024-07-13 11:33:03.583003] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:29.122 11:33:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:29.380 malloc1 00:21:29.380 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:29.638 [2024-07-13 11:33:04.305156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:29.638 [2024-07-13 11:33:04.305268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.638 [2024-07-13 11:33:04.305301] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:29.638 [2024-07-13 11:33:04.305320] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.638 [2024-07-13 11:33:04.307534] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.638 [2024-07-13 11:33:04.307579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:29.638 pt1 00:21:29.638 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:29.638 11:33:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:29.638 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:21:29.638 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:21:29.638 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:29.638 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:29.638 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:29.638 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:29.638 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:29.896 malloc2 00:21:29.896 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:30.154 [2024-07-13 11:33:04.732704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:30.154 [2024-07-13 11:33:04.732798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.154 [2024-07-13 11:33:04.732832] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:21:30.154 [2024-07-13 11:33:04.732850] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.154 [2024-07-13 11:33:04.735012] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.154 [2024-07-13 11:33:04.735057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:30.154 pt2 00:21:30.154 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:30.154 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:30.154 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:30.154 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:30.154 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:30.154 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:30.154 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:30.154 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:30.154 11:33:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:30.412 malloc3 00:21:30.412 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:30.670 [2024-07-13 11:33:05.196931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:30.670 [2024-07-13 11:33:05.197021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.670 
[2024-07-13 11:33:05.197052] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:21:30.670 [2024-07-13 11:33:05.197076] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.670 [2024-07-13 11:33:05.199273] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.670 [2024-07-13 11:33:05.199323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:30.670 pt3 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:30.670 [2024-07-13 11:33:05.393013] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:30.670 [2024-07-13 11:33:05.394579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:30.670 [2024-07-13 11:33:05.394662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:30.670 [2024-07-13 11:33:05.394871] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:21:30.670 [2024-07-13 11:33:05.394891] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:30.670 [2024-07-13 11:33:05.395001] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:30.670 [2024-07-13 11:33:05.395334] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:21:30.670 [2024-07-13 11:33:05.395355] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:21:30.670 [2024-07-13 11:33:05.395478] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.670 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.928 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:21:30.928 "name": "raid_bdev1", 00:21:30.928 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:30.928 "strip_size_kb": 0, 00:21:30.928 "state": "online", 00:21:30.928 "raid_level": "raid1", 00:21:30.928 "superblock": true, 00:21:30.928 "num_base_bdevs": 3, 00:21:30.929 "num_base_bdevs_discovered": 3, 00:21:30.929 "num_base_bdevs_operational": 3, 00:21:30.929 "base_bdevs_list": [ 00:21:30.929 { 00:21:30.929 "name": "pt1", 00:21:30.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:30.929 "is_configured": true, 00:21:30.929 "data_offset": 2048, 00:21:30.929 "data_size": 63488 00:21:30.929 }, 00:21:30.929 { 00:21:30.929 "name": "pt2", 00:21:30.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:30.929 "is_configured": true, 00:21:30.929 "data_offset": 2048, 00:21:30.929 "data_size": 63488 00:21:30.929 }, 00:21:30.929 { 00:21:30.929 "name": "pt3", 00:21:30.929 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:30.929 "is_configured": true, 00:21:30.929 "data_offset": 2048, 00:21:30.929 "data_size": 63488 00:21:30.929 } 00:21:30.929 ] 00:21:30.929 }' 00:21:30.929 11:33:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.929 11:33:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.494 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:31.494 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:31.494 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:31.494 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:31.494 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:31.494 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:31.494 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:31.495 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:31.753 [2024-07-13 11:33:06.365344] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:31.753 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:31.753 "name": "raid_bdev1", 00:21:31.753 "aliases": [ 00:21:31.753 "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c" 00:21:31.753 ], 00:21:31.753 "product_name": "Raid Volume", 00:21:31.753 "block_size": 512, 00:21:31.753 "num_blocks": 63488, 00:21:31.753 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:31.753 "assigned_rate_limits": { 00:21:31.753 "rw_ios_per_sec": 0, 00:21:31.753 "rw_mbytes_per_sec": 0, 00:21:31.753 "r_mbytes_per_sec": 0, 00:21:31.753 "w_mbytes_per_sec": 0 00:21:31.753 }, 00:21:31.753 "claimed": false, 00:21:31.753 "zoned": false, 00:21:31.753 "supported_io_types": { 00:21:31.753 "read": true, 00:21:31.753 "write": true, 00:21:31.753 "unmap": false, 00:21:31.753 "flush": false, 00:21:31.753 "reset": true, 00:21:31.753 "nvme_admin": false, 00:21:31.753 "nvme_io": false, 00:21:31.753 "nvme_io_md": false, 00:21:31.753 "write_zeroes": true, 00:21:31.753 "zcopy": false, 00:21:31.753 "get_zone_info": false, 00:21:31.753 "zone_management": false, 00:21:31.753 "zone_append": false, 00:21:31.753 "compare": false, 00:21:31.753 "compare_and_write": false, 00:21:31.753 "abort": false, 00:21:31.753 
"seek_hole": false, 00:21:31.753 "seek_data": false, 00:21:31.753 "copy": false, 00:21:31.753 "nvme_iov_md": false 00:21:31.753 }, 00:21:31.753 "memory_domains": [ 00:21:31.753 { 00:21:31.753 "dma_device_id": "system", 00:21:31.753 "dma_device_type": 1 00:21:31.753 }, 00:21:31.753 { 00:21:31.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.753 "dma_device_type": 2 00:21:31.753 }, 00:21:31.753 { 00:21:31.753 "dma_device_id": "system", 00:21:31.753 "dma_device_type": 1 00:21:31.753 }, 00:21:31.753 { 00:21:31.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.753 "dma_device_type": 2 00:21:31.753 }, 00:21:31.753 { 00:21:31.753 "dma_device_id": "system", 00:21:31.753 "dma_device_type": 1 00:21:31.753 }, 00:21:31.753 { 00:21:31.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.753 "dma_device_type": 2 00:21:31.753 } 00:21:31.753 ], 00:21:31.753 "driver_specific": { 00:21:31.753 "raid": { 00:21:31.753 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:31.753 "strip_size_kb": 0, 00:21:31.753 "state": "online", 00:21:31.753 "raid_level": "raid1", 00:21:31.753 "superblock": true, 00:21:31.753 "num_base_bdevs": 3, 00:21:31.753 "num_base_bdevs_discovered": 3, 00:21:31.753 "num_base_bdevs_operational": 3, 00:21:31.753 "base_bdevs_list": [ 00:21:31.753 { 00:21:31.753 "name": "pt1", 00:21:31.753 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:31.753 "is_configured": true, 00:21:31.753 "data_offset": 2048, 00:21:31.753 "data_size": 63488 00:21:31.753 }, 00:21:31.753 { 00:21:31.753 "name": "pt2", 00:21:31.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:31.753 "is_configured": true, 00:21:31.753 "data_offset": 2048, 00:21:31.754 "data_size": 63488 00:21:31.754 }, 00:21:31.754 { 00:21:31.754 "name": "pt3", 00:21:31.754 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:31.754 "is_configured": true, 00:21:31.754 "data_offset": 2048, 00:21:31.754 "data_size": 63488 00:21:31.754 } 00:21:31.754 ] 00:21:31.754 } 00:21:31.754 } 00:21:31.754 }' 00:21:31.754 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:31.754 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:31.754 pt2 00:21:31.754 pt3' 00:21:31.754 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:31.754 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:31.754 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:32.012 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:32.012 "name": "pt1", 00:21:32.012 "aliases": [ 00:21:32.012 "00000000-0000-0000-0000-000000000001" 00:21:32.012 ], 00:21:32.012 "product_name": "passthru", 00:21:32.012 "block_size": 512, 00:21:32.012 "num_blocks": 65536, 00:21:32.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:32.012 "assigned_rate_limits": { 00:21:32.012 "rw_ios_per_sec": 0, 00:21:32.012 "rw_mbytes_per_sec": 0, 00:21:32.012 "r_mbytes_per_sec": 0, 00:21:32.012 "w_mbytes_per_sec": 0 00:21:32.012 }, 00:21:32.012 "claimed": true, 00:21:32.012 "claim_type": "exclusive_write", 00:21:32.012 "zoned": false, 00:21:32.012 "supported_io_types": { 00:21:32.012 "read": true, 00:21:32.012 "write": true, 00:21:32.012 "unmap": true, 00:21:32.012 "flush": true, 00:21:32.012 "reset": true, 
00:21:32.012 "nvme_admin": false, 00:21:32.012 "nvme_io": false, 00:21:32.012 "nvme_io_md": false, 00:21:32.012 "write_zeroes": true, 00:21:32.012 "zcopy": true, 00:21:32.012 "get_zone_info": false, 00:21:32.012 "zone_management": false, 00:21:32.012 "zone_append": false, 00:21:32.012 "compare": false, 00:21:32.012 "compare_and_write": false, 00:21:32.012 "abort": true, 00:21:32.012 "seek_hole": false, 00:21:32.012 "seek_data": false, 00:21:32.012 "copy": true, 00:21:32.012 "nvme_iov_md": false 00:21:32.012 }, 00:21:32.012 "memory_domains": [ 00:21:32.012 { 00:21:32.012 "dma_device_id": "system", 00:21:32.012 "dma_device_type": 1 00:21:32.012 }, 00:21:32.012 { 00:21:32.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.012 "dma_device_type": 2 00:21:32.012 } 00:21:32.012 ], 00:21:32.012 "driver_specific": { 00:21:32.012 "passthru": { 00:21:32.012 "name": "pt1", 00:21:32.012 "base_bdev_name": "malloc1" 00:21:32.012 } 00:21:32.012 } 00:21:32.012 }' 00:21:32.012 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.012 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.287 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:32.287 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.287 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.287 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:32.287 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.287 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.287 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:32.287 11:33:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.287 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:32.545 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:32.545 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:32.545 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:32.545 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:32.545 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:32.545 "name": "pt2", 00:21:32.545 "aliases": [ 00:21:32.545 "00000000-0000-0000-0000-000000000002" 00:21:32.545 ], 00:21:32.545 "product_name": "passthru", 00:21:32.545 "block_size": 512, 00:21:32.545 "num_blocks": 65536, 00:21:32.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:32.545 "assigned_rate_limits": { 00:21:32.545 "rw_ios_per_sec": 0, 00:21:32.545 "rw_mbytes_per_sec": 0, 00:21:32.545 "r_mbytes_per_sec": 0, 00:21:32.545 "w_mbytes_per_sec": 0 00:21:32.545 }, 00:21:32.545 "claimed": true, 00:21:32.545 "claim_type": "exclusive_write", 00:21:32.545 "zoned": false, 00:21:32.545 "supported_io_types": { 00:21:32.545 "read": true, 00:21:32.545 "write": true, 00:21:32.545 "unmap": true, 00:21:32.545 "flush": true, 00:21:32.545 "reset": true, 00:21:32.545 "nvme_admin": false, 00:21:32.545 "nvme_io": false, 00:21:32.545 "nvme_io_md": false, 00:21:32.545 "write_zeroes": true, 00:21:32.545 
"zcopy": true, 00:21:32.545 "get_zone_info": false, 00:21:32.545 "zone_management": false, 00:21:32.545 "zone_append": false, 00:21:32.545 "compare": false, 00:21:32.545 "compare_and_write": false, 00:21:32.545 "abort": true, 00:21:32.546 "seek_hole": false, 00:21:32.546 "seek_data": false, 00:21:32.546 "copy": true, 00:21:32.546 "nvme_iov_md": false 00:21:32.546 }, 00:21:32.546 "memory_domains": [ 00:21:32.546 { 00:21:32.546 "dma_device_id": "system", 00:21:32.546 "dma_device_type": 1 00:21:32.546 }, 00:21:32.546 { 00:21:32.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.546 "dma_device_type": 2 00:21:32.546 } 00:21:32.546 ], 00:21:32.546 "driver_specific": { 00:21:32.546 "passthru": { 00:21:32.546 "name": "pt2", 00:21:32.546 "base_bdev_name": "malloc2" 00:21:32.546 } 00:21:32.546 } 00:21:32.546 }' 00:21:32.546 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.803 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:32.803 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:32.804 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.804 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:32.804 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:32.804 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:32.804 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.062 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:33.062 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.062 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.062 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:33.062 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:33.062 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:33.062 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:33.320 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:33.320 "name": "pt3", 00:21:33.320 "aliases": [ 00:21:33.320 "00000000-0000-0000-0000-000000000003" 00:21:33.320 ], 00:21:33.320 "product_name": "passthru", 00:21:33.320 "block_size": 512, 00:21:33.320 "num_blocks": 65536, 00:21:33.320 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:33.320 "assigned_rate_limits": { 00:21:33.320 "rw_ios_per_sec": 0, 00:21:33.320 "rw_mbytes_per_sec": 0, 00:21:33.320 "r_mbytes_per_sec": 0, 00:21:33.320 "w_mbytes_per_sec": 0 00:21:33.320 }, 00:21:33.320 "claimed": true, 00:21:33.320 "claim_type": "exclusive_write", 00:21:33.320 "zoned": false, 00:21:33.320 "supported_io_types": { 00:21:33.320 "read": true, 00:21:33.320 "write": true, 00:21:33.320 "unmap": true, 00:21:33.320 "flush": true, 00:21:33.320 "reset": true, 00:21:33.320 "nvme_admin": false, 00:21:33.320 "nvme_io": false, 00:21:33.320 "nvme_io_md": false, 00:21:33.320 "write_zeroes": true, 00:21:33.320 "zcopy": true, 00:21:33.320 "get_zone_info": false, 00:21:33.320 "zone_management": false, 00:21:33.320 "zone_append": false, 00:21:33.320 "compare": 
false, 00:21:33.320 "compare_and_write": false, 00:21:33.320 "abort": true, 00:21:33.320 "seek_hole": false, 00:21:33.320 "seek_data": false, 00:21:33.320 "copy": true, 00:21:33.320 "nvme_iov_md": false 00:21:33.320 }, 00:21:33.320 "memory_domains": [ 00:21:33.320 { 00:21:33.320 "dma_device_id": "system", 00:21:33.320 "dma_device_type": 1 00:21:33.320 }, 00:21:33.320 { 00:21:33.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.320 "dma_device_type": 2 00:21:33.320 } 00:21:33.320 ], 00:21:33.320 "driver_specific": { 00:21:33.320 "passthru": { 00:21:33.320 "name": "pt3", 00:21:33.320 "base_bdev_name": "malloc3" 00:21:33.320 } 00:21:33.320 } 00:21:33.320 }' 00:21:33.320 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.320 11:33:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.320 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:33.320 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.581 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.581 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:33.581 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.581 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.581 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:33.581 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.581 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.839 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:33.839 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:33.839 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:34.098 [2024-07-13 11:33:08.597711] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:34.098 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3d4a5371-d5dd-41b8-aee9-030a90e2bd1c 00:21:34.098 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3d4a5371-d5dd-41b8-aee9-030a90e2bd1c ']' 00:21:34.098 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:34.356 [2024-07-13 11:33:08.873548] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:34.356 [2024-07-13 11:33:08.873574] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:34.356 [2024-07-13 11:33:08.873662] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:34.356 [2024-07-13 11:33:08.873740] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:34.356 [2024-07-13 11:33:08.873751] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:21:34.356 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:21:34.356 11:33:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:34.614 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:34.614 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:34.614 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:34.614 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:34.872 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:34.872 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:34.872 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:34.872 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:35.130 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:35.130 11:33:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:35.388 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:35.645 [2024-07-13 11:33:10.239218] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:35.645 [2024-07-13 11:33:10.240954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:35.645 [2024-07-13 11:33:10.241023] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:35.645 [2024-07-13 11:33:10.241084] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:35.645 [2024-07-13 11:33:10.241166] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:35.645 [2024-07-13 11:33:10.241233] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:35.645 [2024-07-13 11:33:10.241278] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:35.645 [2024-07-13 11:33:10.241289] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:21:35.645 request: 00:21:35.645 { 00:21:35.645 "name": "raid_bdev1", 00:21:35.645 "raid_level": "raid1", 00:21:35.645 "base_bdevs": [ 00:21:35.645 "malloc1", 00:21:35.645 "malloc2", 00:21:35.645 "malloc3" 00:21:35.645 ], 00:21:35.645 "superblock": false, 00:21:35.645 "method": "bdev_raid_create", 00:21:35.645 "req_id": 1 00:21:35.645 } 00:21:35.645 Got JSON-RPC error response 00:21:35.645 response: 00:21:35.645 { 00:21:35.645 "code": -17, 00:21:35.645 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:35.645 } 00:21:35.645 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:35.645 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.645 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.645 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.645 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.645 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:35.903 [2024-07-13 11:33:10.631440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:35.903 [2024-07-13 11:33:10.631494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:35.903 [2024-07-13 11:33:10.631526] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:35.903 [2024-07-13 11:33:10.631545] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:35.903 [2024-07-13 11:33:10.633509] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:35.903 [2024-07-13 11:33:10.633550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:35.903 [2024-07-13 11:33:10.633639] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:35.903 [2024-07-13 11:33:10.633689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:35.903 pt1 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.903 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.160 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.160 "name": "raid_bdev1", 00:21:36.160 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:36.160 "strip_size_kb": 0, 00:21:36.160 "state": "configuring", 00:21:36.160 "raid_level": "raid1", 00:21:36.160 "superblock": true, 00:21:36.160 "num_base_bdevs": 3, 00:21:36.160 "num_base_bdevs_discovered": 1, 00:21:36.160 "num_base_bdevs_operational": 3, 00:21:36.160 "base_bdevs_list": [ 00:21:36.160 { 00:21:36.160 "name": "pt1", 00:21:36.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:36.161 "is_configured": true, 00:21:36.161 "data_offset": 2048, 00:21:36.161 "data_size": 63488 00:21:36.161 }, 00:21:36.161 { 00:21:36.161 "name": null, 00:21:36.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:36.161 "is_configured": false, 00:21:36.161 "data_offset": 2048, 00:21:36.161 "data_size": 63488 00:21:36.161 }, 00:21:36.161 { 00:21:36.161 "name": null, 00:21:36.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:36.161 "is_configured": false, 00:21:36.161 "data_offset": 2048, 00:21:36.161 "data_size": 63488 00:21:36.161 } 00:21:36.161 ] 00:21:36.161 }' 00:21:36.161 11:33:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.161 11:33:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.094 11:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:21:37.094 11:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:37.094 [2024-07-13 11:33:11.775640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:37.094 [2024-07-13 11:33:11.775717] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.094 [2024-07-13 11:33:11.775759] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:37.094 [2024-07-13 11:33:11.775777] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.094 [2024-07-13 11:33:11.776212] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.094 [2024-07-13 11:33:11.776258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:37.094 [2024-07-13 11:33:11.776358] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:37.094 [2024-07-13 11:33:11.776405] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:37.094 pt2 00:21:37.094 11:33:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:37.352 [2024-07-13 11:33:12.047708] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.352 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:37.609 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.609 "name": "raid_bdev1", 00:21:37.609 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:37.609 "strip_size_kb": 0, 00:21:37.609 "state": "configuring", 00:21:37.609 "raid_level": "raid1", 00:21:37.609 "superblock": true, 00:21:37.609 "num_base_bdevs": 3, 00:21:37.609 "num_base_bdevs_discovered": 1, 00:21:37.609 "num_base_bdevs_operational": 3, 00:21:37.609 "base_bdevs_list": [ 00:21:37.609 { 00:21:37.609 "name": "pt1", 00:21:37.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:37.609 "is_configured": true, 00:21:37.609 "data_offset": 2048, 00:21:37.609 "data_size": 63488 00:21:37.609 }, 00:21:37.609 { 00:21:37.609 "name": null, 00:21:37.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:37.610 "is_configured": false, 00:21:37.610 "data_offset": 2048, 00:21:37.610 "data_size": 63488 00:21:37.610 }, 00:21:37.610 { 00:21:37.610 "name": null, 00:21:37.610 "uuid": "00000000-0000-0000-0000-000000000003", 
00:21:37.610 "is_configured": false, 00:21:37.610 "data_offset": 2048, 00:21:37.610 "data_size": 63488 00:21:37.610 } 00:21:37.610 ] 00:21:37.610 }' 00:21:37.610 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.610 11:33:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.544 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:38.544 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:38.544 11:33:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:38.544 [2024-07-13 11:33:13.225010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:38.544 [2024-07-13 11:33:13.225072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.544 [2024-07-13 11:33:13.225097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:38.544 [2024-07-13 11:33:13.225121] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.544 [2024-07-13 11:33:13.225478] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.544 [2024-07-13 11:33:13.225519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:38.544 [2024-07-13 11:33:13.225597] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:38.544 [2024-07-13 11:33:13.225619] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:38.544 pt2 00:21:38.544 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:38.544 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:38.544 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:38.803 [2024-07-13 11:33:13.473056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:38.803 [2024-07-13 11:33:13.473123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.803 [2024-07-13 11:33:13.473149] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:38.803 [2024-07-13 11:33:13.473170] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.803 [2024-07-13 11:33:13.473546] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.803 [2024-07-13 11:33:13.473582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:38.803 [2024-07-13 11:33:13.473663] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:38.803 [2024-07-13 11:33:13.473687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:38.803 [2024-07-13 11:33:13.473801] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:21:38.803 [2024-07-13 11:33:13.473821] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:38.803 [2024-07-13 11:33:13.473911] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 
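Note on the re-assembly visible around this point: raid_bdev1 was created earlier with the -s superblock option, so re-creating the pt2 and pt3 passthru bdevs is enough for the raid module to examine them ("raid superblock found on bdev ..."), re-claim them, and bring raid_bdev1 back online without a second bdev_raid_create call, as the DEBUG lines here show; verify_raid_bdev_state then confirms the transition from "configuring" to "online" over RPC. A minimal check of that state, using the same RPC call and jq filter as the verify helpers in this trace (the expected value is taken from the JSON dumped below), would be:

  # prints "online" once all three base bdevs have been claimed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state'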
00:21:38.803 [2024-07-13 11:33:13.474199] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:21:38.803 [2024-07-13 11:33:13.474220] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:21:38.803 [2024-07-13 11:33:13.474344] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:38.803 pt3 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.803 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.062 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.062 "name": "raid_bdev1", 00:21:39.062 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:39.062 "strip_size_kb": 0, 00:21:39.062 "state": "online", 00:21:39.062 "raid_level": "raid1", 00:21:39.062 "superblock": true, 00:21:39.062 "num_base_bdevs": 3, 00:21:39.062 "num_base_bdevs_discovered": 3, 00:21:39.062 "num_base_bdevs_operational": 3, 00:21:39.062 "base_bdevs_list": [ 00:21:39.062 { 00:21:39.062 "name": "pt1", 00:21:39.062 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.062 "is_configured": true, 00:21:39.062 "data_offset": 2048, 00:21:39.062 "data_size": 63488 00:21:39.062 }, 00:21:39.062 { 00:21:39.062 "name": "pt2", 00:21:39.062 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.062 "is_configured": true, 00:21:39.062 "data_offset": 2048, 00:21:39.062 "data_size": 63488 00:21:39.062 }, 00:21:39.062 { 00:21:39.062 "name": "pt3", 00:21:39.062 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.062 "is_configured": true, 00:21:39.062 "data_offset": 2048, 00:21:39.062 "data_size": 63488 00:21:39.062 } 00:21:39.062 ] 00:21:39.062 }' 00:21:39.062 11:33:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.062 11:33:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:39.998 11:33:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:39.998 [2024-07-13 11:33:14.653518] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:39.998 "name": "raid_bdev1", 00:21:39.998 "aliases": [ 00:21:39.998 "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c" 00:21:39.998 ], 00:21:39.998 "product_name": "Raid Volume", 00:21:39.998 "block_size": 512, 00:21:39.998 "num_blocks": 63488, 00:21:39.998 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:39.998 "assigned_rate_limits": { 00:21:39.998 "rw_ios_per_sec": 0, 00:21:39.998 "rw_mbytes_per_sec": 0, 00:21:39.998 "r_mbytes_per_sec": 0, 00:21:39.998 "w_mbytes_per_sec": 0 00:21:39.998 }, 00:21:39.998 "claimed": false, 00:21:39.998 "zoned": false, 00:21:39.998 "supported_io_types": { 00:21:39.998 "read": true, 00:21:39.998 "write": true, 00:21:39.998 "unmap": false, 00:21:39.998 "flush": false, 00:21:39.998 "reset": true, 00:21:39.998 "nvme_admin": false, 00:21:39.998 "nvme_io": false, 00:21:39.998 "nvme_io_md": false, 00:21:39.998 "write_zeroes": true, 00:21:39.998 "zcopy": false, 00:21:39.998 "get_zone_info": false, 00:21:39.998 "zone_management": false, 00:21:39.998 "zone_append": false, 00:21:39.998 "compare": false, 00:21:39.998 "compare_and_write": false, 00:21:39.998 "abort": false, 00:21:39.998 "seek_hole": false, 00:21:39.998 "seek_data": false, 00:21:39.998 "copy": false, 00:21:39.998 "nvme_iov_md": false 00:21:39.998 }, 00:21:39.998 "memory_domains": [ 00:21:39.998 { 00:21:39.998 "dma_device_id": "system", 00:21:39.998 "dma_device_type": 1 00:21:39.998 }, 00:21:39.998 { 00:21:39.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.998 "dma_device_type": 2 00:21:39.998 }, 00:21:39.998 { 00:21:39.998 "dma_device_id": "system", 00:21:39.998 "dma_device_type": 1 00:21:39.998 }, 00:21:39.998 { 00:21:39.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.998 "dma_device_type": 2 00:21:39.998 }, 00:21:39.998 { 00:21:39.998 "dma_device_id": "system", 00:21:39.998 "dma_device_type": 1 00:21:39.998 }, 00:21:39.998 { 00:21:39.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.998 "dma_device_type": 2 00:21:39.998 } 00:21:39.998 ], 00:21:39.998 "driver_specific": { 00:21:39.998 "raid": { 00:21:39.998 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:39.998 "strip_size_kb": 0, 00:21:39.998 "state": "online", 00:21:39.998 "raid_level": "raid1", 00:21:39.998 "superblock": true, 00:21:39.998 "num_base_bdevs": 3, 00:21:39.998 "num_base_bdevs_discovered": 3, 00:21:39.998 "num_base_bdevs_operational": 3, 00:21:39.998 "base_bdevs_list": [ 00:21:39.998 { 00:21:39.998 "name": "pt1", 00:21:39.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.998 "is_configured": true, 00:21:39.998 "data_offset": 
2048, 00:21:39.998 "data_size": 63488 00:21:39.998 }, 00:21:39.998 { 00:21:39.998 "name": "pt2", 00:21:39.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.998 "is_configured": true, 00:21:39.998 "data_offset": 2048, 00:21:39.998 "data_size": 63488 00:21:39.998 }, 00:21:39.998 { 00:21:39.998 "name": "pt3", 00:21:39.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:39.998 "is_configured": true, 00:21:39.998 "data_offset": 2048, 00:21:39.998 "data_size": 63488 00:21:39.998 } 00:21:39.998 ] 00:21:39.998 } 00:21:39.998 } 00:21:39.998 }' 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:39.998 pt2 00:21:39.998 pt3' 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:39.998 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:40.261 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:40.261 "name": "pt1", 00:21:40.261 "aliases": [ 00:21:40.261 "00000000-0000-0000-0000-000000000001" 00:21:40.261 ], 00:21:40.261 "product_name": "passthru", 00:21:40.261 "block_size": 512, 00:21:40.261 "num_blocks": 65536, 00:21:40.261 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.261 "assigned_rate_limits": { 00:21:40.261 "rw_ios_per_sec": 0, 00:21:40.261 "rw_mbytes_per_sec": 0, 00:21:40.261 "r_mbytes_per_sec": 0, 00:21:40.261 "w_mbytes_per_sec": 0 00:21:40.261 }, 00:21:40.261 "claimed": true, 00:21:40.261 "claim_type": "exclusive_write", 00:21:40.261 "zoned": false, 00:21:40.261 "supported_io_types": { 00:21:40.261 "read": true, 00:21:40.261 "write": true, 00:21:40.261 "unmap": true, 00:21:40.261 "flush": true, 00:21:40.261 "reset": true, 00:21:40.261 "nvme_admin": false, 00:21:40.261 "nvme_io": false, 00:21:40.261 "nvme_io_md": false, 00:21:40.261 "write_zeroes": true, 00:21:40.261 "zcopy": true, 00:21:40.261 "get_zone_info": false, 00:21:40.261 "zone_management": false, 00:21:40.261 "zone_append": false, 00:21:40.261 "compare": false, 00:21:40.261 "compare_and_write": false, 00:21:40.261 "abort": true, 00:21:40.261 "seek_hole": false, 00:21:40.261 "seek_data": false, 00:21:40.261 "copy": true, 00:21:40.261 "nvme_iov_md": false 00:21:40.261 }, 00:21:40.261 "memory_domains": [ 00:21:40.261 { 00:21:40.261 "dma_device_id": "system", 00:21:40.261 "dma_device_type": 1 00:21:40.261 }, 00:21:40.261 { 00:21:40.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.261 "dma_device_type": 2 00:21:40.261 } 00:21:40.261 ], 00:21:40.261 "driver_specific": { 00:21:40.261 "passthru": { 00:21:40.261 "name": "pt1", 00:21:40.261 "base_bdev_name": "malloc1" 00:21:40.261 } 00:21:40.261 } 00:21:40.261 }' 00:21:40.261 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.261 11:33:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:40.529 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:40.529 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:40.529 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:21:40.529 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:40.529 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.529 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:40.529 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:40.529 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.787 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:40.787 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:40.787 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:40.787 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:40.787 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:41.046 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:41.046 "name": "pt2", 00:21:41.046 "aliases": [ 00:21:41.046 "00000000-0000-0000-0000-000000000002" 00:21:41.046 ], 00:21:41.046 "product_name": "passthru", 00:21:41.046 "block_size": 512, 00:21:41.046 "num_blocks": 65536, 00:21:41.046 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.046 "assigned_rate_limits": { 00:21:41.046 "rw_ios_per_sec": 0, 00:21:41.046 "rw_mbytes_per_sec": 0, 00:21:41.046 "r_mbytes_per_sec": 0, 00:21:41.046 "w_mbytes_per_sec": 0 00:21:41.046 }, 00:21:41.046 "claimed": true, 00:21:41.046 "claim_type": "exclusive_write", 00:21:41.046 "zoned": false, 00:21:41.046 "supported_io_types": { 00:21:41.046 "read": true, 00:21:41.046 "write": true, 00:21:41.046 "unmap": true, 00:21:41.046 "flush": true, 00:21:41.046 "reset": true, 00:21:41.046 "nvme_admin": false, 00:21:41.046 "nvme_io": false, 00:21:41.046 "nvme_io_md": false, 00:21:41.046 "write_zeroes": true, 00:21:41.046 "zcopy": true, 00:21:41.046 "get_zone_info": false, 00:21:41.046 "zone_management": false, 00:21:41.046 "zone_append": false, 00:21:41.046 "compare": false, 00:21:41.046 "compare_and_write": false, 00:21:41.046 "abort": true, 00:21:41.046 "seek_hole": false, 00:21:41.046 "seek_data": false, 00:21:41.046 "copy": true, 00:21:41.046 "nvme_iov_md": false 00:21:41.046 }, 00:21:41.046 "memory_domains": [ 00:21:41.046 { 00:21:41.046 "dma_device_id": "system", 00:21:41.046 "dma_device_type": 1 00:21:41.046 }, 00:21:41.046 { 00:21:41.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.046 "dma_device_type": 2 00:21:41.046 } 00:21:41.046 ], 00:21:41.046 "driver_specific": { 00:21:41.046 "passthru": { 00:21:41.046 "name": "pt2", 00:21:41.046 "base_bdev_name": "malloc2" 00:21:41.046 } 00:21:41.046 } 00:21:41.046 }' 00:21:41.046 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.046 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.046 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:41.046 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.046 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.304 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:41.304 11:33:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.304 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:41.304 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:41.304 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.304 11:33:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:41.563 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:41.563 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:41.563 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:41.563 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:41.563 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:41.563 "name": "pt3", 00:21:41.563 "aliases": [ 00:21:41.563 "00000000-0000-0000-0000-000000000003" 00:21:41.563 ], 00:21:41.563 "product_name": "passthru", 00:21:41.563 "block_size": 512, 00:21:41.563 "num_blocks": 65536, 00:21:41.563 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:41.563 "assigned_rate_limits": { 00:21:41.563 "rw_ios_per_sec": 0, 00:21:41.563 "rw_mbytes_per_sec": 0, 00:21:41.563 "r_mbytes_per_sec": 0, 00:21:41.563 "w_mbytes_per_sec": 0 00:21:41.563 }, 00:21:41.563 "claimed": true, 00:21:41.563 "claim_type": "exclusive_write", 00:21:41.563 "zoned": false, 00:21:41.563 "supported_io_types": { 00:21:41.563 "read": true, 00:21:41.563 "write": true, 00:21:41.563 "unmap": true, 00:21:41.563 "flush": true, 00:21:41.563 "reset": true, 00:21:41.563 "nvme_admin": false, 00:21:41.563 "nvme_io": false, 00:21:41.563 "nvme_io_md": false, 00:21:41.563 "write_zeroes": true, 00:21:41.563 "zcopy": true, 00:21:41.563 "get_zone_info": false, 00:21:41.563 "zone_management": false, 00:21:41.563 "zone_append": false, 00:21:41.563 "compare": false, 00:21:41.563 "compare_and_write": false, 00:21:41.563 "abort": true, 00:21:41.563 "seek_hole": false, 00:21:41.563 "seek_data": false, 00:21:41.563 "copy": true, 00:21:41.563 "nvme_iov_md": false 00:21:41.563 }, 00:21:41.563 "memory_domains": [ 00:21:41.563 { 00:21:41.563 "dma_device_id": "system", 00:21:41.563 "dma_device_type": 1 00:21:41.563 }, 00:21:41.563 { 00:21:41.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.563 "dma_device_type": 2 00:21:41.563 } 00:21:41.563 ], 00:21:41.563 "driver_specific": { 00:21:41.563 "passthru": { 00:21:41.563 "name": "pt3", 00:21:41.563 "base_bdev_name": "malloc3" 00:21:41.563 } 00:21:41.563 } 00:21:41.563 }' 00:21:41.563 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.821 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:41.821 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:41.821 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.821 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:41.821 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:41.821 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.080 11:33:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:42.080 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:42.080 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.080 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:42.080 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:42.080 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:42.080 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:42.338 [2024-07-13 11:33:16.929901] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.338 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3d4a5371-d5dd-41b8-aee9-030a90e2bd1c '!=' 3d4a5371-d5dd-41b8-aee9-030a90e2bd1c ']' 00:21:42.338 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:21:42.338 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:42.338 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:42.338 11:33:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:42.596 [2024-07-13 11:33:17.161762] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.596 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.855 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.855 "name": "raid_bdev1", 00:21:42.855 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:42.855 "strip_size_kb": 0, 00:21:42.855 "state": "online", 00:21:42.855 "raid_level": "raid1", 00:21:42.855 "superblock": true, 00:21:42.855 "num_base_bdevs": 3, 00:21:42.855 "num_base_bdevs_discovered": 2, 00:21:42.855 "num_base_bdevs_operational": 2, 00:21:42.855 "base_bdevs_list": [ 00:21:42.855 { 00:21:42.855 "name": null, 00:21:42.855 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:21:42.855 "is_configured": false, 00:21:42.855 "data_offset": 2048, 00:21:42.855 "data_size": 63488 00:21:42.855 }, 00:21:42.855 { 00:21:42.855 "name": "pt2", 00:21:42.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.855 "is_configured": true, 00:21:42.855 "data_offset": 2048, 00:21:42.855 "data_size": 63488 00:21:42.855 }, 00:21:42.855 { 00:21:42.855 "name": "pt3", 00:21:42.855 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:42.855 "is_configured": true, 00:21:42.855 "data_offset": 2048, 00:21:42.855 "data_size": 63488 00:21:42.855 } 00:21:42.855 ] 00:21:42.855 }' 00:21:42.855 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.855 11:33:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.421 11:33:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:43.679 [2024-07-13 11:33:18.181909] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.679 [2024-07-13 11:33:18.181937] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.679 [2024-07-13 11:33:18.181988] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.679 [2024-07-13 11:33:18.182041] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.679 [2024-07-13 11:33:18.182052] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:21:43.679 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:43.679 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:43.937 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:43.937 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:43.937 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:43.937 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:43.937 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:43.937 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:43.937 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:43.937 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:44.196 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:44.196 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:44.196 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:44.196 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:44.196 11:33:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:21:44.454 [2024-07-13 11:33:19.058025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:44.454 [2024-07-13 11:33:19.058095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.454 [2024-07-13 11:33:19.058126] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:44.454 [2024-07-13 11:33:19.058143] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.454 [2024-07-13 11:33:19.060274] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.454 [2024-07-13 11:33:19.060318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:44.454 [2024-07-13 11:33:19.060405] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:44.454 [2024-07-13 11:33:19.060449] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:44.454 pt2 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.454 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.712 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.712 "name": "raid_bdev1", 00:21:44.712 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:44.712 "strip_size_kb": 0, 00:21:44.712 "state": "configuring", 00:21:44.712 "raid_level": "raid1", 00:21:44.712 "superblock": true, 00:21:44.712 "num_base_bdevs": 3, 00:21:44.712 "num_base_bdevs_discovered": 1, 00:21:44.712 "num_base_bdevs_operational": 2, 00:21:44.712 "base_bdevs_list": [ 00:21:44.712 { 00:21:44.712 "name": null, 00:21:44.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.712 "is_configured": false, 00:21:44.712 "data_offset": 2048, 00:21:44.712 "data_size": 63488 00:21:44.712 }, 00:21:44.712 { 00:21:44.712 "name": "pt2", 00:21:44.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:44.712 "is_configured": true, 00:21:44.712 "data_offset": 2048, 00:21:44.712 "data_size": 63488 00:21:44.712 }, 00:21:44.712 { 00:21:44.712 "name": null, 00:21:44.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:44.712 "is_configured": false, 00:21:44.712 "data_offset": 2048, 00:21:44.712 "data_size": 63488 
00:21:44.712 } 00:21:44.712 ] 00:21:44.712 }' 00:21:44.712 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.712 11:33:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.279 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:21:45.279 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:45.279 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:21:45.279 11:33:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:45.537 [2024-07-13 11:33:20.230208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:45.537 [2024-07-13 11:33:20.230268] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.537 [2024-07-13 11:33:20.230301] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:45.537 [2024-07-13 11:33:20.230325] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.537 [2024-07-13 11:33:20.230718] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.537 [2024-07-13 11:33:20.230754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:45.537 [2024-07-13 11:33:20.230833] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:45.537 [2024-07-13 11:33:20.230871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:45.537 [2024-07-13 11:33:20.230975] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:21:45.537 [2024-07-13 11:33:20.230992] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:45.537 [2024-07-13 11:33:20.231088] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:45.537 [2024-07-13 11:33:20.231379] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:21:45.537 [2024-07-13 11:33:20.231399] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:21:45.537 [2024-07-13 11:33:20.231511] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.537 pt3 00:21:45.537 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:45.537 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:45.537 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:45.537 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:45.537 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:45.537 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:45.537 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.537 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:45.538 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.538 
11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.538 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.538 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.796 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.796 "name": "raid_bdev1", 00:21:45.796 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:45.796 "strip_size_kb": 0, 00:21:45.796 "state": "online", 00:21:45.796 "raid_level": "raid1", 00:21:45.796 "superblock": true, 00:21:45.796 "num_base_bdevs": 3, 00:21:45.796 "num_base_bdevs_discovered": 2, 00:21:45.796 "num_base_bdevs_operational": 2, 00:21:45.796 "base_bdevs_list": [ 00:21:45.796 { 00:21:45.796 "name": null, 00:21:45.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.796 "is_configured": false, 00:21:45.796 "data_offset": 2048, 00:21:45.796 "data_size": 63488 00:21:45.796 }, 00:21:45.796 { 00:21:45.796 "name": "pt2", 00:21:45.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:45.796 "is_configured": true, 00:21:45.796 "data_offset": 2048, 00:21:45.796 "data_size": 63488 00:21:45.796 }, 00:21:45.796 { 00:21:45.796 "name": "pt3", 00:21:45.796 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:45.796 "is_configured": true, 00:21:45.796 "data_offset": 2048, 00:21:45.796 "data_size": 63488 00:21:45.796 } 00:21:45.796 ] 00:21:45.796 }' 00:21:45.796 11:33:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.796 11:33:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.362 11:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:46.620 [2024-07-13 11:33:21.326390] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:46.620 [2024-07-13 11:33:21.326412] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:46.620 [2024-07-13 11:33:21.326459] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:46.620 [2024-07-13 11:33:21.326506] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:46.620 [2024-07-13 11:33:21.326515] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:21:46.620 11:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.620 11:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:46.878 11:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:46.878 11:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:46.878 11:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:21:46.878 11:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:21:46.878 11:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:47.137 11:33:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:47.395 [2024-07-13 11:33:22.110501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:47.395 [2024-07-13 11:33:22.110582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:47.395 [2024-07-13 11:33:22.110617] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:47.395 [2024-07-13 11:33:22.110635] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:47.395 [2024-07-13 11:33:22.112500] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:47.396 [2024-07-13 11:33:22.112551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:47.396 [2024-07-13 11:33:22.112646] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:47.396 [2024-07-13 11:33:22.112688] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:47.396 [2024-07-13 11:33:22.112858] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:47.396 [2024-07-13 11:33:22.112882] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:47.396 [2024-07-13 11:33:22.112906] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:21:47.396 [2024-07-13 11:33:22.112972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:47.396 pt1 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.396 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:47.654 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:47.654 "name": "raid_bdev1", 00:21:47.654 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:47.654 "strip_size_kb": 0, 00:21:47.654 "state": "configuring", 00:21:47.654 "raid_level": "raid1", 00:21:47.654 "superblock": 
true, 00:21:47.654 "num_base_bdevs": 3, 00:21:47.654 "num_base_bdevs_discovered": 1, 00:21:47.654 "num_base_bdevs_operational": 2, 00:21:47.654 "base_bdevs_list": [ 00:21:47.654 { 00:21:47.654 "name": null, 00:21:47.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:47.654 "is_configured": false, 00:21:47.654 "data_offset": 2048, 00:21:47.654 "data_size": 63488 00:21:47.654 }, 00:21:47.654 { 00:21:47.654 "name": "pt2", 00:21:47.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:47.654 "is_configured": true, 00:21:47.654 "data_offset": 2048, 00:21:47.654 "data_size": 63488 00:21:47.654 }, 00:21:47.654 { 00:21:47.654 "name": null, 00:21:47.654 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:47.654 "is_configured": false, 00:21:47.654 "data_offset": 2048, 00:21:47.654 "data_size": 63488 00:21:47.654 } 00:21:47.654 ] 00:21:47.654 }' 00:21:47.654 11:33:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:47.654 11:33:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.589 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:48.589 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:48.589 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:21:48.589 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:48.846 [2024-07-13 11:33:23.494765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:48.846 [2024-07-13 11:33:23.494830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:48.846 [2024-07-13 11:33:23.494871] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:48.846 [2024-07-13 11:33:23.494898] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:48.846 [2024-07-13 11:33:23.495276] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:48.846 [2024-07-13 11:33:23.495321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:48.846 [2024-07-13 11:33:23.495400] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:48.846 [2024-07-13 11:33:23.495424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:48.846 [2024-07-13 11:33:23.495530] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:21:48.846 [2024-07-13 11:33:23.495550] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:48.846 [2024-07-13 11:33:23.495656] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:48.846 [2024-07-13 11:33:23.495960] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:21:48.846 [2024-07-13 11:33:23.495982] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:21:48.846 [2024-07-13 11:33:23.496096] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.846 pt3 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.846 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.104 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.104 "name": "raid_bdev1", 00:21:49.104 "uuid": "3d4a5371-d5dd-41b8-aee9-030a90e2bd1c", 00:21:49.104 "strip_size_kb": 0, 00:21:49.104 "state": "online", 00:21:49.104 "raid_level": "raid1", 00:21:49.104 "superblock": true, 00:21:49.104 "num_base_bdevs": 3, 00:21:49.104 "num_base_bdevs_discovered": 2, 00:21:49.104 "num_base_bdevs_operational": 2, 00:21:49.104 "base_bdevs_list": [ 00:21:49.104 { 00:21:49.104 "name": null, 00:21:49.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.104 "is_configured": false, 00:21:49.104 "data_offset": 2048, 00:21:49.104 "data_size": 63488 00:21:49.104 }, 00:21:49.104 { 00:21:49.104 "name": "pt2", 00:21:49.104 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:49.104 "is_configured": true, 00:21:49.104 "data_offset": 2048, 00:21:49.104 "data_size": 63488 00:21:49.104 }, 00:21:49.104 { 00:21:49.104 "name": "pt3", 00:21:49.104 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:49.104 "is_configured": true, 00:21:49.104 "data_offset": 2048, 00:21:49.104 "data_size": 63488 00:21:49.104 } 00:21:49.104 ] 00:21:49.104 }' 00:21:49.104 11:33:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.104 11:33:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.039 11:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:50.039 11:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:50.039 11:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:21:50.039 11:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:50.039 11:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:21:50.299 [2024-07-13 11:33:24.895605] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:50.299 11:33:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3d4a5371-d5dd-41b8-aee9-030a90e2bd1c '!=' 3d4a5371-d5dd-41b8-aee9-030a90e2bd1c ']' 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 133249 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 133249 ']' 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 133249 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 133249 00:21:50.299 killing process with pid 133249 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 133249' 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 133249 00:21:50.299 11:33:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 133249 00:21:50.299 [2024-07-13 11:33:24.929206] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:50.299 [2024-07-13 11:33:24.929292] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:50.299 [2024-07-13 11:33:24.929344] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:50.299 [2024-07-13 11:33:24.929364] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:21:50.557 [2024-07-13 11:33:25.121389] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:51.493 ************************************ 00:21:51.493 END TEST raid_superblock_test 00:21:51.493 ************************************ 00:21:51.493 11:33:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:51.493 00:21:51.493 real 0m23.093s 00:21:51.493 user 0m43.537s 00:21:51.493 sys 0m2.432s 00:21:51.493 11:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:51.493 11:33:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.493 11:33:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:51.493 11:33:26 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:21:51.493 11:33:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:51.493 11:33:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:51.493 11:33:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:51.493 ************************************ 00:21:51.493 START TEST raid_read_error_test 00:21:51.493 ************************************ 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:51.493 11:33:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Yv9tExoO9b 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=134024 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 134024 /var/tmp/spdk-raid.sock 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 134024 ']' 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:51.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
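[editorial sketch, not part of the trace] A minimal sketch of the launch-and-wait pattern shown above. The bdevperf flags are copied from the command line echoed in the log; -z makes bdevperf hold off running the workload until the perform_tests RPC arrives later in the test. The polling loop is only an illustrative stand-in for the waitforlisten helper (assumed behaviour; the real helper lives in autotest_common.sh).
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Start bdevperf against its own RPC socket, in the background.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
  raid_pid=$!
  # Illustrative wait: poll the RPC socket until the target answers, then continue setup.
  until "$rpc_py" -s /var/tmp/spdk-raid.sock rpc_get_methods &> /dev/null; do sleep 0.5; done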
00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.493 11:33:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.493 [2024-07-13 11:33:26.162765] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:51.493 [2024-07-13 11:33:26.162971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134024 ] 00:21:51.752 [2024-07-13 11:33:26.332440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.011 [2024-07-13 11:33:26.587983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.270 [2024-07-13 11:33:26.776608] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.528 11:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.528 11:33:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:52.528 11:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:52.528 11:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:52.787 BaseBdev1_malloc 00:21:52.787 11:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:52.787 true 00:21:52.787 11:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:53.046 [2024-07-13 11:33:27.679970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:53.046 [2024-07-13 11:33:27.680053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.046 [2024-07-13 11:33:27.680088] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:53.046 [2024-07-13 11:33:27.680122] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.046 [2024-07-13 11:33:27.682053] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.046 [2024-07-13 11:33:27.682095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:53.046 BaseBdev1 00:21:53.046 11:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:53.046 11:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:53.305 BaseBdev2_malloc 00:21:53.305 11:33:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:53.563 true 00:21:53.564 11:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:53.822 [2024-07-13 11:33:28.393116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:21:53.822 [2024-07-13 11:33:28.393201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:53.822 [2024-07-13 11:33:28.393236] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:21:53.822 [2024-07-13 11:33:28.393257] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:53.822 [2024-07-13 11:33:28.395413] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:53.822 [2024-07-13 11:33:28.395455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:53.822 BaseBdev2 00:21:53.822 11:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:53.822 11:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:54.081 BaseBdev3_malloc 00:21:54.081 11:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:54.081 true 00:21:54.339 11:33:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:54.598 [2024-07-13 11:33:29.102597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:54.598 [2024-07-13 11:33:29.102682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.598 [2024-07-13 11:33:29.102720] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:54.598 [2024-07-13 11:33:29.102746] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.598 [2024-07-13 11:33:29.105058] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.598 [2024-07-13 11:33:29.105109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:54.598 BaseBdev3 00:21:54.598 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:54.857 [2024-07-13 11:33:29.354699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.857 [2024-07-13 11:33:29.356343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:54.857 [2024-07-13 11:33:29.356434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:54.857 [2024-07-13 11:33:29.356661] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:21:54.857 [2024-07-13 11:33:29.356682] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:54.857 [2024-07-13 11:33:29.356781] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:21:54.857 [2024-07-13 11:33:29.357146] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:21:54.857 [2024-07-13 11:33:29.357169] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:21:54.857 [2024-07-13 11:33:29.357307] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:54.857 "name": "raid_bdev1", 00:21:54.857 "uuid": "194a87ae-5aa7-4730-b993-b59208166bc1", 00:21:54.857 "strip_size_kb": 0, 00:21:54.857 "state": "online", 00:21:54.857 "raid_level": "raid1", 00:21:54.857 "superblock": true, 00:21:54.857 "num_base_bdevs": 3, 00:21:54.857 "num_base_bdevs_discovered": 3, 00:21:54.857 "num_base_bdevs_operational": 3, 00:21:54.857 "base_bdevs_list": [ 00:21:54.857 { 00:21:54.857 "name": "BaseBdev1", 00:21:54.857 "uuid": "6e898ea7-547e-5704-9295-700202770652", 00:21:54.857 "is_configured": true, 00:21:54.857 "data_offset": 2048, 00:21:54.857 "data_size": 63488 00:21:54.857 }, 00:21:54.857 { 00:21:54.857 "name": "BaseBdev2", 00:21:54.857 "uuid": "6bcd42f7-1a66-5908-9b0e-899c433f13ac", 00:21:54.857 "is_configured": true, 00:21:54.857 "data_offset": 2048, 00:21:54.857 "data_size": 63488 00:21:54.857 }, 00:21:54.857 { 00:21:54.857 "name": "BaseBdev3", 00:21:54.857 "uuid": "120c419b-95ef-56eb-8baa-81f0b3027cfc", 00:21:54.857 "is_configured": true, 00:21:54.857 "data_offset": 2048, 00:21:54.857 "data_size": 63488 00:21:54.857 } 00:21:54.857 ] 00:21:54.857 }' 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:54.857 11:33:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.424 11:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:55.424 11:33:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:55.683 [2024-07-13 11:33:30.223902] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 
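[editorial sketch, not part of the trace] A minimal reconstruction of the expectation branch the next few trace lines walk through: after injecting failures on EE_BaseBdev1_malloc, the test only expects a base bdev to drop out when the level is raid1 and the injected failures are writes; for the read case traced here all 3 base bdevs should stay discovered. Variable values are the ones visible in this run; the surrounding helper names are assumptions.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_level=raid1 error_io_type=read num_base_bdevs=3    # values used by this run
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc "$error_io_type" failure
  if [[ $raid_level == raid1 && $error_io_type == write ]]; then
      expected_num_base_bdevs=$((num_base_bdevs - 1))     # write failures are expected to drop a member
  else
      expected_num_base_bdevs=$num_base_bdevs             # read failures: all 3 should stay discovered
  fi
  # The trace then runs: verify_raid_bdev_state raid_bdev1 online raid1 0 $expected_num_base_bdevs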
00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.620 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.878 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.878 "name": "raid_bdev1", 00:21:56.878 "uuid": "194a87ae-5aa7-4730-b993-b59208166bc1", 00:21:56.878 "strip_size_kb": 0, 00:21:56.878 "state": "online", 00:21:56.878 "raid_level": "raid1", 00:21:56.878 "superblock": true, 00:21:56.878 "num_base_bdevs": 3, 00:21:56.878 "num_base_bdevs_discovered": 3, 00:21:56.878 "num_base_bdevs_operational": 3, 00:21:56.878 "base_bdevs_list": [ 00:21:56.878 { 00:21:56.878 "name": "BaseBdev1", 00:21:56.878 "uuid": "6e898ea7-547e-5704-9295-700202770652", 00:21:56.878 "is_configured": true, 00:21:56.878 "data_offset": 2048, 00:21:56.878 "data_size": 63488 00:21:56.878 }, 00:21:56.878 { 00:21:56.878 "name": "BaseBdev2", 00:21:56.878 "uuid": "6bcd42f7-1a66-5908-9b0e-899c433f13ac", 00:21:56.878 "is_configured": true, 00:21:56.878 "data_offset": 2048, 00:21:56.878 "data_size": 63488 00:21:56.878 }, 00:21:56.878 { 00:21:56.878 "name": "BaseBdev3", 00:21:56.878 "uuid": "120c419b-95ef-56eb-8baa-81f0b3027cfc", 00:21:56.878 "is_configured": true, 00:21:56.878 "data_offset": 2048, 00:21:56.878 "data_size": 63488 00:21:56.878 } 00:21:56.878 ] 00:21:56.878 }' 00:21:56.878 11:33:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.878 11:33:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:57.813 [2024-07-13 11:33:32.389726] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:57.813 [2024-07-13 11:33:32.389791] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online 
to offline 00:21:57.813 [2024-07-13 11:33:32.392379] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.813 [2024-07-13 11:33:32.392430] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.813 [2024-07-13 11:33:32.392531] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.813 [2024-07-13 11:33:32.392542] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:21:57.813 0 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 134024 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 134024 ']' 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 134024 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134024 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134024' 00:21:57.813 killing process with pid 134024 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 134024 00:21:57.813 11:33:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 134024 00:21:57.813 [2024-07-13 11:33:32.432776] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:58.072 [2024-07-13 11:33:32.594075] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Yv9tExoO9b 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:59.006 00:21:59.006 real 0m7.577s 00:21:59.006 user 0m11.533s 00:21:59.006 sys 0m0.857s 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.006 11:33:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.006 ************************************ 00:21:59.006 END TEST raid_read_error_test 00:21:59.006 ************************************ 00:21:59.006 11:33:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:59.006 11:33:33 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:59.006 11:33:33 bdev_raid -- common/autotest_common.sh@1099 
-- # '[' 5 -le 1 ']' 00:21:59.006 11:33:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.006 11:33:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:59.006 ************************************ 00:21:59.006 START TEST raid_write_error_test 00:21:59.006 ************************************ 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.fwFiXTGUrZ 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=134246 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 134246 /var/tmp/spdk-raid.sock 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 134246 ']' 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:59.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:59.006 11:33:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.265 [2024-07-13 11:33:33.792386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:59.265 [2024-07-13 11:33:33.792580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134246 ] 00:21:59.265 [2024-07-13 11:33:33.945363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.523 [2024-07-13 11:33:34.141967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.782 [2024-07-13 11:33:34.327544] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:00.041 11:33:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:00.041 11:33:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:22:00.041 11:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:00.041 11:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:00.300 BaseBdev1_malloc 00:22:00.300 11:33:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:00.560 true 00:22:00.560 11:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:00.819 [2024-07-13 11:33:35.340736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:00.819 [2024-07-13 11:33:35.340840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.820 [2024-07-13 11:33:35.340877] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:00.820 [2024-07-13 11:33:35.340897] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.820 [2024-07-13 11:33:35.343173] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.820 [2024-07-13 11:33:35.343218] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:00.820 BaseBdev1 00:22:00.820 11:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:00.820 11:33:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:01.078 BaseBdev2_malloc 00:22:01.078 11:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:01.078 true 00:22:01.078 11:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:01.337 [2024-07-13 11:33:35.935075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:01.337 [2024-07-13 11:33:35.935167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.337 [2024-07-13 11:33:35.935206] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:01.337 [2024-07-13 11:33:35.935227] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.337 [2024-07-13 11:33:35.937418] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.337 [2024-07-13 11:33:35.937465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:01.337 BaseBdev2 00:22:01.337 11:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:01.337 11:33:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:01.596 BaseBdev3_malloc 00:22:01.596 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:01.596 true 00:22:01.854 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:01.854 [2024-07-13 11:33:36.523730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:01.854 [2024-07-13 11:33:36.523819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.854 [2024-07-13 11:33:36.523859] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:01.854 [2024-07-13 11:33:36.523887] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.854 [2024-07-13 11:33:36.525828] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.854 [2024-07-13 11:33:36.525876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:01.854 BaseBdev3 00:22:01.854 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:22:02.113 [2024-07-13 11:33:36.711808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:02.113 [2024-07-13 11:33:36.713658] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:02.113 [2024-07-13 11:33:36.713744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:02.113 [2024-07-13 11:33:36.713963] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:02.113 [2024-07-13 11:33:36.713984] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:02.113 [2024-07-13 11:33:36.714095] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:02.113 [2024-07-13 11:33:36.714466] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:02.113 [2024-07-13 11:33:36.714487] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:02.113 [2024-07-13 11:33:36.714622] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.113 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.372 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.372 "name": "raid_bdev1", 00:22:02.372 "uuid": "b9586f2a-583d-41b5-8bae-99af1b891633", 00:22:02.372 "strip_size_kb": 0, 00:22:02.372 "state": "online", 00:22:02.372 "raid_level": "raid1", 00:22:02.372 "superblock": true, 00:22:02.372 "num_base_bdevs": 3, 00:22:02.372 "num_base_bdevs_discovered": 3, 00:22:02.372 "num_base_bdevs_operational": 3, 00:22:02.372 "base_bdevs_list": [ 00:22:02.372 { 00:22:02.372 "name": "BaseBdev1", 00:22:02.372 "uuid": "211c7ff1-53c9-533a-be24-96514b5769eb", 00:22:02.372 "is_configured": true, 00:22:02.372 "data_offset": 2048, 00:22:02.372 "data_size": 63488 00:22:02.372 }, 00:22:02.372 { 00:22:02.372 "name": "BaseBdev2", 00:22:02.372 "uuid": "63ab4da6-89a1-5fb3-9670-3a18b2e56506", 00:22:02.372 "is_configured": true, 00:22:02.372 "data_offset": 2048, 00:22:02.372 "data_size": 63488 00:22:02.372 }, 00:22:02.372 { 00:22:02.372 "name": "BaseBdev3", 00:22:02.372 "uuid": "0d7cb8c8-024e-52b3-94b0-4cf40eba1950", 00:22:02.372 "is_configured": true, 00:22:02.372 "data_offset": 2048, 00:22:02.372 "data_size": 63488 00:22:02.372 } 00:22:02.372 ] 00:22:02.372 }' 00:22:02.372 11:33:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.372 11:33:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
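For reference, the raid1 assembly recorded above reduces to a short RPC sequence per slot (malloc bdev, error wrapper, passthru bdev) followed by one bdev_raid_create. The sketch below is a hand-run equivalent assuming the same socket path, sizes, and names seen in this run; it is not part of the captured output, and the harness itself drives these calls through its bdev_raid.sh helpers.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2 3; do
  # 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the JSON above)
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
  # error-injection wrapper; SPDK names it EE_BaseBdev${i}_malloc
  "$rpc" -s "$sock" bdev_error_create "BaseBdev${i}_malloc"
  # passthru bdev exposing the name the raid volume consumes
  "$rpc" -s "$sock" bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# raid1 volume with an on-disk superblock (-s), hence "superblock": true in the dump above
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
# the same query and jq filter that verify_raid_bdev_state runs
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'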
00:22:02.939 11:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:02.939 11:33:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:02.939 [2024-07-13 11:33:37.629069] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:03.874 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:04.131 [2024-07-13 11:33:38.782605] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:22:04.131 [2024-07-13 11:33:38.782740] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:04.131 [2024-07-13 11:33:38.782996] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.131 11:33:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.391 11:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.391 "name": "raid_bdev1", 00:22:04.391 "uuid": "b9586f2a-583d-41b5-8bae-99af1b891633", 00:22:04.391 "strip_size_kb": 0, 00:22:04.391 "state": "online", 00:22:04.391 "raid_level": "raid1", 00:22:04.391 "superblock": true, 00:22:04.391 "num_base_bdevs": 3, 00:22:04.391 "num_base_bdevs_discovered": 2, 00:22:04.391 "num_base_bdevs_operational": 2, 00:22:04.391 "base_bdevs_list": [ 00:22:04.391 { 00:22:04.391 "name": null, 00:22:04.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.391 "is_configured": false, 00:22:04.391 "data_offset": 2048, 
00:22:04.391 "data_size": 63488 00:22:04.391 }, 00:22:04.391 { 00:22:04.391 "name": "BaseBdev2", 00:22:04.391 "uuid": "63ab4da6-89a1-5fb3-9670-3a18b2e56506", 00:22:04.391 "is_configured": true, 00:22:04.391 "data_offset": 2048, 00:22:04.391 "data_size": 63488 00:22:04.391 }, 00:22:04.391 { 00:22:04.391 "name": "BaseBdev3", 00:22:04.391 "uuid": "0d7cb8c8-024e-52b3-94b0-4cf40eba1950", 00:22:04.391 "is_configured": true, 00:22:04.391 "data_offset": 2048, 00:22:04.391 "data_size": 63488 00:22:04.391 } 00:22:04.391 ] 00:22:04.391 }' 00:22:04.391 11:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.391 11:33:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.351 11:33:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:05.351 [2024-07-13 11:33:40.012666] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:05.351 [2024-07-13 11:33:40.012734] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:05.351 [2024-07-13 11:33:40.015327] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.351 [2024-07-13 11:33:40.015372] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.351 [2024-07-13 11:33:40.015449] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.351 [2024-07-13 11:33:40.015460] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:22:05.351 0 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 134246 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 134246 ']' 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 134246 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134246 00:22:05.351 killing process with pid 134246 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134246' 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 134246 00:22:05.351 11:33:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 134246 00:22:05.351 [2024-07-13 11:33:40.054441] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.609 [2024-07-13 11:33:40.212518] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:06.543 11:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.fwFiXTGUrZ 00:22:06.544 11:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:06.544 11:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:06.804 
************************************ 00:22:06.804 END TEST raid_write_error_test 00:22:06.804 ************************************ 00:22:06.804 11:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:22:06.804 11:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:22:06.804 11:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:06.804 11:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:22:06.804 11:33:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:22:06.804 00:22:06.804 real 0m7.563s 00:22:06.804 user 0m11.517s 00:22:06.804 sys 0m0.872s 00:22:06.804 11:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:06.804 11:33:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.804 11:33:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:06.804 11:33:41 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:22:06.804 11:33:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:22:06.804 11:33:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:22:06.804 11:33:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:06.804 11:33:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.804 11:33:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.804 ************************************ 00:22:06.804 START TEST raid_state_function_test 00:22:06.804 ************************************ 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=134453 00:22:06.804 Process raid pid: 134453 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 134453' 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 134453 /var/tmp/spdk-raid.sock 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 134453 ']' 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:06.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.804 11:33:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:06.804 [2024-07-13 11:33:41.425713] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
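Unlike the I/O-error tests above, which run bdevperf, raid_state_function_test only needs the bare bdev_svc app and walks the raid bdev through its configuring and online states over RPC. A rough hand-run equivalent of its startup and first state check, assuming the paths and names from this run (the harness waits for the socket with its waitforlisten helper, approximated here by a polling loop):

spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"
sock=/var/tmp/spdk-raid.sock
# bare bdev service with bdev_raid debug logging, as launched in the trace
"$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 -L bdev_raid &
raid_pid=$!
# wait until the RPC socket answers (stand-in for the harness's waitforlisten)
until "$rpc" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
# creating the raid0 set before any base bdev exists leaves it in the "configuring" state
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

The "configuring" JSON dumps that follow in the trace are the output of that last query, filtered down to Existed_Raid.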
00:22:06.804 [2024-07-13 11:33:41.425924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.062 [2024-07-13 11:33:41.587876] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.062 [2024-07-13 11:33:41.789863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.320 [2024-07-13 11:33:41.982099] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.578 11:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.578 11:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:22:07.578 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:07.836 [2024-07-13 11:33:42.529187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:07.836 [2024-07-13 11:33:42.529286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:07.836 [2024-07-13 11:33:42.529301] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:07.836 [2024-07-13 11:33:42.529325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:07.836 [2024-07-13 11:33:42.529334] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:07.836 [2024-07-13 11:33:42.529349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:07.836 [2024-07-13 11:33:42.529356] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:07.836 [2024-07-13 11:33:42.529377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.836 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.093 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:08.093 "name": "Existed_Raid", 00:22:08.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.093 "strip_size_kb": 64, 00:22:08.093 "state": "configuring", 00:22:08.093 "raid_level": "raid0", 00:22:08.093 "superblock": false, 00:22:08.093 "num_base_bdevs": 4, 00:22:08.093 "num_base_bdevs_discovered": 0, 00:22:08.093 "num_base_bdevs_operational": 4, 00:22:08.093 "base_bdevs_list": [ 00:22:08.093 { 00:22:08.093 "name": "BaseBdev1", 00:22:08.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.093 "is_configured": false, 00:22:08.093 "data_offset": 0, 00:22:08.093 "data_size": 0 00:22:08.093 }, 00:22:08.093 { 00:22:08.094 "name": "BaseBdev2", 00:22:08.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.094 "is_configured": false, 00:22:08.094 "data_offset": 0, 00:22:08.094 "data_size": 0 00:22:08.094 }, 00:22:08.094 { 00:22:08.094 "name": "BaseBdev3", 00:22:08.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.094 "is_configured": false, 00:22:08.094 "data_offset": 0, 00:22:08.094 "data_size": 0 00:22:08.094 }, 00:22:08.094 { 00:22:08.094 "name": "BaseBdev4", 00:22:08.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.094 "is_configured": false, 00:22:08.094 "data_offset": 0, 00:22:08.094 "data_size": 0 00:22:08.094 } 00:22:08.094 ] 00:22:08.094 }' 00:22:08.094 11:33:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:08.094 11:33:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.660 11:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:08.918 [2024-07-13 11:33:43.621262] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:08.918 [2024-07-13 11:33:43.621292] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:22:08.918 11:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:09.176 [2024-07-13 11:33:43.813307] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:09.176 [2024-07-13 11:33:43.813361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:09.176 [2024-07-13 11:33:43.813372] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:09.176 [2024-07-13 11:33:43.813414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:09.176 [2024-07-13 11:33:43.813423] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:09.176 [2024-07-13 11:33:43.813456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:09.176 [2024-07-13 11:33:43.813464] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:09.176 [2024-07-13 11:33:43.813483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:09.176 11:33:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:09.434 [2024-07-13 11:33:44.090220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:09.434 BaseBdev1 00:22:09.434 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:09.434 11:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:09.434 11:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:09.434 11:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:09.434 11:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:09.434 11:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:09.434 11:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:09.693 11:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:09.952 [ 00:22:09.952 { 00:22:09.952 "name": "BaseBdev1", 00:22:09.952 "aliases": [ 00:22:09.952 "b75a9552-bc3f-4468-8113-c13abeb6ae18" 00:22:09.952 ], 00:22:09.952 "product_name": "Malloc disk", 00:22:09.952 "block_size": 512, 00:22:09.952 "num_blocks": 65536, 00:22:09.952 "uuid": "b75a9552-bc3f-4468-8113-c13abeb6ae18", 00:22:09.952 "assigned_rate_limits": { 00:22:09.952 "rw_ios_per_sec": 0, 00:22:09.952 "rw_mbytes_per_sec": 0, 00:22:09.952 "r_mbytes_per_sec": 0, 00:22:09.952 "w_mbytes_per_sec": 0 00:22:09.952 }, 00:22:09.952 "claimed": true, 00:22:09.952 "claim_type": "exclusive_write", 00:22:09.952 "zoned": false, 00:22:09.952 "supported_io_types": { 00:22:09.952 "read": true, 00:22:09.952 "write": true, 00:22:09.952 "unmap": true, 00:22:09.952 "flush": true, 00:22:09.952 "reset": true, 00:22:09.952 "nvme_admin": false, 00:22:09.952 "nvme_io": false, 00:22:09.952 "nvme_io_md": false, 00:22:09.952 "write_zeroes": true, 00:22:09.952 "zcopy": true, 00:22:09.952 "get_zone_info": false, 00:22:09.952 "zone_management": false, 00:22:09.952 "zone_append": false, 00:22:09.952 "compare": false, 00:22:09.952 "compare_and_write": false, 00:22:09.952 "abort": true, 00:22:09.952 "seek_hole": false, 00:22:09.952 "seek_data": false, 00:22:09.952 "copy": true, 00:22:09.952 "nvme_iov_md": false 00:22:09.952 }, 00:22:09.952 "memory_domains": [ 00:22:09.952 { 00:22:09.952 "dma_device_id": "system", 00:22:09.952 "dma_device_type": 1 00:22:09.952 }, 00:22:09.952 { 00:22:09.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.952 "dma_device_type": 2 00:22:09.952 } 00:22:09.952 ], 00:22:09.952 "driver_specific": {} 00:22:09.952 } 00:22:09.952 ] 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.952 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.211 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.211 "name": "Existed_Raid", 00:22:10.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.211 "strip_size_kb": 64, 00:22:10.211 "state": "configuring", 00:22:10.211 "raid_level": "raid0", 00:22:10.211 "superblock": false, 00:22:10.211 "num_base_bdevs": 4, 00:22:10.211 "num_base_bdevs_discovered": 1, 00:22:10.211 "num_base_bdevs_operational": 4, 00:22:10.211 "base_bdevs_list": [ 00:22:10.211 { 00:22:10.211 "name": "BaseBdev1", 00:22:10.211 "uuid": "b75a9552-bc3f-4468-8113-c13abeb6ae18", 00:22:10.211 "is_configured": true, 00:22:10.211 "data_offset": 0, 00:22:10.211 "data_size": 65536 00:22:10.211 }, 00:22:10.211 { 00:22:10.211 "name": "BaseBdev2", 00:22:10.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.211 "is_configured": false, 00:22:10.211 "data_offset": 0, 00:22:10.211 "data_size": 0 00:22:10.211 }, 00:22:10.211 { 00:22:10.211 "name": "BaseBdev3", 00:22:10.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.211 "is_configured": false, 00:22:10.211 "data_offset": 0, 00:22:10.211 "data_size": 0 00:22:10.211 }, 00:22:10.212 { 00:22:10.212 "name": "BaseBdev4", 00:22:10.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.212 "is_configured": false, 00:22:10.212 "data_offset": 0, 00:22:10.212 "data_size": 0 00:22:10.212 } 00:22:10.212 ] 00:22:10.212 }' 00:22:10.212 11:33:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.212 11:33:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.779 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:11.037 [2024-07-13 11:33:45.658495] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:11.037 [2024-07-13 11:33:45.658536] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:22:11.037 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:11.296 [2024-07-13 11:33:45.834570] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:11.296 [2024-07-13 11:33:45.836461] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:22:11.296 [2024-07-13 11:33:45.836518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:11.296 [2024-07-13 11:33:45.836530] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:11.296 [2024-07-13 11:33:45.836556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:11.296 [2024-07-13 11:33:45.836565] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:11.296 [2024-07-13 11:33:45.836593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.296 11:33:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:11.555 11:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:11.555 "name": "Existed_Raid", 00:22:11.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.555 "strip_size_kb": 64, 00:22:11.555 "state": "configuring", 00:22:11.555 "raid_level": "raid0", 00:22:11.555 "superblock": false, 00:22:11.555 "num_base_bdevs": 4, 00:22:11.555 "num_base_bdevs_discovered": 1, 00:22:11.555 "num_base_bdevs_operational": 4, 00:22:11.555 "base_bdevs_list": [ 00:22:11.555 { 00:22:11.555 "name": "BaseBdev1", 00:22:11.555 "uuid": "b75a9552-bc3f-4468-8113-c13abeb6ae18", 00:22:11.555 "is_configured": true, 00:22:11.555 "data_offset": 0, 00:22:11.555 "data_size": 65536 00:22:11.555 }, 00:22:11.555 { 00:22:11.555 "name": "BaseBdev2", 00:22:11.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.555 "is_configured": false, 00:22:11.555 "data_offset": 0, 00:22:11.555 "data_size": 0 00:22:11.555 }, 00:22:11.555 { 00:22:11.555 "name": "BaseBdev3", 00:22:11.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.555 "is_configured": false, 00:22:11.555 "data_offset": 0, 00:22:11.555 "data_size": 0 00:22:11.555 }, 
00:22:11.555 { 00:22:11.555 "name": "BaseBdev4", 00:22:11.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.555 "is_configured": false, 00:22:11.555 "data_offset": 0, 00:22:11.555 "data_size": 0 00:22:11.555 } 00:22:11.555 ] 00:22:11.555 }' 00:22:11.555 11:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:11.555 11:33:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.132 11:33:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:12.390 [2024-07-13 11:33:47.033415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:12.390 BaseBdev2 00:22:12.390 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:12.390 11:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:12.390 11:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:12.390 11:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:12.390 11:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:12.390 11:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:12.390 11:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:12.649 11:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:12.907 [ 00:22:12.907 { 00:22:12.907 "name": "BaseBdev2", 00:22:12.907 "aliases": [ 00:22:12.907 "cc220c66-a074-4770-8eec-c0df97dd1a21" 00:22:12.907 ], 00:22:12.907 "product_name": "Malloc disk", 00:22:12.907 "block_size": 512, 00:22:12.907 "num_blocks": 65536, 00:22:12.907 "uuid": "cc220c66-a074-4770-8eec-c0df97dd1a21", 00:22:12.907 "assigned_rate_limits": { 00:22:12.907 "rw_ios_per_sec": 0, 00:22:12.907 "rw_mbytes_per_sec": 0, 00:22:12.907 "r_mbytes_per_sec": 0, 00:22:12.907 "w_mbytes_per_sec": 0 00:22:12.907 }, 00:22:12.907 "claimed": true, 00:22:12.907 "claim_type": "exclusive_write", 00:22:12.907 "zoned": false, 00:22:12.907 "supported_io_types": { 00:22:12.907 "read": true, 00:22:12.907 "write": true, 00:22:12.907 "unmap": true, 00:22:12.907 "flush": true, 00:22:12.907 "reset": true, 00:22:12.907 "nvme_admin": false, 00:22:12.907 "nvme_io": false, 00:22:12.907 "nvme_io_md": false, 00:22:12.907 "write_zeroes": true, 00:22:12.907 "zcopy": true, 00:22:12.907 "get_zone_info": false, 00:22:12.907 "zone_management": false, 00:22:12.907 "zone_append": false, 00:22:12.907 "compare": false, 00:22:12.907 "compare_and_write": false, 00:22:12.907 "abort": true, 00:22:12.907 "seek_hole": false, 00:22:12.907 "seek_data": false, 00:22:12.907 "copy": true, 00:22:12.907 "nvme_iov_md": false 00:22:12.907 }, 00:22:12.907 "memory_domains": [ 00:22:12.907 { 00:22:12.907 "dma_device_id": "system", 00:22:12.907 "dma_device_type": 1 00:22:12.907 }, 00:22:12.907 { 00:22:12.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.907 "dma_device_type": 2 00:22:12.907 } 00:22:12.907 ], 00:22:12.907 "driver_specific": {} 00:22:12.907 } 00:22:12.907 ] 00:22:12.907 11:33:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.907 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.166 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.166 "name": "Existed_Raid", 00:22:13.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.166 "strip_size_kb": 64, 00:22:13.166 "state": "configuring", 00:22:13.166 "raid_level": "raid0", 00:22:13.166 "superblock": false, 00:22:13.166 "num_base_bdevs": 4, 00:22:13.166 "num_base_bdevs_discovered": 2, 00:22:13.166 "num_base_bdevs_operational": 4, 00:22:13.166 "base_bdevs_list": [ 00:22:13.166 { 00:22:13.166 "name": "BaseBdev1", 00:22:13.166 "uuid": "b75a9552-bc3f-4468-8113-c13abeb6ae18", 00:22:13.166 "is_configured": true, 00:22:13.166 "data_offset": 0, 00:22:13.166 "data_size": 65536 00:22:13.166 }, 00:22:13.166 { 00:22:13.166 "name": "BaseBdev2", 00:22:13.166 "uuid": "cc220c66-a074-4770-8eec-c0df97dd1a21", 00:22:13.166 "is_configured": true, 00:22:13.166 "data_offset": 0, 00:22:13.166 "data_size": 65536 00:22:13.166 }, 00:22:13.166 { 00:22:13.166 "name": "BaseBdev3", 00:22:13.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.166 "is_configured": false, 00:22:13.166 "data_offset": 0, 00:22:13.166 "data_size": 0 00:22:13.166 }, 00:22:13.166 { 00:22:13.166 "name": "BaseBdev4", 00:22:13.166 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.166 "is_configured": false, 00:22:13.166 "data_offset": 0, 00:22:13.166 "data_size": 0 00:22:13.166 } 00:22:13.166 ] 00:22:13.166 }' 00:22:13.166 11:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.166 11:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.731 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:22:13.989 [2024-07-13 11:33:48.612799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:13.989 BaseBdev3 00:22:13.989 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:13.989 11:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:13.989 11:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:13.989 11:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:13.989 11:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:13.989 11:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:13.989 11:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:14.247 11:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:14.247 [ 00:22:14.247 { 00:22:14.247 "name": "BaseBdev3", 00:22:14.247 "aliases": [ 00:22:14.247 "e4c5747f-ff2b-456c-9357-b350212a78d4" 00:22:14.247 ], 00:22:14.247 "product_name": "Malloc disk", 00:22:14.247 "block_size": 512, 00:22:14.247 "num_blocks": 65536, 00:22:14.247 "uuid": "e4c5747f-ff2b-456c-9357-b350212a78d4", 00:22:14.247 "assigned_rate_limits": { 00:22:14.247 "rw_ios_per_sec": 0, 00:22:14.247 "rw_mbytes_per_sec": 0, 00:22:14.247 "r_mbytes_per_sec": 0, 00:22:14.247 "w_mbytes_per_sec": 0 00:22:14.247 }, 00:22:14.247 "claimed": true, 00:22:14.247 "claim_type": "exclusive_write", 00:22:14.247 "zoned": false, 00:22:14.247 "supported_io_types": { 00:22:14.247 "read": true, 00:22:14.247 "write": true, 00:22:14.247 "unmap": true, 00:22:14.504 "flush": true, 00:22:14.504 "reset": true, 00:22:14.504 "nvme_admin": false, 00:22:14.504 "nvme_io": false, 00:22:14.504 "nvme_io_md": false, 00:22:14.504 "write_zeroes": true, 00:22:14.504 "zcopy": true, 00:22:14.504 "get_zone_info": false, 00:22:14.504 "zone_management": false, 00:22:14.504 "zone_append": false, 00:22:14.504 "compare": false, 00:22:14.504 "compare_and_write": false, 00:22:14.504 "abort": true, 00:22:14.504 "seek_hole": false, 00:22:14.504 "seek_data": false, 00:22:14.504 "copy": true, 00:22:14.504 "nvme_iov_md": false 00:22:14.504 }, 00:22:14.504 "memory_domains": [ 00:22:14.504 { 00:22:14.504 "dma_device_id": "system", 00:22:14.504 "dma_device_type": 1 00:22:14.504 }, 00:22:14.504 { 00:22:14.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.504 "dma_device_type": 2 00:22:14.504 } 00:22:14.504 ], 00:22:14.504 "driver_specific": {} 00:22:14.504 } 00:22:14.504 ] 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.504 11:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.504 11:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.504 11:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.505 11:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.505 "name": "Existed_Raid", 00:22:14.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.505 "strip_size_kb": 64, 00:22:14.505 "state": "configuring", 00:22:14.505 "raid_level": "raid0", 00:22:14.505 "superblock": false, 00:22:14.505 "num_base_bdevs": 4, 00:22:14.505 "num_base_bdevs_discovered": 3, 00:22:14.505 "num_base_bdevs_operational": 4, 00:22:14.505 "base_bdevs_list": [ 00:22:14.505 { 00:22:14.505 "name": "BaseBdev1", 00:22:14.505 "uuid": "b75a9552-bc3f-4468-8113-c13abeb6ae18", 00:22:14.505 "is_configured": true, 00:22:14.505 "data_offset": 0, 00:22:14.505 "data_size": 65536 00:22:14.505 }, 00:22:14.505 { 00:22:14.505 "name": "BaseBdev2", 00:22:14.505 "uuid": "cc220c66-a074-4770-8eec-c0df97dd1a21", 00:22:14.505 "is_configured": true, 00:22:14.505 "data_offset": 0, 00:22:14.505 "data_size": 65536 00:22:14.505 }, 00:22:14.505 { 00:22:14.505 "name": "BaseBdev3", 00:22:14.505 "uuid": "e4c5747f-ff2b-456c-9357-b350212a78d4", 00:22:14.505 "is_configured": true, 00:22:14.505 "data_offset": 0, 00:22:14.505 "data_size": 65536 00:22:14.505 }, 00:22:14.505 { 00:22:14.505 "name": "BaseBdev4", 00:22:14.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.505 "is_configured": false, 00:22:14.505 "data_offset": 0, 00:22:14.505 "data_size": 0 00:22:14.505 } 00:22:14.505 ] 00:22:14.505 }' 00:22:14.505 11:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.505 11:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.439 11:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:15.698 [2024-07-13 11:33:50.224230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:15.698 [2024-07-13 11:33:50.224282] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:22:15.698 [2024-07-13 11:33:50.224292] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:15.698 [2024-07-13 11:33:50.224432] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:15.698 [2024-07-13 11:33:50.224746] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:22:15.698 [2024-07-13 11:33:50.224767] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:22:15.698 [2024-07-13 11:33:50.225014] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.698 BaseBdev4 00:22:15.698 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:15.698 11:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:15.698 11:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:15.698 11:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:15.698 11:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:15.698 11:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:15.698 11:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:15.957 [ 00:22:15.957 { 00:22:15.957 "name": "BaseBdev4", 00:22:15.957 "aliases": [ 00:22:15.957 "d3cf01a4-f1d4-4533-949d-9ab7bbd762f0" 00:22:15.957 ], 00:22:15.957 "product_name": "Malloc disk", 00:22:15.957 "block_size": 512, 00:22:15.957 "num_blocks": 65536, 00:22:15.957 "uuid": "d3cf01a4-f1d4-4533-949d-9ab7bbd762f0", 00:22:15.957 "assigned_rate_limits": { 00:22:15.957 "rw_ios_per_sec": 0, 00:22:15.957 "rw_mbytes_per_sec": 0, 00:22:15.957 "r_mbytes_per_sec": 0, 00:22:15.957 "w_mbytes_per_sec": 0 00:22:15.957 }, 00:22:15.957 "claimed": true, 00:22:15.957 "claim_type": "exclusive_write", 00:22:15.957 "zoned": false, 00:22:15.957 "supported_io_types": { 00:22:15.957 "read": true, 00:22:15.957 "write": true, 00:22:15.957 "unmap": true, 00:22:15.957 "flush": true, 00:22:15.957 "reset": true, 00:22:15.957 "nvme_admin": false, 00:22:15.957 "nvme_io": false, 00:22:15.957 "nvme_io_md": false, 00:22:15.957 "write_zeroes": true, 00:22:15.957 "zcopy": true, 00:22:15.957 "get_zone_info": false, 00:22:15.957 "zone_management": false, 00:22:15.957 "zone_append": false, 00:22:15.957 "compare": false, 00:22:15.957 "compare_and_write": false, 00:22:15.957 "abort": true, 00:22:15.957 "seek_hole": false, 00:22:15.957 "seek_data": false, 00:22:15.957 "copy": true, 00:22:15.957 "nvme_iov_md": false 00:22:15.957 }, 00:22:15.957 "memory_domains": [ 00:22:15.957 { 00:22:15.957 "dma_device_id": "system", 00:22:15.957 "dma_device_type": 1 00:22:15.957 }, 00:22:15.957 { 00:22:15.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.957 "dma_device_type": 2 00:22:15.957 } 00:22:15.957 ], 00:22:15.957 "driver_specific": {} 00:22:15.957 } 00:22:15.957 ] 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.957 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.216 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.216 "name": "Existed_Raid", 00:22:16.216 "uuid": "bf178639-0c5b-4c90-ae15-648caa4191b9", 00:22:16.216 "strip_size_kb": 64, 00:22:16.216 "state": "online", 00:22:16.216 "raid_level": "raid0", 00:22:16.216 "superblock": false, 00:22:16.216 "num_base_bdevs": 4, 00:22:16.216 "num_base_bdevs_discovered": 4, 00:22:16.216 "num_base_bdevs_operational": 4, 00:22:16.216 "base_bdevs_list": [ 00:22:16.216 { 00:22:16.216 "name": "BaseBdev1", 00:22:16.216 "uuid": "b75a9552-bc3f-4468-8113-c13abeb6ae18", 00:22:16.216 "is_configured": true, 00:22:16.216 "data_offset": 0, 00:22:16.216 "data_size": 65536 00:22:16.216 }, 00:22:16.216 { 00:22:16.216 "name": "BaseBdev2", 00:22:16.216 "uuid": "cc220c66-a074-4770-8eec-c0df97dd1a21", 00:22:16.216 "is_configured": true, 00:22:16.216 "data_offset": 0, 00:22:16.216 "data_size": 65536 00:22:16.216 }, 00:22:16.216 { 00:22:16.216 "name": "BaseBdev3", 00:22:16.216 "uuid": "e4c5747f-ff2b-456c-9357-b350212a78d4", 00:22:16.216 "is_configured": true, 00:22:16.216 "data_offset": 0, 00:22:16.216 "data_size": 65536 00:22:16.216 }, 00:22:16.216 { 00:22:16.216 "name": "BaseBdev4", 00:22:16.216 "uuid": "d3cf01a4-f1d4-4533-949d-9ab7bbd762f0", 00:22:16.216 "is_configured": true, 00:22:16.216 "data_offset": 0, 00:22:16.216 "data_size": 65536 00:22:16.216 } 00:22:16.216 ] 00:22:16.216 }' 00:22:16.216 11:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.216 11:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.153 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:17.153 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:17.153 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:17.153 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:17.153 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:17.153 11:33:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:17.153 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:17.153 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:17.153 [2024-07-13 11:33:51.872757] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.153 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:17.153 "name": "Existed_Raid", 00:22:17.153 "aliases": [ 00:22:17.153 "bf178639-0c5b-4c90-ae15-648caa4191b9" 00:22:17.153 ], 00:22:17.153 "product_name": "Raid Volume", 00:22:17.153 "block_size": 512, 00:22:17.153 "num_blocks": 262144, 00:22:17.153 "uuid": "bf178639-0c5b-4c90-ae15-648caa4191b9", 00:22:17.153 "assigned_rate_limits": { 00:22:17.153 "rw_ios_per_sec": 0, 00:22:17.153 "rw_mbytes_per_sec": 0, 00:22:17.153 "r_mbytes_per_sec": 0, 00:22:17.153 "w_mbytes_per_sec": 0 00:22:17.153 }, 00:22:17.153 "claimed": false, 00:22:17.153 "zoned": false, 00:22:17.153 "supported_io_types": { 00:22:17.153 "read": true, 00:22:17.153 "write": true, 00:22:17.153 "unmap": true, 00:22:17.153 "flush": true, 00:22:17.153 "reset": true, 00:22:17.153 "nvme_admin": false, 00:22:17.153 "nvme_io": false, 00:22:17.153 "nvme_io_md": false, 00:22:17.153 "write_zeroes": true, 00:22:17.153 "zcopy": false, 00:22:17.153 "get_zone_info": false, 00:22:17.153 "zone_management": false, 00:22:17.153 "zone_append": false, 00:22:17.153 "compare": false, 00:22:17.153 "compare_and_write": false, 00:22:17.153 "abort": false, 00:22:17.153 "seek_hole": false, 00:22:17.153 "seek_data": false, 00:22:17.153 "copy": false, 00:22:17.153 "nvme_iov_md": false 00:22:17.153 }, 00:22:17.153 "memory_domains": [ 00:22:17.153 { 00:22:17.153 "dma_device_id": "system", 00:22:17.153 "dma_device_type": 1 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.153 "dma_device_type": 2 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "dma_device_id": "system", 00:22:17.153 "dma_device_type": 1 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.153 "dma_device_type": 2 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "dma_device_id": "system", 00:22:17.153 "dma_device_type": 1 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.153 "dma_device_type": 2 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "dma_device_id": "system", 00:22:17.153 "dma_device_type": 1 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.153 "dma_device_type": 2 00:22:17.153 } 00:22:17.153 ], 00:22:17.153 "driver_specific": { 00:22:17.153 "raid": { 00:22:17.153 "uuid": "bf178639-0c5b-4c90-ae15-648caa4191b9", 00:22:17.153 "strip_size_kb": 64, 00:22:17.153 "state": "online", 00:22:17.153 "raid_level": "raid0", 00:22:17.153 "superblock": false, 00:22:17.153 "num_base_bdevs": 4, 00:22:17.153 "num_base_bdevs_discovered": 4, 00:22:17.153 "num_base_bdevs_operational": 4, 00:22:17.153 "base_bdevs_list": [ 00:22:17.153 { 00:22:17.153 "name": "BaseBdev1", 00:22:17.153 "uuid": "b75a9552-bc3f-4468-8113-c13abeb6ae18", 00:22:17.153 "is_configured": true, 00:22:17.153 "data_offset": 0, 00:22:17.153 "data_size": 65536 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "name": "BaseBdev2", 00:22:17.153 "uuid": "cc220c66-a074-4770-8eec-c0df97dd1a21", 00:22:17.153 
"is_configured": true, 00:22:17.153 "data_offset": 0, 00:22:17.153 "data_size": 65536 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "name": "BaseBdev3", 00:22:17.153 "uuid": "e4c5747f-ff2b-456c-9357-b350212a78d4", 00:22:17.153 "is_configured": true, 00:22:17.153 "data_offset": 0, 00:22:17.153 "data_size": 65536 00:22:17.153 }, 00:22:17.153 { 00:22:17.153 "name": "BaseBdev4", 00:22:17.153 "uuid": "d3cf01a4-f1d4-4533-949d-9ab7bbd762f0", 00:22:17.153 "is_configured": true, 00:22:17.153 "data_offset": 0, 00:22:17.153 "data_size": 65536 00:22:17.153 } 00:22:17.153 ] 00:22:17.153 } 00:22:17.153 } 00:22:17.153 }' 00:22:17.153 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:17.412 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:17.412 BaseBdev2 00:22:17.412 BaseBdev3 00:22:17.412 BaseBdev4' 00:22:17.412 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:17.412 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:17.412 11:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:17.670 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:17.670 "name": "BaseBdev1", 00:22:17.670 "aliases": [ 00:22:17.670 "b75a9552-bc3f-4468-8113-c13abeb6ae18" 00:22:17.670 ], 00:22:17.670 "product_name": "Malloc disk", 00:22:17.670 "block_size": 512, 00:22:17.670 "num_blocks": 65536, 00:22:17.670 "uuid": "b75a9552-bc3f-4468-8113-c13abeb6ae18", 00:22:17.670 "assigned_rate_limits": { 00:22:17.670 "rw_ios_per_sec": 0, 00:22:17.670 "rw_mbytes_per_sec": 0, 00:22:17.670 "r_mbytes_per_sec": 0, 00:22:17.670 "w_mbytes_per_sec": 0 00:22:17.670 }, 00:22:17.670 "claimed": true, 00:22:17.670 "claim_type": "exclusive_write", 00:22:17.670 "zoned": false, 00:22:17.670 "supported_io_types": { 00:22:17.670 "read": true, 00:22:17.670 "write": true, 00:22:17.670 "unmap": true, 00:22:17.670 "flush": true, 00:22:17.670 "reset": true, 00:22:17.670 "nvme_admin": false, 00:22:17.670 "nvme_io": false, 00:22:17.670 "nvme_io_md": false, 00:22:17.670 "write_zeroes": true, 00:22:17.670 "zcopy": true, 00:22:17.670 "get_zone_info": false, 00:22:17.670 "zone_management": false, 00:22:17.670 "zone_append": false, 00:22:17.670 "compare": false, 00:22:17.670 "compare_and_write": false, 00:22:17.670 "abort": true, 00:22:17.670 "seek_hole": false, 00:22:17.670 "seek_data": false, 00:22:17.670 "copy": true, 00:22:17.670 "nvme_iov_md": false 00:22:17.670 }, 00:22:17.670 "memory_domains": [ 00:22:17.670 { 00:22:17.670 "dma_device_id": "system", 00:22:17.670 "dma_device_type": 1 00:22:17.670 }, 00:22:17.670 { 00:22:17.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.670 "dma_device_type": 2 00:22:17.670 } 00:22:17.670 ], 00:22:17.670 "driver_specific": {} 00:22:17.670 }' 00:22:17.670 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.670 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.670 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:17.670 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.670 11:33:52 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:17.929 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:18.188 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:18.188 "name": "BaseBdev2", 00:22:18.188 "aliases": [ 00:22:18.188 "cc220c66-a074-4770-8eec-c0df97dd1a21" 00:22:18.188 ], 00:22:18.188 "product_name": "Malloc disk", 00:22:18.188 "block_size": 512, 00:22:18.188 "num_blocks": 65536, 00:22:18.188 "uuid": "cc220c66-a074-4770-8eec-c0df97dd1a21", 00:22:18.188 "assigned_rate_limits": { 00:22:18.188 "rw_ios_per_sec": 0, 00:22:18.188 "rw_mbytes_per_sec": 0, 00:22:18.188 "r_mbytes_per_sec": 0, 00:22:18.188 "w_mbytes_per_sec": 0 00:22:18.188 }, 00:22:18.188 "claimed": true, 00:22:18.188 "claim_type": "exclusive_write", 00:22:18.188 "zoned": false, 00:22:18.188 "supported_io_types": { 00:22:18.188 "read": true, 00:22:18.188 "write": true, 00:22:18.188 "unmap": true, 00:22:18.188 "flush": true, 00:22:18.188 "reset": true, 00:22:18.188 "nvme_admin": false, 00:22:18.188 "nvme_io": false, 00:22:18.188 "nvme_io_md": false, 00:22:18.188 "write_zeroes": true, 00:22:18.188 "zcopy": true, 00:22:18.188 "get_zone_info": false, 00:22:18.188 "zone_management": false, 00:22:18.188 "zone_append": false, 00:22:18.188 "compare": false, 00:22:18.188 "compare_and_write": false, 00:22:18.188 "abort": true, 00:22:18.188 "seek_hole": false, 00:22:18.188 "seek_data": false, 00:22:18.188 "copy": true, 00:22:18.188 "nvme_iov_md": false 00:22:18.188 }, 00:22:18.188 "memory_domains": [ 00:22:18.188 { 00:22:18.188 "dma_device_id": "system", 00:22:18.188 "dma_device_type": 1 00:22:18.188 }, 00:22:18.188 { 00:22:18.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.188 "dma_device_type": 2 00:22:18.188 } 00:22:18.188 ], 00:22:18.188 "driver_specific": {} 00:22:18.188 }' 00:22:18.188 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.446 11:33:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.446 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:18.446 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.446 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:18.446 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:18.446 11:33:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.446 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:18.704 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:18.704 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.704 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:18.704 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:18.705 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:18.705 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:18.705 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:18.962 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:18.962 "name": "BaseBdev3", 00:22:18.962 "aliases": [ 00:22:18.962 "e4c5747f-ff2b-456c-9357-b350212a78d4" 00:22:18.962 ], 00:22:18.962 "product_name": "Malloc disk", 00:22:18.962 "block_size": 512, 00:22:18.962 "num_blocks": 65536, 00:22:18.962 "uuid": "e4c5747f-ff2b-456c-9357-b350212a78d4", 00:22:18.962 "assigned_rate_limits": { 00:22:18.962 "rw_ios_per_sec": 0, 00:22:18.962 "rw_mbytes_per_sec": 0, 00:22:18.962 "r_mbytes_per_sec": 0, 00:22:18.962 "w_mbytes_per_sec": 0 00:22:18.962 }, 00:22:18.962 "claimed": true, 00:22:18.962 "claim_type": "exclusive_write", 00:22:18.962 "zoned": false, 00:22:18.962 "supported_io_types": { 00:22:18.962 "read": true, 00:22:18.962 "write": true, 00:22:18.962 "unmap": true, 00:22:18.962 "flush": true, 00:22:18.962 "reset": true, 00:22:18.962 "nvme_admin": false, 00:22:18.962 "nvme_io": false, 00:22:18.962 "nvme_io_md": false, 00:22:18.962 "write_zeroes": true, 00:22:18.962 "zcopy": true, 00:22:18.962 "get_zone_info": false, 00:22:18.962 "zone_management": false, 00:22:18.962 "zone_append": false, 00:22:18.962 "compare": false, 00:22:18.962 "compare_and_write": false, 00:22:18.962 "abort": true, 00:22:18.962 "seek_hole": false, 00:22:18.962 "seek_data": false, 00:22:18.962 "copy": true, 00:22:18.962 "nvme_iov_md": false 00:22:18.962 }, 00:22:18.962 "memory_domains": [ 00:22:18.962 { 00:22:18.962 "dma_device_id": "system", 00:22:18.962 "dma_device_type": 1 00:22:18.962 }, 00:22:18.962 { 00:22:18.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:18.962 "dma_device_type": 2 00:22:18.962 } 00:22:18.962 ], 00:22:18.962 "driver_specific": {} 00:22:18.962 }' 00:22:18.962 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:18.962 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.220 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.220 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.220 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.220 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:19.220 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.220 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.477 
11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.477 11:33:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.477 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:19.477 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:19.477 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:19.477 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:19.477 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:19.735 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:19.735 "name": "BaseBdev4", 00:22:19.735 "aliases": [ 00:22:19.735 "d3cf01a4-f1d4-4533-949d-9ab7bbd762f0" 00:22:19.735 ], 00:22:19.735 "product_name": "Malloc disk", 00:22:19.735 "block_size": 512, 00:22:19.735 "num_blocks": 65536, 00:22:19.735 "uuid": "d3cf01a4-f1d4-4533-949d-9ab7bbd762f0", 00:22:19.735 "assigned_rate_limits": { 00:22:19.735 "rw_ios_per_sec": 0, 00:22:19.735 "rw_mbytes_per_sec": 0, 00:22:19.735 "r_mbytes_per_sec": 0, 00:22:19.735 "w_mbytes_per_sec": 0 00:22:19.735 }, 00:22:19.735 "claimed": true, 00:22:19.735 "claim_type": "exclusive_write", 00:22:19.735 "zoned": false, 00:22:19.735 "supported_io_types": { 00:22:19.735 "read": true, 00:22:19.735 "write": true, 00:22:19.735 "unmap": true, 00:22:19.735 "flush": true, 00:22:19.735 "reset": true, 00:22:19.735 "nvme_admin": false, 00:22:19.735 "nvme_io": false, 00:22:19.735 "nvme_io_md": false, 00:22:19.735 "write_zeroes": true, 00:22:19.735 "zcopy": true, 00:22:19.735 "get_zone_info": false, 00:22:19.735 "zone_management": false, 00:22:19.735 "zone_append": false, 00:22:19.735 "compare": false, 00:22:19.735 "compare_and_write": false, 00:22:19.735 "abort": true, 00:22:19.735 "seek_hole": false, 00:22:19.735 "seek_data": false, 00:22:19.735 "copy": true, 00:22:19.735 "nvme_iov_md": false 00:22:19.735 }, 00:22:19.735 "memory_domains": [ 00:22:19.735 { 00:22:19.736 "dma_device_id": "system", 00:22:19.736 "dma_device_type": 1 00:22:19.736 }, 00:22:19.736 { 00:22:19.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.736 "dma_device_type": 2 00:22:19.736 } 00:22:19.736 ], 00:22:19.736 "driver_specific": {} 00:22:19.736 }' 00:22:19.736 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.736 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:19.736 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:19.736 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.994 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:19.994 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:19.994 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.994 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:19.994 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:19.994 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:22:19.994 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.251 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.252 11:33:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:20.510 [2024-07-13 11:33:55.041127] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:20.510 [2024-07-13 11:33:55.041153] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:20.510 [2024-07-13 11:33:55.041210] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:20.510 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.768 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.768 "name": "Existed_Raid", 00:22:20.768 "uuid": "bf178639-0c5b-4c90-ae15-648caa4191b9", 00:22:20.768 "strip_size_kb": 64, 00:22:20.768 "state": "offline", 00:22:20.768 "raid_level": "raid0", 00:22:20.768 "superblock": false, 00:22:20.768 "num_base_bdevs": 4, 00:22:20.768 "num_base_bdevs_discovered": 3, 00:22:20.768 "num_base_bdevs_operational": 3, 00:22:20.768 "base_bdevs_list": [ 00:22:20.768 { 00:22:20.768 "name": null, 00:22:20.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:20.768 "is_configured": false, 00:22:20.768 "data_offset": 0, 00:22:20.768 "data_size": 65536 00:22:20.768 }, 00:22:20.768 { 00:22:20.768 "name": "BaseBdev2", 00:22:20.768 "uuid": 
"cc220c66-a074-4770-8eec-c0df97dd1a21", 00:22:20.768 "is_configured": true, 00:22:20.768 "data_offset": 0, 00:22:20.768 "data_size": 65536 00:22:20.768 }, 00:22:20.768 { 00:22:20.768 "name": "BaseBdev3", 00:22:20.768 "uuid": "e4c5747f-ff2b-456c-9357-b350212a78d4", 00:22:20.768 "is_configured": true, 00:22:20.768 "data_offset": 0, 00:22:20.768 "data_size": 65536 00:22:20.768 }, 00:22:20.768 { 00:22:20.768 "name": "BaseBdev4", 00:22:20.768 "uuid": "d3cf01a4-f1d4-4533-949d-9ab7bbd762f0", 00:22:20.768 "is_configured": true, 00:22:20.768 "data_offset": 0, 00:22:20.768 "data_size": 65536 00:22:20.768 } 00:22:20.768 ] 00:22:20.768 }' 00:22:20.768 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.768 11:33:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.334 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:21.334 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:21.334 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.334 11:33:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:21.592 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:21.592 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:21.592 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:21.851 [2024-07-13 11:33:56.487735] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:21.851 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:21.851 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:21.851 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:21.851 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.110 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:22.110 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:22.110 11:33:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:22.369 [2024-07-13 11:33:56.987146] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:22.369 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:22.369 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:22.369 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.369 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:22.627 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:22.627 11:33:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:22.627 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:22.885 [2024-07-13 11:33:57.423214] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:22.885 [2024-07-13 11:33:57.423275] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:22:22.885 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:22.885 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:22.885 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.885 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:23.143 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:23.143 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:23.143 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:23.143 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:23.143 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:23.143 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:23.402 BaseBdev2 00:22:23.402 11:33:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:23.402 11:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:23.402 11:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:23.402 11:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:23.402 11:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:23.402 11:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:23.402 11:33:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:23.402 11:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:23.661 [ 00:22:23.661 { 00:22:23.661 "name": "BaseBdev2", 00:22:23.661 "aliases": [ 00:22:23.661 "528a08fc-7429-443d-b501-9eabe052c42e" 00:22:23.661 ], 00:22:23.661 "product_name": "Malloc disk", 00:22:23.661 "block_size": 512, 00:22:23.661 "num_blocks": 65536, 00:22:23.661 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:23.661 "assigned_rate_limits": { 00:22:23.661 "rw_ios_per_sec": 0, 00:22:23.661 "rw_mbytes_per_sec": 0, 00:22:23.661 "r_mbytes_per_sec": 0, 00:22:23.661 "w_mbytes_per_sec": 0 00:22:23.661 }, 00:22:23.661 "claimed": false, 00:22:23.661 "zoned": false, 00:22:23.661 "supported_io_types": { 00:22:23.661 "read": true, 00:22:23.661 "write": true, 00:22:23.661 "unmap": 
true, 00:22:23.661 "flush": true, 00:22:23.661 "reset": true, 00:22:23.661 "nvme_admin": false, 00:22:23.661 "nvme_io": false, 00:22:23.661 "nvme_io_md": false, 00:22:23.661 "write_zeroes": true, 00:22:23.661 "zcopy": true, 00:22:23.661 "get_zone_info": false, 00:22:23.661 "zone_management": false, 00:22:23.661 "zone_append": false, 00:22:23.661 "compare": false, 00:22:23.661 "compare_and_write": false, 00:22:23.661 "abort": true, 00:22:23.661 "seek_hole": false, 00:22:23.661 "seek_data": false, 00:22:23.661 "copy": true, 00:22:23.661 "nvme_iov_md": false 00:22:23.661 }, 00:22:23.661 "memory_domains": [ 00:22:23.661 { 00:22:23.661 "dma_device_id": "system", 00:22:23.661 "dma_device_type": 1 00:22:23.661 }, 00:22:23.661 { 00:22:23.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:23.661 "dma_device_type": 2 00:22:23.661 } 00:22:23.661 ], 00:22:23.661 "driver_specific": {} 00:22:23.661 } 00:22:23.661 ] 00:22:23.661 11:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:23.661 11:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:23.661 11:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:23.661 11:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:23.919 BaseBdev3 00:22:23.919 11:33:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:23.919 11:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:23.919 11:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:23.919 11:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:23.919 11:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:23.919 11:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:23.919 11:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:24.178 11:33:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:24.437 [ 00:22:24.437 { 00:22:24.437 "name": "BaseBdev3", 00:22:24.437 "aliases": [ 00:22:24.437 "89479b26-a6e4-4fe2-959c-afb0ae6d87fb" 00:22:24.437 ], 00:22:24.437 "product_name": "Malloc disk", 00:22:24.437 "block_size": 512, 00:22:24.437 "num_blocks": 65536, 00:22:24.437 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:24.437 "assigned_rate_limits": { 00:22:24.437 "rw_ios_per_sec": 0, 00:22:24.437 "rw_mbytes_per_sec": 0, 00:22:24.437 "r_mbytes_per_sec": 0, 00:22:24.437 "w_mbytes_per_sec": 0 00:22:24.437 }, 00:22:24.437 "claimed": false, 00:22:24.437 "zoned": false, 00:22:24.437 "supported_io_types": { 00:22:24.437 "read": true, 00:22:24.437 "write": true, 00:22:24.437 "unmap": true, 00:22:24.437 "flush": true, 00:22:24.437 "reset": true, 00:22:24.437 "nvme_admin": false, 00:22:24.437 "nvme_io": false, 00:22:24.437 "nvme_io_md": false, 00:22:24.437 "write_zeroes": true, 00:22:24.437 "zcopy": true, 00:22:24.437 "get_zone_info": false, 00:22:24.437 "zone_management": false, 00:22:24.437 "zone_append": false, 00:22:24.437 
"compare": false, 00:22:24.437 "compare_and_write": false, 00:22:24.437 "abort": true, 00:22:24.437 "seek_hole": false, 00:22:24.437 "seek_data": false, 00:22:24.437 "copy": true, 00:22:24.437 "nvme_iov_md": false 00:22:24.437 }, 00:22:24.437 "memory_domains": [ 00:22:24.437 { 00:22:24.437 "dma_device_id": "system", 00:22:24.437 "dma_device_type": 1 00:22:24.437 }, 00:22:24.437 { 00:22:24.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.437 "dma_device_type": 2 00:22:24.437 } 00:22:24.437 ], 00:22:24.437 "driver_specific": {} 00:22:24.437 } 00:22:24.437 ] 00:22:24.437 11:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:24.437 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:24.437 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:24.437 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:24.696 BaseBdev4 00:22:24.696 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:24.696 11:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:24.696 11:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:24.696 11:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:24.696 11:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:24.696 11:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:24.696 11:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:24.955 11:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:25.213 [ 00:22:25.213 { 00:22:25.213 "name": "BaseBdev4", 00:22:25.213 "aliases": [ 00:22:25.213 "63f4fa03-9dfa-4712-a98e-25e7ad308b67" 00:22:25.213 ], 00:22:25.213 "product_name": "Malloc disk", 00:22:25.213 "block_size": 512, 00:22:25.213 "num_blocks": 65536, 00:22:25.213 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:25.213 "assigned_rate_limits": { 00:22:25.213 "rw_ios_per_sec": 0, 00:22:25.213 "rw_mbytes_per_sec": 0, 00:22:25.213 "r_mbytes_per_sec": 0, 00:22:25.213 "w_mbytes_per_sec": 0 00:22:25.213 }, 00:22:25.213 "claimed": false, 00:22:25.213 "zoned": false, 00:22:25.213 "supported_io_types": { 00:22:25.213 "read": true, 00:22:25.213 "write": true, 00:22:25.213 "unmap": true, 00:22:25.213 "flush": true, 00:22:25.213 "reset": true, 00:22:25.213 "nvme_admin": false, 00:22:25.213 "nvme_io": false, 00:22:25.213 "nvme_io_md": false, 00:22:25.213 "write_zeroes": true, 00:22:25.213 "zcopy": true, 00:22:25.213 "get_zone_info": false, 00:22:25.213 "zone_management": false, 00:22:25.213 "zone_append": false, 00:22:25.213 "compare": false, 00:22:25.213 "compare_and_write": false, 00:22:25.213 "abort": true, 00:22:25.213 "seek_hole": false, 00:22:25.213 "seek_data": false, 00:22:25.213 "copy": true, 00:22:25.213 "nvme_iov_md": false 00:22:25.213 }, 00:22:25.213 "memory_domains": [ 00:22:25.213 { 00:22:25.213 "dma_device_id": "system", 00:22:25.213 
"dma_device_type": 1 00:22:25.213 }, 00:22:25.213 { 00:22:25.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.213 "dma_device_type": 2 00:22:25.213 } 00:22:25.213 ], 00:22:25.213 "driver_specific": {} 00:22:25.213 } 00:22:25.213 ] 00:22:25.213 11:33:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:25.213 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:25.213 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:25.213 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:25.213 [2024-07-13 11:33:59.946795] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:25.213 [2024-07-13 11:33:59.946887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:25.213 [2024-07-13 11:33:59.946929] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:25.213 [2024-07-13 11:33:59.948738] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:25.213 [2024-07-13 11:33:59.948805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.472 11:33:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.730 11:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.730 "name": "Existed_Raid", 00:22:25.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.730 "strip_size_kb": 64, 00:22:25.730 "state": "configuring", 00:22:25.730 "raid_level": "raid0", 00:22:25.730 "superblock": false, 00:22:25.730 "num_base_bdevs": 4, 00:22:25.730 "num_base_bdevs_discovered": 3, 00:22:25.730 "num_base_bdevs_operational": 4, 00:22:25.730 "base_bdevs_list": [ 00:22:25.730 { 00:22:25.730 "name": "BaseBdev1", 00:22:25.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.730 "is_configured": 
false, 00:22:25.730 "data_offset": 0, 00:22:25.730 "data_size": 0 00:22:25.730 }, 00:22:25.730 { 00:22:25.730 "name": "BaseBdev2", 00:22:25.730 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:25.730 "is_configured": true, 00:22:25.730 "data_offset": 0, 00:22:25.730 "data_size": 65536 00:22:25.730 }, 00:22:25.730 { 00:22:25.730 "name": "BaseBdev3", 00:22:25.730 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:25.730 "is_configured": true, 00:22:25.730 "data_offset": 0, 00:22:25.730 "data_size": 65536 00:22:25.730 }, 00:22:25.730 { 00:22:25.730 "name": "BaseBdev4", 00:22:25.730 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:25.730 "is_configured": true, 00:22:25.730 "data_offset": 0, 00:22:25.730 "data_size": 65536 00:22:25.730 } 00:22:25.730 ] 00:22:25.730 }' 00:22:25.730 11:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.730 11:34:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:26.296 11:34:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:26.554 [2024-07-13 11:34:01.067089] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.554 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.812 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.812 "name": "Existed_Raid", 00:22:26.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.812 "strip_size_kb": 64, 00:22:26.812 "state": "configuring", 00:22:26.812 "raid_level": "raid0", 00:22:26.812 "superblock": false, 00:22:26.812 "num_base_bdevs": 4, 00:22:26.812 "num_base_bdevs_discovered": 2, 00:22:26.812 "num_base_bdevs_operational": 4, 00:22:26.812 "base_bdevs_list": [ 00:22:26.812 { 00:22:26.812 "name": "BaseBdev1", 00:22:26.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.812 "is_configured": false, 00:22:26.812 "data_offset": 0, 00:22:26.812 "data_size": 0 00:22:26.812 }, 00:22:26.812 { 00:22:26.812 "name": null, 00:22:26.812 "uuid": 
"528a08fc-7429-443d-b501-9eabe052c42e", 00:22:26.812 "is_configured": false, 00:22:26.812 "data_offset": 0, 00:22:26.812 "data_size": 65536 00:22:26.812 }, 00:22:26.812 { 00:22:26.812 "name": "BaseBdev3", 00:22:26.812 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:26.812 "is_configured": true, 00:22:26.812 "data_offset": 0, 00:22:26.812 "data_size": 65536 00:22:26.812 }, 00:22:26.812 { 00:22:26.812 "name": "BaseBdev4", 00:22:26.812 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:26.812 "is_configured": true, 00:22:26.812 "data_offset": 0, 00:22:26.812 "data_size": 65536 00:22:26.812 } 00:22:26.812 ] 00:22:26.812 }' 00:22:26.812 11:34:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.812 11:34:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.377 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.377 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:27.635 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:27.635 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:27.893 [2024-07-13 11:34:02.541136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:27.893 BaseBdev1 00:22:27.893 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:27.893 11:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:27.893 11:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:27.893 11:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:27.893 11:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:27.893 11:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:27.893 11:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:28.151 11:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:28.411 [ 00:22:28.411 { 00:22:28.411 "name": "BaseBdev1", 00:22:28.411 "aliases": [ 00:22:28.411 "633aa040-9608-4f81-ad4d-06ce3dbaa7bb" 00:22:28.411 ], 00:22:28.411 "product_name": "Malloc disk", 00:22:28.411 "block_size": 512, 00:22:28.411 "num_blocks": 65536, 00:22:28.411 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:28.411 "assigned_rate_limits": { 00:22:28.411 "rw_ios_per_sec": 0, 00:22:28.411 "rw_mbytes_per_sec": 0, 00:22:28.411 "r_mbytes_per_sec": 0, 00:22:28.411 "w_mbytes_per_sec": 0 00:22:28.411 }, 00:22:28.411 "claimed": true, 00:22:28.411 "claim_type": "exclusive_write", 00:22:28.411 "zoned": false, 00:22:28.411 "supported_io_types": { 00:22:28.411 "read": true, 00:22:28.411 "write": true, 00:22:28.411 "unmap": true, 00:22:28.411 "flush": true, 00:22:28.411 "reset": true, 00:22:28.411 "nvme_admin": false, 00:22:28.411 "nvme_io": false, 00:22:28.411 
"nvme_io_md": false, 00:22:28.411 "write_zeroes": true, 00:22:28.411 "zcopy": true, 00:22:28.411 "get_zone_info": false, 00:22:28.411 "zone_management": false, 00:22:28.411 "zone_append": false, 00:22:28.411 "compare": false, 00:22:28.411 "compare_and_write": false, 00:22:28.411 "abort": true, 00:22:28.411 "seek_hole": false, 00:22:28.411 "seek_data": false, 00:22:28.411 "copy": true, 00:22:28.411 "nvme_iov_md": false 00:22:28.411 }, 00:22:28.411 "memory_domains": [ 00:22:28.411 { 00:22:28.411 "dma_device_id": "system", 00:22:28.411 "dma_device_type": 1 00:22:28.411 }, 00:22:28.411 { 00:22:28.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.411 "dma_device_type": 2 00:22:28.411 } 00:22:28.411 ], 00:22:28.411 "driver_specific": {} 00:22:28.411 } 00:22:28.411 ] 00:22:28.411 11:34:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:28.411 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:28.411 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.411 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:28.411 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:28.411 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:28.411 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:28.411 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.412 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.412 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.412 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.412 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.412 11:34:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.684 11:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.684 "name": "Existed_Raid", 00:22:28.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.684 "strip_size_kb": 64, 00:22:28.684 "state": "configuring", 00:22:28.684 "raid_level": "raid0", 00:22:28.684 "superblock": false, 00:22:28.684 "num_base_bdevs": 4, 00:22:28.684 "num_base_bdevs_discovered": 3, 00:22:28.684 "num_base_bdevs_operational": 4, 00:22:28.684 "base_bdevs_list": [ 00:22:28.684 { 00:22:28.684 "name": "BaseBdev1", 00:22:28.684 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:28.684 "is_configured": true, 00:22:28.684 "data_offset": 0, 00:22:28.684 "data_size": 65536 00:22:28.684 }, 00:22:28.684 { 00:22:28.684 "name": null, 00:22:28.684 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:28.684 "is_configured": false, 00:22:28.684 "data_offset": 0, 00:22:28.684 "data_size": 65536 00:22:28.684 }, 00:22:28.684 { 00:22:28.684 "name": "BaseBdev3", 00:22:28.684 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:28.684 "is_configured": true, 00:22:28.684 "data_offset": 0, 00:22:28.684 "data_size": 65536 00:22:28.684 }, 00:22:28.684 { 00:22:28.684 
"name": "BaseBdev4", 00:22:28.684 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:28.684 "is_configured": true, 00:22:28.684 "data_offset": 0, 00:22:28.684 "data_size": 65536 00:22:28.684 } 00:22:28.684 ] 00:22:28.684 }' 00:22:28.684 11:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.684 11:34:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:29.261 11:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.261 11:34:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:29.519 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:29.519 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:29.777 [2024-07-13 11:34:04.341487] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.777 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.035 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.035 "name": "Existed_Raid", 00:22:30.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.035 "strip_size_kb": 64, 00:22:30.035 "state": "configuring", 00:22:30.035 "raid_level": "raid0", 00:22:30.035 "superblock": false, 00:22:30.035 "num_base_bdevs": 4, 00:22:30.035 "num_base_bdevs_discovered": 2, 00:22:30.035 "num_base_bdevs_operational": 4, 00:22:30.035 "base_bdevs_list": [ 00:22:30.035 { 00:22:30.035 "name": "BaseBdev1", 00:22:30.035 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:30.035 "is_configured": true, 00:22:30.035 "data_offset": 0, 00:22:30.035 "data_size": 65536 00:22:30.035 }, 00:22:30.035 { 00:22:30.035 "name": null, 00:22:30.035 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:30.035 "is_configured": false, 00:22:30.035 "data_offset": 0, 00:22:30.035 "data_size": 
65536 00:22:30.035 }, 00:22:30.035 { 00:22:30.035 "name": null, 00:22:30.035 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:30.035 "is_configured": false, 00:22:30.035 "data_offset": 0, 00:22:30.035 "data_size": 65536 00:22:30.035 }, 00:22:30.035 { 00:22:30.035 "name": "BaseBdev4", 00:22:30.035 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:30.035 "is_configured": true, 00:22:30.035 "data_offset": 0, 00:22:30.035 "data_size": 65536 00:22:30.035 } 00:22:30.035 ] 00:22:30.035 }' 00:22:30.035 11:34:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.035 11:34:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:30.601 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.601 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:30.859 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:30.859 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:31.117 [2024-07-13 11:34:05.701761] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.117 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:31.376 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:31.376 "name": "Existed_Raid", 00:22:31.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.376 "strip_size_kb": 64, 00:22:31.376 "state": "configuring", 00:22:31.376 "raid_level": "raid0", 00:22:31.376 "superblock": false, 00:22:31.376 "num_base_bdevs": 4, 00:22:31.376 "num_base_bdevs_discovered": 3, 00:22:31.376 "num_base_bdevs_operational": 4, 00:22:31.376 "base_bdevs_list": [ 00:22:31.376 { 00:22:31.376 "name": "BaseBdev1", 00:22:31.376 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:31.376 
"is_configured": true, 00:22:31.376 "data_offset": 0, 00:22:31.376 "data_size": 65536 00:22:31.376 }, 00:22:31.376 { 00:22:31.376 "name": null, 00:22:31.376 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:31.376 "is_configured": false, 00:22:31.376 "data_offset": 0, 00:22:31.376 "data_size": 65536 00:22:31.376 }, 00:22:31.376 { 00:22:31.376 "name": "BaseBdev3", 00:22:31.376 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:31.376 "is_configured": true, 00:22:31.376 "data_offset": 0, 00:22:31.376 "data_size": 65536 00:22:31.376 }, 00:22:31.376 { 00:22:31.376 "name": "BaseBdev4", 00:22:31.376 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:31.376 "is_configured": true, 00:22:31.376 "data_offset": 0, 00:22:31.376 "data_size": 65536 00:22:31.376 } 00:22:31.376 ] 00:22:31.376 }' 00:22:31.376 11:34:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:31.376 11:34:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:31.942 11:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.942 11:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:32.200 11:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:32.200 11:34:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:32.459 [2024-07-13 11:34:07.017987] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.459 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.717 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.717 "name": "Existed_Raid", 00:22:32.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.717 "strip_size_kb": 64, 00:22:32.717 "state": "configuring", 00:22:32.717 "raid_level": "raid0", 00:22:32.717 "superblock": false, 00:22:32.717 
"num_base_bdevs": 4, 00:22:32.717 "num_base_bdevs_discovered": 2, 00:22:32.717 "num_base_bdevs_operational": 4, 00:22:32.717 "base_bdevs_list": [ 00:22:32.717 { 00:22:32.717 "name": null, 00:22:32.717 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:32.717 "is_configured": false, 00:22:32.717 "data_offset": 0, 00:22:32.717 "data_size": 65536 00:22:32.717 }, 00:22:32.717 { 00:22:32.717 "name": null, 00:22:32.717 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:32.717 "is_configured": false, 00:22:32.717 "data_offset": 0, 00:22:32.717 "data_size": 65536 00:22:32.717 }, 00:22:32.717 { 00:22:32.717 "name": "BaseBdev3", 00:22:32.717 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:32.717 "is_configured": true, 00:22:32.717 "data_offset": 0, 00:22:32.717 "data_size": 65536 00:22:32.717 }, 00:22:32.717 { 00:22:32.717 "name": "BaseBdev4", 00:22:32.717 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:32.717 "is_configured": true, 00:22:32.717 "data_offset": 0, 00:22:32.717 "data_size": 65536 00:22:32.717 } 00:22:32.717 ] 00:22:32.717 }' 00:22:32.717 11:34:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.717 11:34:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:33.284 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.284 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:33.861 [2024-07-13 11:34:08.516793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.861 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.119 11:34:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:34.119 "name": "Existed_Raid", 00:22:34.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.119 "strip_size_kb": 64, 00:22:34.119 "state": "configuring", 00:22:34.119 "raid_level": "raid0", 00:22:34.119 "superblock": false, 00:22:34.119 "num_base_bdevs": 4, 00:22:34.119 "num_base_bdevs_discovered": 3, 00:22:34.119 "num_base_bdevs_operational": 4, 00:22:34.119 "base_bdevs_list": [ 00:22:34.119 { 00:22:34.119 "name": null, 00:22:34.119 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:34.119 "is_configured": false, 00:22:34.119 "data_offset": 0, 00:22:34.119 "data_size": 65536 00:22:34.119 }, 00:22:34.119 { 00:22:34.119 "name": "BaseBdev2", 00:22:34.119 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:34.119 "is_configured": true, 00:22:34.119 "data_offset": 0, 00:22:34.119 "data_size": 65536 00:22:34.119 }, 00:22:34.119 { 00:22:34.119 "name": "BaseBdev3", 00:22:34.119 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:34.119 "is_configured": true, 00:22:34.119 "data_offset": 0, 00:22:34.119 "data_size": 65536 00:22:34.119 }, 00:22:34.119 { 00:22:34.119 "name": "BaseBdev4", 00:22:34.119 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:34.119 "is_configured": true, 00:22:34.119 "data_offset": 0, 00:22:34.119 "data_size": 65536 00:22:34.119 } 00:22:34.119 ] 00:22:34.119 }' 00:22:34.119 11:34:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:34.119 11:34:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:34.685 11:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:34.685 11:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.943 11:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:34.943 11:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:34.943 11:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.200 11:34:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 633aa040-9608-4f81-ad4d-06ce3dbaa7bb 00:22:35.458 [2024-07-13 11:34:10.090002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:35.458 [2024-07-13 11:34:10.090060] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:35.458 [2024-07-13 11:34:10.090075] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:35.458 [2024-07-13 11:34:10.090185] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:35.458 [2024-07-13 11:34:10.090522] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:35.458 [2024-07-13 11:34:10.090547] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:22:35.458 [2024-07-13 11:34:10.090789] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.458 NewBaseBdev 00:22:35.458 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:22:35.458 11:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:35.458 11:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:35.458 11:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:35.458 11:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:35.458 11:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:35.458 11:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:35.717 11:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:35.975 [ 00:22:35.975 { 00:22:35.975 "name": "NewBaseBdev", 00:22:35.975 "aliases": [ 00:22:35.975 "633aa040-9608-4f81-ad4d-06ce3dbaa7bb" 00:22:35.975 ], 00:22:35.975 "product_name": "Malloc disk", 00:22:35.975 "block_size": 512, 00:22:35.975 "num_blocks": 65536, 00:22:35.975 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:35.975 "assigned_rate_limits": { 00:22:35.975 "rw_ios_per_sec": 0, 00:22:35.975 "rw_mbytes_per_sec": 0, 00:22:35.975 "r_mbytes_per_sec": 0, 00:22:35.975 "w_mbytes_per_sec": 0 00:22:35.975 }, 00:22:35.975 "claimed": true, 00:22:35.975 "claim_type": "exclusive_write", 00:22:35.975 "zoned": false, 00:22:35.975 "supported_io_types": { 00:22:35.975 "read": true, 00:22:35.975 "write": true, 00:22:35.975 "unmap": true, 00:22:35.975 "flush": true, 00:22:35.975 "reset": true, 00:22:35.975 "nvme_admin": false, 00:22:35.975 "nvme_io": false, 00:22:35.975 "nvme_io_md": false, 00:22:35.975 "write_zeroes": true, 00:22:35.975 "zcopy": true, 00:22:35.975 "get_zone_info": false, 00:22:35.975 "zone_management": false, 00:22:35.975 "zone_append": false, 00:22:35.975 "compare": false, 00:22:35.975 "compare_and_write": false, 00:22:35.975 "abort": true, 00:22:35.975 "seek_hole": false, 00:22:35.975 "seek_data": false, 00:22:35.975 "copy": true, 00:22:35.975 "nvme_iov_md": false 00:22:35.975 }, 00:22:35.975 "memory_domains": [ 00:22:35.975 { 00:22:35.975 "dma_device_id": "system", 00:22:35.975 "dma_device_type": 1 00:22:35.975 }, 00:22:35.975 { 00:22:35.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.975 "dma_device_type": 2 00:22:35.975 } 00:22:35.975 ], 00:22:35.975 "driver_specific": {} 00:22:35.975 } 00:22:35.975 ] 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.976 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:36.234 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:36.234 "name": "Existed_Raid", 00:22:36.234 "uuid": "36794898-52b8-4765-93d1-ec78e0268bfe", 00:22:36.234 "strip_size_kb": 64, 00:22:36.234 "state": "online", 00:22:36.234 "raid_level": "raid0", 00:22:36.234 "superblock": false, 00:22:36.234 "num_base_bdevs": 4, 00:22:36.234 "num_base_bdevs_discovered": 4, 00:22:36.234 "num_base_bdevs_operational": 4, 00:22:36.234 "base_bdevs_list": [ 00:22:36.234 { 00:22:36.234 "name": "NewBaseBdev", 00:22:36.234 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:36.234 "is_configured": true, 00:22:36.234 "data_offset": 0, 00:22:36.234 "data_size": 65536 00:22:36.234 }, 00:22:36.234 { 00:22:36.234 "name": "BaseBdev2", 00:22:36.234 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:36.234 "is_configured": true, 00:22:36.234 "data_offset": 0, 00:22:36.234 "data_size": 65536 00:22:36.234 }, 00:22:36.234 { 00:22:36.234 "name": "BaseBdev3", 00:22:36.234 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:36.234 "is_configured": true, 00:22:36.234 "data_offset": 0, 00:22:36.234 "data_size": 65536 00:22:36.234 }, 00:22:36.234 { 00:22:36.234 "name": "BaseBdev4", 00:22:36.234 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:36.234 "is_configured": true, 00:22:36.234 "data_offset": 0, 00:22:36.234 "data_size": 65536 00:22:36.234 } 00:22:36.234 ] 00:22:36.234 }' 00:22:36.234 11:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:36.234 11:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:36.801 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:36.801 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:36.801 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:36.801 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:36.801 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:36.801 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:36.801 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:36.801 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:37.060 [2024-07-13 11:34:11.690599] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:37.060 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:37.060 "name": "Existed_Raid", 00:22:37.060 "aliases": [ 00:22:37.060 
"36794898-52b8-4765-93d1-ec78e0268bfe" 00:22:37.060 ], 00:22:37.060 "product_name": "Raid Volume", 00:22:37.060 "block_size": 512, 00:22:37.060 "num_blocks": 262144, 00:22:37.060 "uuid": "36794898-52b8-4765-93d1-ec78e0268bfe", 00:22:37.060 "assigned_rate_limits": { 00:22:37.060 "rw_ios_per_sec": 0, 00:22:37.060 "rw_mbytes_per_sec": 0, 00:22:37.060 "r_mbytes_per_sec": 0, 00:22:37.060 "w_mbytes_per_sec": 0 00:22:37.060 }, 00:22:37.060 "claimed": false, 00:22:37.060 "zoned": false, 00:22:37.060 "supported_io_types": { 00:22:37.060 "read": true, 00:22:37.060 "write": true, 00:22:37.060 "unmap": true, 00:22:37.060 "flush": true, 00:22:37.060 "reset": true, 00:22:37.060 "nvme_admin": false, 00:22:37.060 "nvme_io": false, 00:22:37.060 "nvme_io_md": false, 00:22:37.060 "write_zeroes": true, 00:22:37.060 "zcopy": false, 00:22:37.060 "get_zone_info": false, 00:22:37.060 "zone_management": false, 00:22:37.060 "zone_append": false, 00:22:37.060 "compare": false, 00:22:37.060 "compare_and_write": false, 00:22:37.060 "abort": false, 00:22:37.060 "seek_hole": false, 00:22:37.060 "seek_data": false, 00:22:37.060 "copy": false, 00:22:37.060 "nvme_iov_md": false 00:22:37.060 }, 00:22:37.060 "memory_domains": [ 00:22:37.060 { 00:22:37.060 "dma_device_id": "system", 00:22:37.060 "dma_device_type": 1 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.060 "dma_device_type": 2 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "dma_device_id": "system", 00:22:37.060 "dma_device_type": 1 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.060 "dma_device_type": 2 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "dma_device_id": "system", 00:22:37.060 "dma_device_type": 1 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.060 "dma_device_type": 2 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "dma_device_id": "system", 00:22:37.060 "dma_device_type": 1 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.060 "dma_device_type": 2 00:22:37.060 } 00:22:37.060 ], 00:22:37.060 "driver_specific": { 00:22:37.060 "raid": { 00:22:37.060 "uuid": "36794898-52b8-4765-93d1-ec78e0268bfe", 00:22:37.060 "strip_size_kb": 64, 00:22:37.060 "state": "online", 00:22:37.060 "raid_level": "raid0", 00:22:37.060 "superblock": false, 00:22:37.060 "num_base_bdevs": 4, 00:22:37.060 "num_base_bdevs_discovered": 4, 00:22:37.060 "num_base_bdevs_operational": 4, 00:22:37.060 "base_bdevs_list": [ 00:22:37.060 { 00:22:37.060 "name": "NewBaseBdev", 00:22:37.060 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:37.060 "is_configured": true, 00:22:37.060 "data_offset": 0, 00:22:37.060 "data_size": 65536 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "name": "BaseBdev2", 00:22:37.060 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:37.060 "is_configured": true, 00:22:37.060 "data_offset": 0, 00:22:37.060 "data_size": 65536 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "name": "BaseBdev3", 00:22:37.060 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:37.060 "is_configured": true, 00:22:37.060 "data_offset": 0, 00:22:37.060 "data_size": 65536 00:22:37.060 }, 00:22:37.060 { 00:22:37.060 "name": "BaseBdev4", 00:22:37.060 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:37.060 "is_configured": true, 00:22:37.060 "data_offset": 0, 00:22:37.060 "data_size": 65536 00:22:37.060 } 00:22:37.060 ] 00:22:37.060 } 00:22:37.060 } 00:22:37.060 }' 00:22:37.060 11:34:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:37.060 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:37.060 BaseBdev2 00:22:37.060 BaseBdev3 00:22:37.060 BaseBdev4' 00:22:37.060 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:37.060 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:37.060 11:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:37.319 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:37.319 "name": "NewBaseBdev", 00:22:37.319 "aliases": [ 00:22:37.319 "633aa040-9608-4f81-ad4d-06ce3dbaa7bb" 00:22:37.319 ], 00:22:37.319 "product_name": "Malloc disk", 00:22:37.319 "block_size": 512, 00:22:37.319 "num_blocks": 65536, 00:22:37.319 "uuid": "633aa040-9608-4f81-ad4d-06ce3dbaa7bb", 00:22:37.319 "assigned_rate_limits": { 00:22:37.319 "rw_ios_per_sec": 0, 00:22:37.319 "rw_mbytes_per_sec": 0, 00:22:37.319 "r_mbytes_per_sec": 0, 00:22:37.319 "w_mbytes_per_sec": 0 00:22:37.319 }, 00:22:37.319 "claimed": true, 00:22:37.319 "claim_type": "exclusive_write", 00:22:37.319 "zoned": false, 00:22:37.319 "supported_io_types": { 00:22:37.319 "read": true, 00:22:37.319 "write": true, 00:22:37.319 "unmap": true, 00:22:37.319 "flush": true, 00:22:37.319 "reset": true, 00:22:37.319 "nvme_admin": false, 00:22:37.319 "nvme_io": false, 00:22:37.319 "nvme_io_md": false, 00:22:37.319 "write_zeroes": true, 00:22:37.319 "zcopy": true, 00:22:37.319 "get_zone_info": false, 00:22:37.319 "zone_management": false, 00:22:37.319 "zone_append": false, 00:22:37.319 "compare": false, 00:22:37.319 "compare_and_write": false, 00:22:37.319 "abort": true, 00:22:37.319 "seek_hole": false, 00:22:37.319 "seek_data": false, 00:22:37.319 "copy": true, 00:22:37.319 "nvme_iov_md": false 00:22:37.319 }, 00:22:37.319 "memory_domains": [ 00:22:37.319 { 00:22:37.319 "dma_device_id": "system", 00:22:37.319 "dma_device_type": 1 00:22:37.319 }, 00:22:37.319 { 00:22:37.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.319 "dma_device_type": 2 00:22:37.319 } 00:22:37.319 ], 00:22:37.319 "driver_specific": {} 00:22:37.319 }' 00:22:37.319 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:37.577 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:37.577 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:37.577 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:37.577 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:37.577 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:37.577 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:37.577 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:37.834 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:37.834 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:37.834 11:34:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:37.834 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:37.834 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:37.834 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:37.834 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:38.092 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:38.092 "name": "BaseBdev2", 00:22:38.092 "aliases": [ 00:22:38.092 "528a08fc-7429-443d-b501-9eabe052c42e" 00:22:38.092 ], 00:22:38.092 "product_name": "Malloc disk", 00:22:38.092 "block_size": 512, 00:22:38.092 "num_blocks": 65536, 00:22:38.092 "uuid": "528a08fc-7429-443d-b501-9eabe052c42e", 00:22:38.092 "assigned_rate_limits": { 00:22:38.092 "rw_ios_per_sec": 0, 00:22:38.092 "rw_mbytes_per_sec": 0, 00:22:38.092 "r_mbytes_per_sec": 0, 00:22:38.092 "w_mbytes_per_sec": 0 00:22:38.092 }, 00:22:38.092 "claimed": true, 00:22:38.092 "claim_type": "exclusive_write", 00:22:38.092 "zoned": false, 00:22:38.092 "supported_io_types": { 00:22:38.092 "read": true, 00:22:38.092 "write": true, 00:22:38.092 "unmap": true, 00:22:38.092 "flush": true, 00:22:38.092 "reset": true, 00:22:38.092 "nvme_admin": false, 00:22:38.092 "nvme_io": false, 00:22:38.092 "nvme_io_md": false, 00:22:38.092 "write_zeroes": true, 00:22:38.092 "zcopy": true, 00:22:38.092 "get_zone_info": false, 00:22:38.092 "zone_management": false, 00:22:38.092 "zone_append": false, 00:22:38.092 "compare": false, 00:22:38.092 "compare_and_write": false, 00:22:38.092 "abort": true, 00:22:38.092 "seek_hole": false, 00:22:38.092 "seek_data": false, 00:22:38.092 "copy": true, 00:22:38.092 "nvme_iov_md": false 00:22:38.092 }, 00:22:38.092 "memory_domains": [ 00:22:38.092 { 00:22:38.092 "dma_device_id": "system", 00:22:38.092 "dma_device_type": 1 00:22:38.092 }, 00:22:38.092 { 00:22:38.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.092 "dma_device_type": 2 00:22:38.092 } 00:22:38.092 ], 00:22:38.092 "driver_specific": {} 00:22:38.092 }' 00:22:38.092 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.092 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.092 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:38.092 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.349 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.349 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:38.349 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.349 11:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:38.349 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:38.349 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.607 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:38.607 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:38.607 11:34:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:38.607 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:38.607 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:38.865 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:38.865 "name": "BaseBdev3", 00:22:38.865 "aliases": [ 00:22:38.865 "89479b26-a6e4-4fe2-959c-afb0ae6d87fb" 00:22:38.865 ], 00:22:38.865 "product_name": "Malloc disk", 00:22:38.865 "block_size": 512, 00:22:38.865 "num_blocks": 65536, 00:22:38.865 "uuid": "89479b26-a6e4-4fe2-959c-afb0ae6d87fb", 00:22:38.865 "assigned_rate_limits": { 00:22:38.865 "rw_ios_per_sec": 0, 00:22:38.865 "rw_mbytes_per_sec": 0, 00:22:38.865 "r_mbytes_per_sec": 0, 00:22:38.865 "w_mbytes_per_sec": 0 00:22:38.865 }, 00:22:38.865 "claimed": true, 00:22:38.865 "claim_type": "exclusive_write", 00:22:38.865 "zoned": false, 00:22:38.865 "supported_io_types": { 00:22:38.865 "read": true, 00:22:38.865 "write": true, 00:22:38.865 "unmap": true, 00:22:38.865 "flush": true, 00:22:38.865 "reset": true, 00:22:38.865 "nvme_admin": false, 00:22:38.865 "nvme_io": false, 00:22:38.865 "nvme_io_md": false, 00:22:38.865 "write_zeroes": true, 00:22:38.865 "zcopy": true, 00:22:38.865 "get_zone_info": false, 00:22:38.865 "zone_management": false, 00:22:38.865 "zone_append": false, 00:22:38.865 "compare": false, 00:22:38.865 "compare_and_write": false, 00:22:38.865 "abort": true, 00:22:38.865 "seek_hole": false, 00:22:38.865 "seek_data": false, 00:22:38.865 "copy": true, 00:22:38.865 "nvme_iov_md": false 00:22:38.865 }, 00:22:38.865 "memory_domains": [ 00:22:38.865 { 00:22:38.865 "dma_device_id": "system", 00:22:38.865 "dma_device_type": 1 00:22:38.865 }, 00:22:38.865 { 00:22:38.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.865 "dma_device_type": 2 00:22:38.865 } 00:22:38.865 ], 00:22:38.865 "driver_specific": {} 00:22:38.865 }' 00:22:38.865 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.865 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:38.865 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:38.865 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:38.865 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.123 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:39.123 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.123 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.123 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:39.124 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.124 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.124 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:39.124 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:39.124 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:39.124 11:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:39.382 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:39.382 "name": "BaseBdev4", 00:22:39.382 "aliases": [ 00:22:39.382 "63f4fa03-9dfa-4712-a98e-25e7ad308b67" 00:22:39.382 ], 00:22:39.382 "product_name": "Malloc disk", 00:22:39.382 "block_size": 512, 00:22:39.382 "num_blocks": 65536, 00:22:39.382 "uuid": "63f4fa03-9dfa-4712-a98e-25e7ad308b67", 00:22:39.382 "assigned_rate_limits": { 00:22:39.382 "rw_ios_per_sec": 0, 00:22:39.382 "rw_mbytes_per_sec": 0, 00:22:39.382 "r_mbytes_per_sec": 0, 00:22:39.382 "w_mbytes_per_sec": 0 00:22:39.382 }, 00:22:39.382 "claimed": true, 00:22:39.382 "claim_type": "exclusive_write", 00:22:39.382 "zoned": false, 00:22:39.382 "supported_io_types": { 00:22:39.382 "read": true, 00:22:39.382 "write": true, 00:22:39.382 "unmap": true, 00:22:39.382 "flush": true, 00:22:39.382 "reset": true, 00:22:39.382 "nvme_admin": false, 00:22:39.382 "nvme_io": false, 00:22:39.382 "nvme_io_md": false, 00:22:39.382 "write_zeroes": true, 00:22:39.382 "zcopy": true, 00:22:39.382 "get_zone_info": false, 00:22:39.382 "zone_management": false, 00:22:39.382 "zone_append": false, 00:22:39.382 "compare": false, 00:22:39.382 "compare_and_write": false, 00:22:39.382 "abort": true, 00:22:39.382 "seek_hole": false, 00:22:39.382 "seek_data": false, 00:22:39.382 "copy": true, 00:22:39.382 "nvme_iov_md": false 00:22:39.382 }, 00:22:39.382 "memory_domains": [ 00:22:39.382 { 00:22:39.382 "dma_device_id": "system", 00:22:39.382 "dma_device_type": 1 00:22:39.382 }, 00:22:39.382 { 00:22:39.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:39.382 "dma_device_type": 2 00:22:39.382 } 00:22:39.382 ], 00:22:39.382 "driver_specific": {} 00:22:39.382 }' 00:22:39.382 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.640 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:39.640 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:39.640 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.640 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:39.640 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:39.640 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.899 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:39.899 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:39.899 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.899 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:39.899 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:39.899 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:40.159 [2024-07-13 11:34:14.830946] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:40.159 [2024-07-13 11:34:14.832112] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:22:40.159 [2024-07-13 11:34:14.832351] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.159 [2024-07-13 11:34:14.832536] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.159 [2024-07-13 11:34:14.832630] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 134453 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 134453 ']' 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 134453 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 134453 00:22:40.159 killing process with pid 134453 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 134453' 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 134453 00:22:40.159 11:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 134453 00:22:40.159 [2024-07-13 11:34:14.865387] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.417 [2024-07-13 11:34:15.144220] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:41.792 ************************************ 00:22:41.792 END TEST raid_state_function_test 00:22:41.792 ************************************ 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:41.792 00:22:41.792 real 0m34.815s 00:22:41.792 user 1m5.645s 00:22:41.792 sys 0m3.614s 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:41.792 11:34:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:41.792 11:34:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:22:41.792 11:34:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:41.792 11:34:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:41.792 11:34:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:41.792 ************************************ 00:22:41.792 START TEST raid_state_function_test_sb 00:22:41.792 ************************************ 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:41.792 11:34:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=135611 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 135611' 00:22:41.792 Process raid pid: 135611 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # 
waitforlisten 135611 /var/tmp/spdk-raid.sock 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 135611 ']' 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:41.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.792 11:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:41.792 [2024-07-13 11:34:16.295544] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:41.792 [2024-07-13 11:34:16.295968] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.792 [2024-07-13 11:34:16.454780] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.050 [2024-07-13 11:34:16.657746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.308 [2024-07-13 11:34:16.822414] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.567 11:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.567 11:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:22:42.567 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:42.826 [2024-07-13 11:34:17.399611] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:42.826 [2024-07-13 11:34:17.399812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:42.826 [2024-07-13 11:34:17.399934] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:42.826 [2024-07-13 11:34:17.399990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:42.826 [2024-07-13 11:34:17.400076] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:42.826 [2024-07-13 11:34:17.400205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:42.826 [2024-07-13 11:34:17.400291] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:42.826 [2024-07-13 11:34:17.400341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.826 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.085 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.085 "name": "Existed_Raid", 00:22:43.085 "uuid": "f96a493c-79df-4431-947e-6655e1d919a1", 00:22:43.085 "strip_size_kb": 64, 00:22:43.085 "state": "configuring", 00:22:43.085 "raid_level": "raid0", 00:22:43.085 "superblock": true, 00:22:43.085 "num_base_bdevs": 4, 00:22:43.085 "num_base_bdevs_discovered": 0, 00:22:43.085 "num_base_bdevs_operational": 4, 00:22:43.085 "base_bdevs_list": [ 00:22:43.085 { 00:22:43.085 "name": "BaseBdev1", 00:22:43.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.085 "is_configured": false, 00:22:43.085 "data_offset": 0, 00:22:43.085 "data_size": 0 00:22:43.085 }, 00:22:43.085 { 00:22:43.085 "name": "BaseBdev2", 00:22:43.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.085 "is_configured": false, 00:22:43.085 "data_offset": 0, 00:22:43.085 "data_size": 0 00:22:43.085 }, 00:22:43.085 { 00:22:43.085 "name": "BaseBdev3", 00:22:43.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.085 "is_configured": false, 00:22:43.085 "data_offset": 0, 00:22:43.085 "data_size": 0 00:22:43.085 }, 00:22:43.085 { 00:22:43.085 "name": "BaseBdev4", 00:22:43.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.085 "is_configured": false, 00:22:43.085 "data_offset": 0, 00:22:43.085 "data_size": 0 00:22:43.085 } 00:22:43.085 ] 00:22:43.085 }' 00:22:43.085 11:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.085 11:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:43.653 11:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:43.911 [2024-07-13 11:34:18.555664] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:43.911 [2024-07-13 11:34:18.555861] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:22:43.911 11:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:44.169 [2024-07-13 
11:34:18.791743] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.169 [2024-07-13 11:34:18.791894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.169 [2024-07-13 11:34:18.791986] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.169 [2024-07-13 11:34:18.792052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.169 [2024-07-13 11:34:18.792148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.169 [2024-07-13 11:34:18.792210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.169 [2024-07-13 11:34:18.792238] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:44.169 [2024-07-13 11:34:18.792341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:44.169 11:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:44.427 [2024-07-13 11:34:19.008907] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:44.427 BaseBdev1 00:22:44.427 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:44.427 11:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:44.427 11:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:44.427 11:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:44.427 11:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:44.427 11:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:44.427 11:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:44.686 [ 00:22:44.686 { 00:22:44.686 "name": "BaseBdev1", 00:22:44.686 "aliases": [ 00:22:44.686 "9932c8eb-f035-4e20-b9f9-928f0855a7bd" 00:22:44.686 ], 00:22:44.686 "product_name": "Malloc disk", 00:22:44.686 "block_size": 512, 00:22:44.686 "num_blocks": 65536, 00:22:44.686 "uuid": "9932c8eb-f035-4e20-b9f9-928f0855a7bd", 00:22:44.686 "assigned_rate_limits": { 00:22:44.686 "rw_ios_per_sec": 0, 00:22:44.686 "rw_mbytes_per_sec": 0, 00:22:44.686 "r_mbytes_per_sec": 0, 00:22:44.686 "w_mbytes_per_sec": 0 00:22:44.686 }, 00:22:44.686 "claimed": true, 00:22:44.686 "claim_type": "exclusive_write", 00:22:44.686 "zoned": false, 00:22:44.686 "supported_io_types": { 00:22:44.686 "read": true, 00:22:44.686 "write": true, 00:22:44.686 "unmap": true, 00:22:44.686 "flush": true, 00:22:44.686 "reset": true, 00:22:44.686 "nvme_admin": false, 00:22:44.686 "nvme_io": false, 00:22:44.686 "nvme_io_md": false, 00:22:44.686 "write_zeroes": true, 00:22:44.686 "zcopy": true, 00:22:44.686 "get_zone_info": false, 00:22:44.686 "zone_management": false, 00:22:44.686 "zone_append": false, 00:22:44.686 
"compare": false, 00:22:44.686 "compare_and_write": false, 00:22:44.686 "abort": true, 00:22:44.686 "seek_hole": false, 00:22:44.686 "seek_data": false, 00:22:44.686 "copy": true, 00:22:44.686 "nvme_iov_md": false 00:22:44.686 }, 00:22:44.686 "memory_domains": [ 00:22:44.686 { 00:22:44.686 "dma_device_id": "system", 00:22:44.686 "dma_device_type": 1 00:22:44.686 }, 00:22:44.686 { 00:22:44.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.686 "dma_device_type": 2 00:22:44.686 } 00:22:44.686 ], 00:22:44.686 "driver_specific": {} 00:22:44.686 } 00:22:44.686 ] 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.686 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.945 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.945 "name": "Existed_Raid", 00:22:44.945 "uuid": "d8433f21-2b0f-4ecf-86ef-bb4631e2c296", 00:22:44.945 "strip_size_kb": 64, 00:22:44.945 "state": "configuring", 00:22:44.945 "raid_level": "raid0", 00:22:44.945 "superblock": true, 00:22:44.945 "num_base_bdevs": 4, 00:22:44.945 "num_base_bdevs_discovered": 1, 00:22:44.945 "num_base_bdevs_operational": 4, 00:22:44.945 "base_bdevs_list": [ 00:22:44.945 { 00:22:44.945 "name": "BaseBdev1", 00:22:44.945 "uuid": "9932c8eb-f035-4e20-b9f9-928f0855a7bd", 00:22:44.945 "is_configured": true, 00:22:44.945 "data_offset": 2048, 00:22:44.945 "data_size": 63488 00:22:44.945 }, 00:22:44.945 { 00:22:44.945 "name": "BaseBdev2", 00:22:44.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.945 "is_configured": false, 00:22:44.945 "data_offset": 0, 00:22:44.945 "data_size": 0 00:22:44.945 }, 00:22:44.945 { 00:22:44.945 "name": "BaseBdev3", 00:22:44.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.945 "is_configured": false, 00:22:44.945 "data_offset": 0, 00:22:44.945 "data_size": 0 00:22:44.945 }, 00:22:44.945 { 00:22:44.945 "name": "BaseBdev4", 00:22:44.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:44.945 "is_configured": false, 00:22:44.945 "data_offset": 0, 00:22:44.945 
"data_size": 0 00:22:44.945 } 00:22:44.945 ] 00:22:44.945 }' 00:22:44.945 11:34:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.945 11:34:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.880 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:45.880 [2024-07-13 11:34:20.557218] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:45.880 [2024-07-13 11:34:20.557405] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:22:45.880 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:46.139 [2024-07-13 11:34:20.817295] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.139 [2024-07-13 11:34:20.819148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:46.139 [2024-07-13 11:34:20.819332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:46.139 [2024-07-13 11:34:20.819456] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:46.139 [2024-07-13 11:34:20.819575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:46.139 [2024-07-13 11:34:20.819662] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:46.139 [2024-07-13 11:34:20.819735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.139 11:34:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.398 11:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:46.398 "name": "Existed_Raid", 00:22:46.398 "uuid": "1a2c3a1e-63b1-4030-ac2f-b3c3ce262ea0", 00:22:46.398 "strip_size_kb": 64, 00:22:46.398 "state": "configuring", 00:22:46.398 "raid_level": "raid0", 00:22:46.398 "superblock": true, 00:22:46.398 "num_base_bdevs": 4, 00:22:46.398 "num_base_bdevs_discovered": 1, 00:22:46.398 "num_base_bdevs_operational": 4, 00:22:46.398 "base_bdevs_list": [ 00:22:46.398 { 00:22:46.398 "name": "BaseBdev1", 00:22:46.398 "uuid": "9932c8eb-f035-4e20-b9f9-928f0855a7bd", 00:22:46.398 "is_configured": true, 00:22:46.398 "data_offset": 2048, 00:22:46.398 "data_size": 63488 00:22:46.398 }, 00:22:46.398 { 00:22:46.398 "name": "BaseBdev2", 00:22:46.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.398 "is_configured": false, 00:22:46.398 "data_offset": 0, 00:22:46.398 "data_size": 0 00:22:46.398 }, 00:22:46.398 { 00:22:46.398 "name": "BaseBdev3", 00:22:46.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.398 "is_configured": false, 00:22:46.398 "data_offset": 0, 00:22:46.398 "data_size": 0 00:22:46.398 }, 00:22:46.398 { 00:22:46.398 "name": "BaseBdev4", 00:22:46.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.398 "is_configured": false, 00:22:46.398 "data_offset": 0, 00:22:46.398 "data_size": 0 00:22:46.398 } 00:22:46.398 ] 00:22:46.398 }' 00:22:46.398 11:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:46.398 11:34:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.333 11:34:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:47.333 [2024-07-13 11:34:22.028082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:47.333 BaseBdev2 00:22:47.333 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:47.333 11:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:47.333 11:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:47.333 11:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:47.333 11:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:47.333 11:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:47.333 11:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:47.591 11:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:47.850 [ 00:22:47.850 { 00:22:47.850 "name": "BaseBdev2", 00:22:47.850 "aliases": [ 00:22:47.850 "6ec698d6-8617-480a-87e9-b883073c99ae" 00:22:47.850 ], 00:22:47.850 "product_name": "Malloc disk", 00:22:47.850 "block_size": 512, 00:22:47.850 "num_blocks": 65536, 00:22:47.850 "uuid": "6ec698d6-8617-480a-87e9-b883073c99ae", 00:22:47.850 "assigned_rate_limits": { 00:22:47.850 "rw_ios_per_sec": 0, 
00:22:47.850 "rw_mbytes_per_sec": 0, 00:22:47.850 "r_mbytes_per_sec": 0, 00:22:47.850 "w_mbytes_per_sec": 0 00:22:47.850 }, 00:22:47.850 "claimed": true, 00:22:47.850 "claim_type": "exclusive_write", 00:22:47.850 "zoned": false, 00:22:47.850 "supported_io_types": { 00:22:47.850 "read": true, 00:22:47.850 "write": true, 00:22:47.850 "unmap": true, 00:22:47.850 "flush": true, 00:22:47.850 "reset": true, 00:22:47.850 "nvme_admin": false, 00:22:47.850 "nvme_io": false, 00:22:47.850 "nvme_io_md": false, 00:22:47.850 "write_zeroes": true, 00:22:47.850 "zcopy": true, 00:22:47.850 "get_zone_info": false, 00:22:47.850 "zone_management": false, 00:22:47.850 "zone_append": false, 00:22:47.850 "compare": false, 00:22:47.850 "compare_and_write": false, 00:22:47.850 "abort": true, 00:22:47.850 "seek_hole": false, 00:22:47.850 "seek_data": false, 00:22:47.850 "copy": true, 00:22:47.850 "nvme_iov_md": false 00:22:47.850 }, 00:22:47.850 "memory_domains": [ 00:22:47.850 { 00:22:47.850 "dma_device_id": "system", 00:22:47.850 "dma_device_type": 1 00:22:47.850 }, 00:22:47.850 { 00:22:47.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.850 "dma_device_type": 2 00:22:47.850 } 00:22:47.850 ], 00:22:47.850 "driver_specific": {} 00:22:47.850 } 00:22:47.850 ] 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.850 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.108 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.108 "name": "Existed_Raid", 00:22:48.108 "uuid": "1a2c3a1e-63b1-4030-ac2f-b3c3ce262ea0", 00:22:48.108 "strip_size_kb": 64, 00:22:48.108 "state": "configuring", 00:22:48.108 "raid_level": "raid0", 00:22:48.108 "superblock": true, 00:22:48.108 "num_base_bdevs": 4, 00:22:48.108 "num_base_bdevs_discovered": 2, 00:22:48.108 
"num_base_bdevs_operational": 4, 00:22:48.108 "base_bdevs_list": [ 00:22:48.108 { 00:22:48.108 "name": "BaseBdev1", 00:22:48.108 "uuid": "9932c8eb-f035-4e20-b9f9-928f0855a7bd", 00:22:48.108 "is_configured": true, 00:22:48.108 "data_offset": 2048, 00:22:48.108 "data_size": 63488 00:22:48.108 }, 00:22:48.108 { 00:22:48.108 "name": "BaseBdev2", 00:22:48.108 "uuid": "6ec698d6-8617-480a-87e9-b883073c99ae", 00:22:48.109 "is_configured": true, 00:22:48.109 "data_offset": 2048, 00:22:48.109 "data_size": 63488 00:22:48.109 }, 00:22:48.109 { 00:22:48.109 "name": "BaseBdev3", 00:22:48.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.109 "is_configured": false, 00:22:48.109 "data_offset": 0, 00:22:48.109 "data_size": 0 00:22:48.109 }, 00:22:48.109 { 00:22:48.109 "name": "BaseBdev4", 00:22:48.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.109 "is_configured": false, 00:22:48.109 "data_offset": 0, 00:22:48.109 "data_size": 0 00:22:48.109 } 00:22:48.109 ] 00:22:48.109 }' 00:22:48.109 11:34:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.109 11:34:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.673 11:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:49.239 [2024-07-13 11:34:23.680285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:49.239 BaseBdev3 00:22:49.239 11:34:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:49.239 11:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:49.239 11:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:49.239 11:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:49.239 11:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:49.239 11:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:49.239 11:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:49.239 11:34:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:49.497 [ 00:22:49.497 { 00:22:49.497 "name": "BaseBdev3", 00:22:49.497 "aliases": [ 00:22:49.497 "2ebfc0b0-bb9e-492b-80ab-41e23deb5b1c" 00:22:49.497 ], 00:22:49.497 "product_name": "Malloc disk", 00:22:49.497 "block_size": 512, 00:22:49.497 "num_blocks": 65536, 00:22:49.497 "uuid": "2ebfc0b0-bb9e-492b-80ab-41e23deb5b1c", 00:22:49.497 "assigned_rate_limits": { 00:22:49.498 "rw_ios_per_sec": 0, 00:22:49.498 "rw_mbytes_per_sec": 0, 00:22:49.498 "r_mbytes_per_sec": 0, 00:22:49.498 "w_mbytes_per_sec": 0 00:22:49.498 }, 00:22:49.498 "claimed": true, 00:22:49.498 "claim_type": "exclusive_write", 00:22:49.498 "zoned": false, 00:22:49.498 "supported_io_types": { 00:22:49.498 "read": true, 00:22:49.498 "write": true, 00:22:49.498 "unmap": true, 00:22:49.498 "flush": true, 00:22:49.498 "reset": true, 00:22:49.498 "nvme_admin": false, 00:22:49.498 "nvme_io": false, 00:22:49.498 "nvme_io_md": false, 00:22:49.498 
"write_zeroes": true, 00:22:49.498 "zcopy": true, 00:22:49.498 "get_zone_info": false, 00:22:49.498 "zone_management": false, 00:22:49.498 "zone_append": false, 00:22:49.498 "compare": false, 00:22:49.498 "compare_and_write": false, 00:22:49.498 "abort": true, 00:22:49.498 "seek_hole": false, 00:22:49.498 "seek_data": false, 00:22:49.498 "copy": true, 00:22:49.498 "nvme_iov_md": false 00:22:49.498 }, 00:22:49.498 "memory_domains": [ 00:22:49.498 { 00:22:49.498 "dma_device_id": "system", 00:22:49.498 "dma_device_type": 1 00:22:49.498 }, 00:22:49.498 { 00:22:49.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.498 "dma_device_type": 2 00:22:49.498 } 00:22:49.498 ], 00:22:49.498 "driver_specific": {} 00:22:49.498 } 00:22:49.498 ] 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.498 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.756 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.756 "name": "Existed_Raid", 00:22:49.756 "uuid": "1a2c3a1e-63b1-4030-ac2f-b3c3ce262ea0", 00:22:49.756 "strip_size_kb": 64, 00:22:49.756 "state": "configuring", 00:22:49.756 "raid_level": "raid0", 00:22:49.756 "superblock": true, 00:22:49.756 "num_base_bdevs": 4, 00:22:49.756 "num_base_bdevs_discovered": 3, 00:22:49.756 "num_base_bdevs_operational": 4, 00:22:49.756 "base_bdevs_list": [ 00:22:49.756 { 00:22:49.756 "name": "BaseBdev1", 00:22:49.756 "uuid": "9932c8eb-f035-4e20-b9f9-928f0855a7bd", 00:22:49.756 "is_configured": true, 00:22:49.756 "data_offset": 2048, 00:22:49.756 "data_size": 63488 00:22:49.756 }, 00:22:49.756 { 00:22:49.756 "name": "BaseBdev2", 00:22:49.756 "uuid": "6ec698d6-8617-480a-87e9-b883073c99ae", 00:22:49.756 "is_configured": true, 00:22:49.756 "data_offset": 2048, 00:22:49.756 "data_size": 63488 00:22:49.757 }, 00:22:49.757 { 
00:22:49.757 "name": "BaseBdev3", 00:22:49.757 "uuid": "2ebfc0b0-bb9e-492b-80ab-41e23deb5b1c", 00:22:49.757 "is_configured": true, 00:22:49.757 "data_offset": 2048, 00:22:49.757 "data_size": 63488 00:22:49.757 }, 00:22:49.757 { 00:22:49.757 "name": "BaseBdev4", 00:22:49.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:49.757 "is_configured": false, 00:22:49.757 "data_offset": 0, 00:22:49.757 "data_size": 0 00:22:49.757 } 00:22:49.757 ] 00:22:49.757 }' 00:22:49.757 11:34:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.757 11:34:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.694 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:50.694 [2024-07-13 11:34:25.396883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:50.694 [2024-07-13 11:34:25.397343] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:22:50.694 [2024-07-13 11:34:25.397496] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:50.694 BaseBdev4 00:22:50.694 [2024-07-13 11:34:25.397660] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:50.694 [2024-07-13 11:34:25.398128] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:22:50.694 [2024-07-13 11:34:25.398257] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:22:50.694 [2024-07-13 11:34:25.398490] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.694 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:50.694 11:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:50.694 11:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:50.694 11:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:50.694 11:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:50.694 11:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:50.694 11:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:50.952 11:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:51.211 [ 00:22:51.211 { 00:22:51.211 "name": "BaseBdev4", 00:22:51.211 "aliases": [ 00:22:51.211 "45aae820-b580-4c7f-bea7-9cbc8ab890e6" 00:22:51.211 ], 00:22:51.211 "product_name": "Malloc disk", 00:22:51.211 "block_size": 512, 00:22:51.211 "num_blocks": 65536, 00:22:51.211 "uuid": "45aae820-b580-4c7f-bea7-9cbc8ab890e6", 00:22:51.211 "assigned_rate_limits": { 00:22:51.211 "rw_ios_per_sec": 0, 00:22:51.211 "rw_mbytes_per_sec": 0, 00:22:51.211 "r_mbytes_per_sec": 0, 00:22:51.211 "w_mbytes_per_sec": 0 00:22:51.211 }, 00:22:51.211 "claimed": true, 00:22:51.211 "claim_type": "exclusive_write", 00:22:51.211 "zoned": false, 00:22:51.211 "supported_io_types": { 
00:22:51.211 "read": true, 00:22:51.211 "write": true, 00:22:51.211 "unmap": true, 00:22:51.211 "flush": true, 00:22:51.211 "reset": true, 00:22:51.211 "nvme_admin": false, 00:22:51.211 "nvme_io": false, 00:22:51.211 "nvme_io_md": false, 00:22:51.211 "write_zeroes": true, 00:22:51.211 "zcopy": true, 00:22:51.211 "get_zone_info": false, 00:22:51.211 "zone_management": false, 00:22:51.211 "zone_append": false, 00:22:51.211 "compare": false, 00:22:51.211 "compare_and_write": false, 00:22:51.211 "abort": true, 00:22:51.211 "seek_hole": false, 00:22:51.211 "seek_data": false, 00:22:51.211 "copy": true, 00:22:51.211 "nvme_iov_md": false 00:22:51.211 }, 00:22:51.211 "memory_domains": [ 00:22:51.211 { 00:22:51.211 "dma_device_id": "system", 00:22:51.211 "dma_device_type": 1 00:22:51.211 }, 00:22:51.211 { 00:22:51.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.211 "dma_device_type": 2 00:22:51.211 } 00:22:51.211 ], 00:22:51.211 "driver_specific": {} 00:22:51.211 } 00:22:51.211 ] 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.211 11:34:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.470 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.470 "name": "Existed_Raid", 00:22:51.470 "uuid": "1a2c3a1e-63b1-4030-ac2f-b3c3ce262ea0", 00:22:51.470 "strip_size_kb": 64, 00:22:51.470 "state": "online", 00:22:51.470 "raid_level": "raid0", 00:22:51.470 "superblock": true, 00:22:51.470 "num_base_bdevs": 4, 00:22:51.470 "num_base_bdevs_discovered": 4, 00:22:51.470 "num_base_bdevs_operational": 4, 00:22:51.470 "base_bdevs_list": [ 00:22:51.470 { 00:22:51.470 "name": "BaseBdev1", 00:22:51.470 "uuid": "9932c8eb-f035-4e20-b9f9-928f0855a7bd", 00:22:51.470 "is_configured": true, 00:22:51.470 "data_offset": 2048, 00:22:51.470 "data_size": 63488 00:22:51.470 }, 00:22:51.470 
{ 00:22:51.470 "name": "BaseBdev2", 00:22:51.470 "uuid": "6ec698d6-8617-480a-87e9-b883073c99ae", 00:22:51.470 "is_configured": true, 00:22:51.470 "data_offset": 2048, 00:22:51.470 "data_size": 63488 00:22:51.470 }, 00:22:51.470 { 00:22:51.470 "name": "BaseBdev3", 00:22:51.470 "uuid": "2ebfc0b0-bb9e-492b-80ab-41e23deb5b1c", 00:22:51.470 "is_configured": true, 00:22:51.470 "data_offset": 2048, 00:22:51.470 "data_size": 63488 00:22:51.470 }, 00:22:51.470 { 00:22:51.470 "name": "BaseBdev4", 00:22:51.470 "uuid": "45aae820-b580-4c7f-bea7-9cbc8ab890e6", 00:22:51.470 "is_configured": true, 00:22:51.470 "data_offset": 2048, 00:22:51.470 "data_size": 63488 00:22:51.470 } 00:22:51.470 ] 00:22:51.470 }' 00:22:51.470 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.470 11:34:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.036 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:52.036 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:52.036 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:52.036 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:52.036 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:52.036 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:52.036 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:52.036 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:52.294 [2024-07-13 11:34:26.961430] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:52.294 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:52.294 "name": "Existed_Raid", 00:22:52.294 "aliases": [ 00:22:52.294 "1a2c3a1e-63b1-4030-ac2f-b3c3ce262ea0" 00:22:52.294 ], 00:22:52.294 "product_name": "Raid Volume", 00:22:52.294 "block_size": 512, 00:22:52.294 "num_blocks": 253952, 00:22:52.294 "uuid": "1a2c3a1e-63b1-4030-ac2f-b3c3ce262ea0", 00:22:52.294 "assigned_rate_limits": { 00:22:52.294 "rw_ios_per_sec": 0, 00:22:52.294 "rw_mbytes_per_sec": 0, 00:22:52.294 "r_mbytes_per_sec": 0, 00:22:52.294 "w_mbytes_per_sec": 0 00:22:52.294 }, 00:22:52.294 "claimed": false, 00:22:52.294 "zoned": false, 00:22:52.294 "supported_io_types": { 00:22:52.294 "read": true, 00:22:52.294 "write": true, 00:22:52.294 "unmap": true, 00:22:52.294 "flush": true, 00:22:52.294 "reset": true, 00:22:52.294 "nvme_admin": false, 00:22:52.294 "nvme_io": false, 00:22:52.294 "nvme_io_md": false, 00:22:52.294 "write_zeroes": true, 00:22:52.294 "zcopy": false, 00:22:52.294 "get_zone_info": false, 00:22:52.294 "zone_management": false, 00:22:52.294 "zone_append": false, 00:22:52.294 "compare": false, 00:22:52.294 "compare_and_write": false, 00:22:52.294 "abort": false, 00:22:52.294 "seek_hole": false, 00:22:52.294 "seek_data": false, 00:22:52.294 "copy": false, 00:22:52.294 "nvme_iov_md": false 00:22:52.294 }, 00:22:52.294 "memory_domains": [ 00:22:52.294 { 00:22:52.294 "dma_device_id": "system", 00:22:52.294 "dma_device_type": 1 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.294 "dma_device_type": 2 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 "dma_device_id": "system", 00:22:52.294 "dma_device_type": 1 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.294 "dma_device_type": 2 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 "dma_device_id": "system", 00:22:52.294 "dma_device_type": 1 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.294 "dma_device_type": 2 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 "dma_device_id": "system", 00:22:52.294 "dma_device_type": 1 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.294 "dma_device_type": 2 00:22:52.294 } 00:22:52.294 ], 00:22:52.294 "driver_specific": { 00:22:52.294 "raid": { 00:22:52.294 "uuid": "1a2c3a1e-63b1-4030-ac2f-b3c3ce262ea0", 00:22:52.294 "strip_size_kb": 64, 00:22:52.294 "state": "online", 00:22:52.294 "raid_level": "raid0", 00:22:52.294 "superblock": true, 00:22:52.294 "num_base_bdevs": 4, 00:22:52.294 "num_base_bdevs_discovered": 4, 00:22:52.294 "num_base_bdevs_operational": 4, 00:22:52.294 "base_bdevs_list": [ 00:22:52.294 { 00:22:52.294 "name": "BaseBdev1", 00:22:52.294 "uuid": "9932c8eb-f035-4e20-b9f9-928f0855a7bd", 00:22:52.294 "is_configured": true, 00:22:52.294 "data_offset": 2048, 00:22:52.294 "data_size": 63488 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 "name": "BaseBdev2", 00:22:52.294 "uuid": "6ec698d6-8617-480a-87e9-b883073c99ae", 00:22:52.294 "is_configured": true, 00:22:52.294 "data_offset": 2048, 00:22:52.294 "data_size": 63488 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 "name": "BaseBdev3", 00:22:52.294 "uuid": "2ebfc0b0-bb9e-492b-80ab-41e23deb5b1c", 00:22:52.294 "is_configured": true, 00:22:52.294 "data_offset": 2048, 00:22:52.294 "data_size": 63488 00:22:52.294 }, 00:22:52.294 { 00:22:52.294 "name": "BaseBdev4", 00:22:52.294 "uuid": "45aae820-b580-4c7f-bea7-9cbc8ab890e6", 00:22:52.294 "is_configured": true, 00:22:52.294 "data_offset": 2048, 00:22:52.294 "data_size": 63488 00:22:52.294 } 00:22:52.294 ] 00:22:52.294 } 00:22:52.294 } 00:22:52.294 }' 00:22:52.294 11:34:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:52.294 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:52.294 BaseBdev2 00:22:52.294 BaseBdev3 00:22:52.294 BaseBdev4' 00:22:52.294 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:52.294 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:52.294 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:52.555 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:52.555 "name": "BaseBdev1", 00:22:52.555 "aliases": [ 00:22:52.555 "9932c8eb-f035-4e20-b9f9-928f0855a7bd" 00:22:52.555 ], 00:22:52.555 "product_name": "Malloc disk", 00:22:52.555 "block_size": 512, 00:22:52.555 "num_blocks": 65536, 00:22:52.555 "uuid": "9932c8eb-f035-4e20-b9f9-928f0855a7bd", 00:22:52.555 "assigned_rate_limits": { 00:22:52.555 "rw_ios_per_sec": 0, 00:22:52.555 "rw_mbytes_per_sec": 0, 00:22:52.555 "r_mbytes_per_sec": 0, 00:22:52.555 "w_mbytes_per_sec": 0 00:22:52.555 }, 00:22:52.555 
"claimed": true, 00:22:52.555 "claim_type": "exclusive_write", 00:22:52.555 "zoned": false, 00:22:52.555 "supported_io_types": { 00:22:52.555 "read": true, 00:22:52.555 "write": true, 00:22:52.555 "unmap": true, 00:22:52.555 "flush": true, 00:22:52.555 "reset": true, 00:22:52.555 "nvme_admin": false, 00:22:52.555 "nvme_io": false, 00:22:52.555 "nvme_io_md": false, 00:22:52.555 "write_zeroes": true, 00:22:52.555 "zcopy": true, 00:22:52.555 "get_zone_info": false, 00:22:52.555 "zone_management": false, 00:22:52.555 "zone_append": false, 00:22:52.555 "compare": false, 00:22:52.555 "compare_and_write": false, 00:22:52.555 "abort": true, 00:22:52.555 "seek_hole": false, 00:22:52.555 "seek_data": false, 00:22:52.555 "copy": true, 00:22:52.555 "nvme_iov_md": false 00:22:52.555 }, 00:22:52.555 "memory_domains": [ 00:22:52.555 { 00:22:52.555 "dma_device_id": "system", 00:22:52.555 "dma_device_type": 1 00:22:52.555 }, 00:22:52.555 { 00:22:52.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.555 "dma_device_type": 2 00:22:52.555 } 00:22:52.555 ], 00:22:52.555 "driver_specific": {} 00:22:52.555 }' 00:22:52.555 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:52.555 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:52.821 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:52.821 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:52.821 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:52.821 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:52.821 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:52.821 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:52.821 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:52.821 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.083 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.083 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:53.083 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.083 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:53.083 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:53.342 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:53.342 "name": "BaseBdev2", 00:22:53.342 "aliases": [ 00:22:53.342 "6ec698d6-8617-480a-87e9-b883073c99ae" 00:22:53.342 ], 00:22:53.342 "product_name": "Malloc disk", 00:22:53.342 "block_size": 512, 00:22:53.342 "num_blocks": 65536, 00:22:53.342 "uuid": "6ec698d6-8617-480a-87e9-b883073c99ae", 00:22:53.342 "assigned_rate_limits": { 00:22:53.342 "rw_ios_per_sec": 0, 00:22:53.342 "rw_mbytes_per_sec": 0, 00:22:53.342 "r_mbytes_per_sec": 0, 00:22:53.342 "w_mbytes_per_sec": 0 00:22:53.342 }, 00:22:53.342 "claimed": true, 00:22:53.342 "claim_type": "exclusive_write", 00:22:53.342 "zoned": false, 00:22:53.342 "supported_io_types": { 00:22:53.342 "read": 
true, 00:22:53.342 "write": true, 00:22:53.342 "unmap": true, 00:22:53.342 "flush": true, 00:22:53.342 "reset": true, 00:22:53.342 "nvme_admin": false, 00:22:53.342 "nvme_io": false, 00:22:53.342 "nvme_io_md": false, 00:22:53.342 "write_zeroes": true, 00:22:53.342 "zcopy": true, 00:22:53.342 "get_zone_info": false, 00:22:53.342 "zone_management": false, 00:22:53.342 "zone_append": false, 00:22:53.342 "compare": false, 00:22:53.342 "compare_and_write": false, 00:22:53.342 "abort": true, 00:22:53.342 "seek_hole": false, 00:22:53.342 "seek_data": false, 00:22:53.342 "copy": true, 00:22:53.342 "nvme_iov_md": false 00:22:53.342 }, 00:22:53.342 "memory_domains": [ 00:22:53.342 { 00:22:53.342 "dma_device_id": "system", 00:22:53.342 "dma_device_type": 1 00:22:53.342 }, 00:22:53.342 { 00:22:53.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.342 "dma_device_type": 2 00:22:53.342 } 00:22:53.342 ], 00:22:53.342 "driver_specific": {} 00:22:53.342 }' 00:22:53.342 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.342 11:34:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.342 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:53.342 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.342 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:53.601 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:53.860 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:53.860 "name": "BaseBdev3", 00:22:53.860 "aliases": [ 00:22:53.860 "2ebfc0b0-bb9e-492b-80ab-41e23deb5b1c" 00:22:53.860 ], 00:22:53.860 "product_name": "Malloc disk", 00:22:53.860 "block_size": 512, 00:22:53.860 "num_blocks": 65536, 00:22:53.860 "uuid": "2ebfc0b0-bb9e-492b-80ab-41e23deb5b1c", 00:22:53.860 "assigned_rate_limits": { 00:22:53.860 "rw_ios_per_sec": 0, 00:22:53.860 "rw_mbytes_per_sec": 0, 00:22:53.860 "r_mbytes_per_sec": 0, 00:22:53.860 "w_mbytes_per_sec": 0 00:22:53.860 }, 00:22:53.860 "claimed": true, 00:22:53.860 "claim_type": "exclusive_write", 00:22:53.860 "zoned": false, 00:22:53.860 "supported_io_types": { 00:22:53.860 "read": true, 00:22:53.860 "write": true, 00:22:53.860 "unmap": true, 00:22:53.860 "flush": true, 00:22:53.860 "reset": true, 00:22:53.860 "nvme_admin": false, 
00:22:53.860 "nvme_io": false, 00:22:53.860 "nvme_io_md": false, 00:22:53.860 "write_zeroes": true, 00:22:53.860 "zcopy": true, 00:22:53.860 "get_zone_info": false, 00:22:53.860 "zone_management": false, 00:22:53.860 "zone_append": false, 00:22:53.860 "compare": false, 00:22:53.860 "compare_and_write": false, 00:22:53.860 "abort": true, 00:22:53.860 "seek_hole": false, 00:22:53.860 "seek_data": false, 00:22:53.860 "copy": true, 00:22:53.860 "nvme_iov_md": false 00:22:53.860 }, 00:22:53.860 "memory_domains": [ 00:22:53.860 { 00:22:53.860 "dma_device_id": "system", 00:22:53.860 "dma_device_type": 1 00:22:53.860 }, 00:22:53.860 { 00:22:53.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.860 "dma_device_type": 2 00:22:53.860 } 00:22:53.860 ], 00:22:53.860 "driver_specific": {} 00:22:53.860 }' 00:22:53.860 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.119 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.119 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.119 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.119 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.119 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.119 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.377 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.377 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.377 11:34:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.377 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.377 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.377 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.377 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:54.377 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.634 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.634 "name": "BaseBdev4", 00:22:54.634 "aliases": [ 00:22:54.634 "45aae820-b580-4c7f-bea7-9cbc8ab890e6" 00:22:54.634 ], 00:22:54.634 "product_name": "Malloc disk", 00:22:54.634 "block_size": 512, 00:22:54.634 "num_blocks": 65536, 00:22:54.634 "uuid": "45aae820-b580-4c7f-bea7-9cbc8ab890e6", 00:22:54.634 "assigned_rate_limits": { 00:22:54.634 "rw_ios_per_sec": 0, 00:22:54.634 "rw_mbytes_per_sec": 0, 00:22:54.634 "r_mbytes_per_sec": 0, 00:22:54.634 "w_mbytes_per_sec": 0 00:22:54.634 }, 00:22:54.634 "claimed": true, 00:22:54.634 "claim_type": "exclusive_write", 00:22:54.634 "zoned": false, 00:22:54.634 "supported_io_types": { 00:22:54.634 "read": true, 00:22:54.634 "write": true, 00:22:54.634 "unmap": true, 00:22:54.634 "flush": true, 00:22:54.634 "reset": true, 00:22:54.634 "nvme_admin": false, 00:22:54.634 "nvme_io": false, 00:22:54.634 "nvme_io_md": false, 00:22:54.634 "write_zeroes": true, 00:22:54.634 "zcopy": true, 00:22:54.634 
"get_zone_info": false, 00:22:54.634 "zone_management": false, 00:22:54.634 "zone_append": false, 00:22:54.634 "compare": false, 00:22:54.634 "compare_and_write": false, 00:22:54.634 "abort": true, 00:22:54.634 "seek_hole": false, 00:22:54.634 "seek_data": false, 00:22:54.634 "copy": true, 00:22:54.634 "nvme_iov_md": false 00:22:54.634 }, 00:22:54.634 "memory_domains": [ 00:22:54.634 { 00:22:54.634 "dma_device_id": "system", 00:22:54.634 "dma_device_type": 1 00:22:54.634 }, 00:22:54.634 { 00:22:54.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.634 "dma_device_type": 2 00:22:54.634 } 00:22:54.634 ], 00:22:54.634 "driver_specific": {} 00:22:54.634 }' 00:22:54.634 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.892 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.892 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.892 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.892 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.892 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.892 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.892 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.149 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.149 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.149 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.149 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.149 11:34:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:55.407 [2024-07-13 11:34:29.957865] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:55.407 [2024-07-13 11:34:29.958019] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.407 [2024-07-13 11:34:29.958204] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:55.407 11:34:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.407 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:55.666 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:55.666 "name": "Existed_Raid", 00:22:55.666 "uuid": "1a2c3a1e-63b1-4030-ac2f-b3c3ce262ea0", 00:22:55.666 "strip_size_kb": 64, 00:22:55.666 "state": "offline", 00:22:55.666 "raid_level": "raid0", 00:22:55.666 "superblock": true, 00:22:55.666 "num_base_bdevs": 4, 00:22:55.666 "num_base_bdevs_discovered": 3, 00:22:55.666 "num_base_bdevs_operational": 3, 00:22:55.666 "base_bdevs_list": [ 00:22:55.666 { 00:22:55.666 "name": null, 00:22:55.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.666 "is_configured": false, 00:22:55.666 "data_offset": 2048, 00:22:55.666 "data_size": 63488 00:22:55.666 }, 00:22:55.666 { 00:22:55.666 "name": "BaseBdev2", 00:22:55.666 "uuid": "6ec698d6-8617-480a-87e9-b883073c99ae", 00:22:55.666 "is_configured": true, 00:22:55.666 "data_offset": 2048, 00:22:55.666 "data_size": 63488 00:22:55.666 }, 00:22:55.666 { 00:22:55.666 "name": "BaseBdev3", 00:22:55.666 "uuid": "2ebfc0b0-bb9e-492b-80ab-41e23deb5b1c", 00:22:55.666 "is_configured": true, 00:22:55.666 "data_offset": 2048, 00:22:55.666 "data_size": 63488 00:22:55.666 }, 00:22:55.666 { 00:22:55.666 "name": "BaseBdev4", 00:22:55.666 "uuid": "45aae820-b580-4c7f-bea7-9cbc8ab890e6", 00:22:55.666 "is_configured": true, 00:22:55.666 "data_offset": 2048, 00:22:55.666 "data_size": 63488 00:22:55.666 } 00:22:55.666 ] 00:22:55.666 }' 00:22:55.666 11:34:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:55.666 11:34:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.600 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:56.600 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:56.600 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.600 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:56.600 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:56.600 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:56.600 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:22:56.870 [2024-07-13 11:34:31.445179] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:56.870 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:56.870 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:56.870 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.870 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:57.129 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:57.129 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:57.129 11:34:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:57.387 [2024-07-13 11:34:32.046961] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:57.387 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:57.387 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:57.387 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.387 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:57.645 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:57.645 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:57.645 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:57.903 [2024-07-13 11:34:32.542518] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:57.903 [2024-07-13 11:34:32.542899] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:22:57.903 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:57.903 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:57.903 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.903 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:58.161 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:58.161 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:58.161 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:58.161 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:58.161 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:58.161 11:34:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:58.419 BaseBdev2 00:22:58.419 11:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:58.419 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:58.419 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:58.419 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:58.419 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:58.419 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:58.419 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:58.677 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:58.935 [ 00:22:58.935 { 00:22:58.935 "name": "BaseBdev2", 00:22:58.935 "aliases": [ 00:22:58.935 "fc685573-1656-4fdf-a157-c6856f9461d8" 00:22:58.935 ], 00:22:58.935 "product_name": "Malloc disk", 00:22:58.935 "block_size": 512, 00:22:58.935 "num_blocks": 65536, 00:22:58.935 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:22:58.935 "assigned_rate_limits": { 00:22:58.935 "rw_ios_per_sec": 0, 00:22:58.935 "rw_mbytes_per_sec": 0, 00:22:58.935 "r_mbytes_per_sec": 0, 00:22:58.935 "w_mbytes_per_sec": 0 00:22:58.935 }, 00:22:58.935 "claimed": false, 00:22:58.935 "zoned": false, 00:22:58.935 "supported_io_types": { 00:22:58.935 "read": true, 00:22:58.935 "write": true, 00:22:58.935 "unmap": true, 00:22:58.935 "flush": true, 00:22:58.935 "reset": true, 00:22:58.935 "nvme_admin": false, 00:22:58.935 "nvme_io": false, 00:22:58.935 "nvme_io_md": false, 00:22:58.935 "write_zeroes": true, 00:22:58.935 "zcopy": true, 00:22:58.935 "get_zone_info": false, 00:22:58.935 "zone_management": false, 00:22:58.935 "zone_append": false, 00:22:58.935 "compare": false, 00:22:58.935 "compare_and_write": false, 00:22:58.935 "abort": true, 00:22:58.935 "seek_hole": false, 00:22:58.935 "seek_data": false, 00:22:58.935 "copy": true, 00:22:58.935 "nvme_iov_md": false 00:22:58.935 }, 00:22:58.935 "memory_domains": [ 00:22:58.935 { 00:22:58.935 "dma_device_id": "system", 00:22:58.935 "dma_device_type": 1 00:22:58.935 }, 00:22:58.935 { 00:22:58.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.935 "dma_device_type": 2 00:22:58.935 } 00:22:58.935 ], 00:22:58.935 "driver_specific": {} 00:22:58.935 } 00:22:58.935 ] 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:58.935 BaseBdev3 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:58.935 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:59.193 11:34:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:59.451 [ 00:22:59.451 { 00:22:59.451 "name": "BaseBdev3", 00:22:59.451 "aliases": [ 00:22:59.451 "8deadec4-1e07-4e54-92ab-02adc660cecc" 00:22:59.451 ], 00:22:59.451 "product_name": "Malloc disk", 00:22:59.451 "block_size": 512, 00:22:59.451 "num_blocks": 65536, 00:22:59.451 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:22:59.451 "assigned_rate_limits": { 00:22:59.451 "rw_ios_per_sec": 0, 00:22:59.451 "rw_mbytes_per_sec": 0, 00:22:59.451 "r_mbytes_per_sec": 0, 00:22:59.451 "w_mbytes_per_sec": 0 00:22:59.451 }, 00:22:59.451 "claimed": false, 00:22:59.451 "zoned": false, 00:22:59.451 "supported_io_types": { 00:22:59.451 "read": true, 00:22:59.451 "write": true, 00:22:59.451 "unmap": true, 00:22:59.451 "flush": true, 00:22:59.451 "reset": true, 00:22:59.451 "nvme_admin": false, 00:22:59.451 "nvme_io": false, 00:22:59.451 "nvme_io_md": false, 00:22:59.451 "write_zeroes": true, 00:22:59.451 "zcopy": true, 00:22:59.451 "get_zone_info": false, 00:22:59.451 "zone_management": false, 00:22:59.451 "zone_append": false, 00:22:59.451 "compare": false, 00:22:59.451 "compare_and_write": false, 00:22:59.451 "abort": true, 00:22:59.451 "seek_hole": false, 00:22:59.451 "seek_data": false, 00:22:59.451 "copy": true, 00:22:59.451 "nvme_iov_md": false 00:22:59.451 }, 00:22:59.451 "memory_domains": [ 00:22:59.451 { 00:22:59.451 "dma_device_id": "system", 00:22:59.451 "dma_device_type": 1 00:22:59.451 }, 00:22:59.451 { 00:22:59.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.451 "dma_device_type": 2 00:22:59.451 } 00:22:59.451 ], 00:22:59.451 "driver_specific": {} 00:22:59.451 } 00:22:59.451 ] 00:22:59.451 11:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:59.452 11:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:59.452 11:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:59.452 11:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:59.710 BaseBdev4 00:22:59.710 11:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:59.710 11:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:59.710 11:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:59.710 11:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:59.710 11:34:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:59.710 11:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:59.710 11:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:59.969 11:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:00.228 [ 00:23:00.228 { 00:23:00.228 "name": "BaseBdev4", 00:23:00.228 "aliases": [ 00:23:00.228 "241b89b1-811f-4f4b-8b65-38d0f8b71d5b" 00:23:00.228 ], 00:23:00.228 "product_name": "Malloc disk", 00:23:00.228 "block_size": 512, 00:23:00.228 "num_blocks": 65536, 00:23:00.228 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:00.228 "assigned_rate_limits": { 00:23:00.228 "rw_ios_per_sec": 0, 00:23:00.228 "rw_mbytes_per_sec": 0, 00:23:00.228 "r_mbytes_per_sec": 0, 00:23:00.228 "w_mbytes_per_sec": 0 00:23:00.228 }, 00:23:00.228 "claimed": false, 00:23:00.228 "zoned": false, 00:23:00.228 "supported_io_types": { 00:23:00.228 "read": true, 00:23:00.228 "write": true, 00:23:00.228 "unmap": true, 00:23:00.228 "flush": true, 00:23:00.228 "reset": true, 00:23:00.228 "nvme_admin": false, 00:23:00.228 "nvme_io": false, 00:23:00.228 "nvme_io_md": false, 00:23:00.228 "write_zeroes": true, 00:23:00.228 "zcopy": true, 00:23:00.228 "get_zone_info": false, 00:23:00.228 "zone_management": false, 00:23:00.228 "zone_append": false, 00:23:00.228 "compare": false, 00:23:00.228 "compare_and_write": false, 00:23:00.228 "abort": true, 00:23:00.228 "seek_hole": false, 00:23:00.228 "seek_data": false, 00:23:00.228 "copy": true, 00:23:00.228 "nvme_iov_md": false 00:23:00.228 }, 00:23:00.228 "memory_domains": [ 00:23:00.228 { 00:23:00.228 "dma_device_id": "system", 00:23:00.228 "dma_device_type": 1 00:23:00.228 }, 00:23:00.228 { 00:23:00.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.228 "dma_device_type": 2 00:23:00.228 } 00:23:00.228 ], 00:23:00.228 "driver_specific": {} 00:23:00.228 } 00:23:00.228 ] 00:23:00.228 11:34:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:00.228 11:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:00.228 11:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:00.228 11:34:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:00.487 [2024-07-13 11:34:35.019853] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:00.487 [2024-07-13 11:34:35.021089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:00.487 [2024-07-13 11:34:35.021264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:00.487 [2024-07-13 11:34:35.023005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:00.487 [2024-07-13 11:34:35.023207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.487 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.746 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:00.746 "name": "Existed_Raid", 00:23:00.746 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:00.746 "strip_size_kb": 64, 00:23:00.746 "state": "configuring", 00:23:00.746 "raid_level": "raid0", 00:23:00.746 "superblock": true, 00:23:00.746 "num_base_bdevs": 4, 00:23:00.746 "num_base_bdevs_discovered": 3, 00:23:00.746 "num_base_bdevs_operational": 4, 00:23:00.746 "base_bdevs_list": [ 00:23:00.746 { 00:23:00.746 "name": "BaseBdev1", 00:23:00.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.746 "is_configured": false, 00:23:00.746 "data_offset": 0, 00:23:00.746 "data_size": 0 00:23:00.746 }, 00:23:00.746 { 00:23:00.746 "name": "BaseBdev2", 00:23:00.746 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:00.746 "is_configured": true, 00:23:00.746 "data_offset": 2048, 00:23:00.746 "data_size": 63488 00:23:00.746 }, 00:23:00.746 { 00:23:00.746 "name": "BaseBdev3", 00:23:00.746 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:00.746 "is_configured": true, 00:23:00.746 "data_offset": 2048, 00:23:00.746 "data_size": 63488 00:23:00.746 }, 00:23:00.746 { 00:23:00.746 "name": "BaseBdev4", 00:23:00.746 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:00.746 "is_configured": true, 00:23:00.746 "data_offset": 2048, 00:23:00.746 "data_size": 63488 00:23:00.746 } 00:23:00.746 ] 00:23:00.746 }' 00:23:00.746 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:00.746 11:34:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:01.314 11:34:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:01.572 [2024-07-13 11:34:36.180006] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:01.572 11:34:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.572 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.832 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.832 "name": "Existed_Raid", 00:23:01.832 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:01.832 "strip_size_kb": 64, 00:23:01.832 "state": "configuring", 00:23:01.832 "raid_level": "raid0", 00:23:01.832 "superblock": true, 00:23:01.832 "num_base_bdevs": 4, 00:23:01.832 "num_base_bdevs_discovered": 2, 00:23:01.832 "num_base_bdevs_operational": 4, 00:23:01.832 "base_bdevs_list": [ 00:23:01.832 { 00:23:01.832 "name": "BaseBdev1", 00:23:01.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.832 "is_configured": false, 00:23:01.832 "data_offset": 0, 00:23:01.832 "data_size": 0 00:23:01.832 }, 00:23:01.832 { 00:23:01.832 "name": null, 00:23:01.832 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:01.832 "is_configured": false, 00:23:01.832 "data_offset": 2048, 00:23:01.832 "data_size": 63488 00:23:01.832 }, 00:23:01.832 { 00:23:01.832 "name": "BaseBdev3", 00:23:01.832 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:01.832 "is_configured": true, 00:23:01.832 "data_offset": 2048, 00:23:01.832 "data_size": 63488 00:23:01.832 }, 00:23:01.832 { 00:23:01.832 "name": "BaseBdev4", 00:23:01.832 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:01.832 "is_configured": true, 00:23:01.832 "data_offset": 2048, 00:23:01.832 "data_size": 63488 00:23:01.832 } 00:23:01.832 ] 00:23:01.832 }' 00:23:01.832 11:34:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.832 11:34:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:02.399 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.399 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:02.658 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:02.658 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:02.916 [2024-07-13 11:34:37.426077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:02.916 BaseBdev1 00:23:02.916 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:02.916 11:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:02.916 11:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:02.916 11:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:02.916 11:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:02.916 11:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:02.916 11:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:03.175 [ 00:23:03.175 { 00:23:03.175 "name": "BaseBdev1", 00:23:03.175 "aliases": [ 00:23:03.175 "a19a7439-8471-4f3c-823e-6914fa440978" 00:23:03.175 ], 00:23:03.175 "product_name": "Malloc disk", 00:23:03.175 "block_size": 512, 00:23:03.175 "num_blocks": 65536, 00:23:03.175 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:03.175 "assigned_rate_limits": { 00:23:03.175 "rw_ios_per_sec": 0, 00:23:03.175 "rw_mbytes_per_sec": 0, 00:23:03.175 "r_mbytes_per_sec": 0, 00:23:03.175 "w_mbytes_per_sec": 0 00:23:03.175 }, 00:23:03.175 "claimed": true, 00:23:03.175 "claim_type": "exclusive_write", 00:23:03.175 "zoned": false, 00:23:03.175 "supported_io_types": { 00:23:03.175 "read": true, 00:23:03.175 "write": true, 00:23:03.175 "unmap": true, 00:23:03.175 "flush": true, 00:23:03.175 "reset": true, 00:23:03.175 "nvme_admin": false, 00:23:03.175 "nvme_io": false, 00:23:03.175 "nvme_io_md": false, 00:23:03.175 "write_zeroes": true, 00:23:03.175 "zcopy": true, 00:23:03.175 "get_zone_info": false, 00:23:03.175 "zone_management": false, 00:23:03.175 "zone_append": false, 00:23:03.175 "compare": false, 00:23:03.175 "compare_and_write": false, 00:23:03.175 "abort": true, 00:23:03.175 "seek_hole": false, 00:23:03.175 "seek_data": false, 00:23:03.175 "copy": true, 00:23:03.175 "nvme_iov_md": false 00:23:03.175 }, 00:23:03.175 "memory_domains": [ 00:23:03.175 { 00:23:03.175 "dma_device_id": "system", 00:23:03.175 "dma_device_type": 1 00:23:03.175 }, 00:23:03.175 { 00:23:03.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.175 "dma_device_type": 2 00:23:03.175 } 00:23:03.175 ], 00:23:03.175 "driver_specific": {} 00:23:03.175 } 00:23:03.175 ] 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.175 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.176 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.176 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.176 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.176 11:34:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.434 11:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.434 "name": "Existed_Raid", 00:23:03.434 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:03.434 "strip_size_kb": 64, 00:23:03.434 "state": "configuring", 00:23:03.434 "raid_level": "raid0", 00:23:03.434 "superblock": true, 00:23:03.434 "num_base_bdevs": 4, 00:23:03.434 "num_base_bdevs_discovered": 3, 00:23:03.434 "num_base_bdevs_operational": 4, 00:23:03.434 "base_bdevs_list": [ 00:23:03.434 { 00:23:03.434 "name": "BaseBdev1", 00:23:03.434 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:03.434 "is_configured": true, 00:23:03.434 "data_offset": 2048, 00:23:03.434 "data_size": 63488 00:23:03.434 }, 00:23:03.434 { 00:23:03.434 "name": null, 00:23:03.434 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:03.434 "is_configured": false, 00:23:03.434 "data_offset": 2048, 00:23:03.434 "data_size": 63488 00:23:03.434 }, 00:23:03.434 { 00:23:03.434 "name": "BaseBdev3", 00:23:03.434 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:03.434 "is_configured": true, 00:23:03.434 "data_offset": 2048, 00:23:03.434 "data_size": 63488 00:23:03.434 }, 00:23:03.434 { 00:23:03.434 "name": "BaseBdev4", 00:23:03.434 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:03.434 "is_configured": true, 00:23:03.434 "data_offset": 2048, 00:23:03.434 "data_size": 63488 00:23:03.434 } 00:23:03.434 ] 00:23:03.434 }' 00:23:03.434 11:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.434 11:34:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:04.370 11:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.370 11:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:04.370 11:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:04.370 11:34:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:04.629 [2024-07-13 11:34:39.174418] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
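The hot-remove/re-add cycle traced above boils down to four RPC calls against the test target. A minimal sketch outside the harness (assuming a running SPDK target listening on /var/tmp/spdk-raid.sock, an existing raid0 bdev named Existed_Raid with superblock enabled, and a still-registered malloc bdev BaseBdev3; the trailing `.state` / `.is_configured` jq filters are illustrative additions, not part of the original script):

    # drop one base device; the raid bdev should fall back to the "configuring" state
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect "configuring"
    # hand the still-existing malloc bdev back to the array and confirm its slot is configured again
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect true
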
00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.629 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:04.888 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:04.888 "name": "Existed_Raid", 00:23:04.888 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:04.888 "strip_size_kb": 64, 00:23:04.888 "state": "configuring", 00:23:04.888 "raid_level": "raid0", 00:23:04.888 "superblock": true, 00:23:04.888 "num_base_bdevs": 4, 00:23:04.888 "num_base_bdevs_discovered": 2, 00:23:04.888 "num_base_bdevs_operational": 4, 00:23:04.888 "base_bdevs_list": [ 00:23:04.888 { 00:23:04.888 "name": "BaseBdev1", 00:23:04.888 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:04.888 "is_configured": true, 00:23:04.888 "data_offset": 2048, 00:23:04.888 "data_size": 63488 00:23:04.888 }, 00:23:04.888 { 00:23:04.888 "name": null, 00:23:04.888 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:04.888 "is_configured": false, 00:23:04.888 "data_offset": 2048, 00:23:04.888 "data_size": 63488 00:23:04.888 }, 00:23:04.888 { 00:23:04.888 "name": null, 00:23:04.888 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:04.888 "is_configured": false, 00:23:04.888 "data_offset": 2048, 00:23:04.888 "data_size": 63488 00:23:04.888 }, 00:23:04.888 { 00:23:04.888 "name": "BaseBdev4", 00:23:04.888 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:04.888 "is_configured": true, 00:23:04.888 "data_offset": 2048, 00:23:04.888 "data_size": 63488 00:23:04.888 } 00:23:04.888 ] 00:23:04.888 }' 00:23:04.888 11:34:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:04.888 11:34:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:05.455 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:05.455 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.714 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:05.714 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:05.973 [2024-07-13 11:34:40.502713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:05.973 "name": "Existed_Raid", 00:23:05.973 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:05.973 "strip_size_kb": 64, 00:23:05.973 "state": "configuring", 00:23:05.973 "raid_level": "raid0", 00:23:05.973 "superblock": true, 00:23:05.973 "num_base_bdevs": 4, 00:23:05.973 "num_base_bdevs_discovered": 3, 00:23:05.973 "num_base_bdevs_operational": 4, 00:23:05.973 "base_bdevs_list": [ 00:23:05.973 { 00:23:05.973 "name": "BaseBdev1", 00:23:05.973 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:05.973 "is_configured": true, 00:23:05.973 "data_offset": 2048, 00:23:05.973 "data_size": 63488 00:23:05.973 }, 00:23:05.973 { 00:23:05.973 "name": null, 00:23:05.973 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:05.973 "is_configured": false, 00:23:05.973 "data_offset": 2048, 00:23:05.973 "data_size": 63488 00:23:05.973 }, 00:23:05.973 { 00:23:05.973 "name": "BaseBdev3", 00:23:05.973 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:05.973 "is_configured": true, 00:23:05.973 "data_offset": 2048, 00:23:05.973 "data_size": 63488 00:23:05.973 }, 00:23:05.973 { 00:23:05.973 "name": "BaseBdev4", 00:23:05.973 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:05.973 "is_configured": true, 00:23:05.973 "data_offset": 2048, 00:23:05.973 "data_size": 63488 00:23:05.973 } 00:23:05.973 ] 00:23:05.973 }' 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:05.973 11:34:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:06.909 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.909 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:06.909 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:06.909 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:07.167 [2024-07-13 11:34:41.851032] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.426 11:34:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.685 11:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.685 "name": "Existed_Raid", 00:23:07.685 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:07.685 "strip_size_kb": 64, 00:23:07.685 "state": "configuring", 00:23:07.685 "raid_level": "raid0", 00:23:07.685 "superblock": true, 00:23:07.685 "num_base_bdevs": 4, 00:23:07.685 "num_base_bdevs_discovered": 2, 00:23:07.685 "num_base_bdevs_operational": 4, 00:23:07.685 "base_bdevs_list": [ 00:23:07.685 { 00:23:07.685 "name": null, 00:23:07.685 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:07.685 "is_configured": false, 00:23:07.685 "data_offset": 2048, 00:23:07.685 "data_size": 63488 00:23:07.685 }, 00:23:07.685 { 00:23:07.685 "name": null, 00:23:07.685 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:07.685 "is_configured": false, 00:23:07.685 "data_offset": 2048, 00:23:07.685 "data_size": 63488 00:23:07.685 }, 00:23:07.685 { 00:23:07.685 "name": "BaseBdev3", 00:23:07.685 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:07.685 "is_configured": true, 00:23:07.685 "data_offset": 2048, 00:23:07.685 "data_size": 63488 00:23:07.685 }, 00:23:07.685 { 00:23:07.685 "name": "BaseBdev4", 00:23:07.685 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:07.685 "is_configured": true, 00:23:07.685 "data_offset": 2048, 00:23:07.685 "data_size": 63488 00:23:07.685 } 00:23:07.685 ] 00:23:07.685 }' 00:23:07.685 
11:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.685 11:34:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:08.256 11:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.256 11:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:08.256 11:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:08.256 11:34:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:08.514 [2024-07-13 11:34:43.146485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.514 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.771 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.771 "name": "Existed_Raid", 00:23:08.771 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:08.771 "strip_size_kb": 64, 00:23:08.771 "state": "configuring", 00:23:08.771 "raid_level": "raid0", 00:23:08.771 "superblock": true, 00:23:08.771 "num_base_bdevs": 4, 00:23:08.771 "num_base_bdevs_discovered": 3, 00:23:08.771 "num_base_bdevs_operational": 4, 00:23:08.771 "base_bdevs_list": [ 00:23:08.771 { 00:23:08.771 "name": null, 00:23:08.771 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:08.771 "is_configured": false, 00:23:08.771 "data_offset": 2048, 00:23:08.771 "data_size": 63488 00:23:08.771 }, 00:23:08.771 { 00:23:08.771 "name": "BaseBdev2", 00:23:08.771 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:08.771 "is_configured": true, 00:23:08.771 "data_offset": 2048, 00:23:08.771 "data_size": 63488 00:23:08.771 }, 00:23:08.771 { 00:23:08.771 "name": "BaseBdev3", 00:23:08.771 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:08.771 
"is_configured": true, 00:23:08.771 "data_offset": 2048, 00:23:08.771 "data_size": 63488 00:23:08.771 }, 00:23:08.771 { 00:23:08.771 "name": "BaseBdev4", 00:23:08.771 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:08.771 "is_configured": true, 00:23:08.771 "data_offset": 2048, 00:23:08.771 "data_size": 63488 00:23:08.771 } 00:23:08.771 ] 00:23:08.771 }' 00:23:08.771 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.771 11:34:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:09.337 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.337 11:34:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:09.596 11:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:09.596 11:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.596 11:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:09.854 11:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a19a7439-8471-4f3c-823e-6914fa440978 00:23:10.113 [2024-07-13 11:34:44.662835] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:10.113 [2024-07-13 11:34:44.663215] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:23:10.113 [2024-07-13 11:34:44.663346] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:10.113 NewBaseBdev 00:23:10.113 [2024-07-13 11:34:44.663473] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:10.113 [2024-07-13 11:34:44.663843] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:23:10.113 [2024-07-13 11:34:44.663857] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:23:10.113 [2024-07-13 11:34:44.663974] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.113 11:34:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:10.113 11:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:23:10.113 11:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:10.113 11:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:23:10.113 11:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:10.113 11:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:10.113 11:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:10.371 11:34:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b NewBaseBdev -t 2000 00:23:10.371 [ 00:23:10.371 { 00:23:10.371 "name": "NewBaseBdev", 00:23:10.371 "aliases": [ 00:23:10.371 "a19a7439-8471-4f3c-823e-6914fa440978" 00:23:10.371 ], 00:23:10.371 "product_name": "Malloc disk", 00:23:10.371 "block_size": 512, 00:23:10.371 "num_blocks": 65536, 00:23:10.371 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:10.371 "assigned_rate_limits": { 00:23:10.371 "rw_ios_per_sec": 0, 00:23:10.371 "rw_mbytes_per_sec": 0, 00:23:10.371 "r_mbytes_per_sec": 0, 00:23:10.371 "w_mbytes_per_sec": 0 00:23:10.371 }, 00:23:10.371 "claimed": true, 00:23:10.371 "claim_type": "exclusive_write", 00:23:10.371 "zoned": false, 00:23:10.371 "supported_io_types": { 00:23:10.371 "read": true, 00:23:10.371 "write": true, 00:23:10.371 "unmap": true, 00:23:10.371 "flush": true, 00:23:10.371 "reset": true, 00:23:10.371 "nvme_admin": false, 00:23:10.371 "nvme_io": false, 00:23:10.371 "nvme_io_md": false, 00:23:10.371 "write_zeroes": true, 00:23:10.371 "zcopy": true, 00:23:10.371 "get_zone_info": false, 00:23:10.371 "zone_management": false, 00:23:10.371 "zone_append": false, 00:23:10.371 "compare": false, 00:23:10.371 "compare_and_write": false, 00:23:10.371 "abort": true, 00:23:10.371 "seek_hole": false, 00:23:10.371 "seek_data": false, 00:23:10.371 "copy": true, 00:23:10.371 "nvme_iov_md": false 00:23:10.371 }, 00:23:10.371 "memory_domains": [ 00:23:10.371 { 00:23:10.371 "dma_device_id": "system", 00:23:10.371 "dma_device_type": 1 00:23:10.371 }, 00:23:10.371 { 00:23:10.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.371 "dma_device_type": 2 00:23:10.371 } 00:23:10.371 ], 00:23:10.371 "driver_specific": {} 00:23:10.371 } 00:23:10.371 ] 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.371 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:10.630 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.630 "name": "Existed_Raid", 00:23:10.630 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:10.630 "strip_size_kb": 64, 00:23:10.630 "state": 
"online", 00:23:10.630 "raid_level": "raid0", 00:23:10.630 "superblock": true, 00:23:10.630 "num_base_bdevs": 4, 00:23:10.630 "num_base_bdevs_discovered": 4, 00:23:10.630 "num_base_bdevs_operational": 4, 00:23:10.630 "base_bdevs_list": [ 00:23:10.630 { 00:23:10.630 "name": "NewBaseBdev", 00:23:10.630 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:10.630 "is_configured": true, 00:23:10.630 "data_offset": 2048, 00:23:10.630 "data_size": 63488 00:23:10.630 }, 00:23:10.630 { 00:23:10.630 "name": "BaseBdev2", 00:23:10.630 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:10.630 "is_configured": true, 00:23:10.630 "data_offset": 2048, 00:23:10.630 "data_size": 63488 00:23:10.630 }, 00:23:10.630 { 00:23:10.630 "name": "BaseBdev3", 00:23:10.630 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:10.630 "is_configured": true, 00:23:10.630 "data_offset": 2048, 00:23:10.630 "data_size": 63488 00:23:10.630 }, 00:23:10.630 { 00:23:10.630 "name": "BaseBdev4", 00:23:10.630 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:10.630 "is_configured": true, 00:23:10.630 "data_offset": 2048, 00:23:10.630 "data_size": 63488 00:23:10.630 } 00:23:10.630 ] 00:23:10.630 }' 00:23:10.630 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.630 11:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:11.565 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:11.565 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:11.565 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:11.565 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:11.565 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:11.565 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:11.565 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:11.565 11:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:11.565 [2024-07-13 11:34:46.239545] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.565 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:11.565 "name": "Existed_Raid", 00:23:11.565 "aliases": [ 00:23:11.565 "0e4e260a-e211-4246-a2c0-599a32a12924" 00:23:11.565 ], 00:23:11.565 "product_name": "Raid Volume", 00:23:11.565 "block_size": 512, 00:23:11.565 "num_blocks": 253952, 00:23:11.565 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:11.566 "assigned_rate_limits": { 00:23:11.566 "rw_ios_per_sec": 0, 00:23:11.566 "rw_mbytes_per_sec": 0, 00:23:11.566 "r_mbytes_per_sec": 0, 00:23:11.566 "w_mbytes_per_sec": 0 00:23:11.566 }, 00:23:11.566 "claimed": false, 00:23:11.566 "zoned": false, 00:23:11.566 "supported_io_types": { 00:23:11.566 "read": true, 00:23:11.566 "write": true, 00:23:11.566 "unmap": true, 00:23:11.566 "flush": true, 00:23:11.566 "reset": true, 00:23:11.566 "nvme_admin": false, 00:23:11.566 "nvme_io": false, 00:23:11.566 "nvme_io_md": false, 00:23:11.566 "write_zeroes": true, 00:23:11.566 "zcopy": false, 00:23:11.566 "get_zone_info": false, 00:23:11.566 
"zone_management": false, 00:23:11.566 "zone_append": false, 00:23:11.566 "compare": false, 00:23:11.566 "compare_and_write": false, 00:23:11.566 "abort": false, 00:23:11.566 "seek_hole": false, 00:23:11.566 "seek_data": false, 00:23:11.566 "copy": false, 00:23:11.566 "nvme_iov_md": false 00:23:11.566 }, 00:23:11.566 "memory_domains": [ 00:23:11.566 { 00:23:11.566 "dma_device_id": "system", 00:23:11.566 "dma_device_type": 1 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.566 "dma_device_type": 2 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "dma_device_id": "system", 00:23:11.566 "dma_device_type": 1 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.566 "dma_device_type": 2 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "dma_device_id": "system", 00:23:11.566 "dma_device_type": 1 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.566 "dma_device_type": 2 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "dma_device_id": "system", 00:23:11.566 "dma_device_type": 1 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.566 "dma_device_type": 2 00:23:11.566 } 00:23:11.566 ], 00:23:11.566 "driver_specific": { 00:23:11.566 "raid": { 00:23:11.566 "uuid": "0e4e260a-e211-4246-a2c0-599a32a12924", 00:23:11.566 "strip_size_kb": 64, 00:23:11.566 "state": "online", 00:23:11.566 "raid_level": "raid0", 00:23:11.566 "superblock": true, 00:23:11.566 "num_base_bdevs": 4, 00:23:11.566 "num_base_bdevs_discovered": 4, 00:23:11.566 "num_base_bdevs_operational": 4, 00:23:11.566 "base_bdevs_list": [ 00:23:11.566 { 00:23:11.566 "name": "NewBaseBdev", 00:23:11.566 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:11.566 "is_configured": true, 00:23:11.566 "data_offset": 2048, 00:23:11.566 "data_size": 63488 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "name": "BaseBdev2", 00:23:11.566 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:11.566 "is_configured": true, 00:23:11.566 "data_offset": 2048, 00:23:11.566 "data_size": 63488 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "name": "BaseBdev3", 00:23:11.566 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:11.566 "is_configured": true, 00:23:11.566 "data_offset": 2048, 00:23:11.566 "data_size": 63488 00:23:11.566 }, 00:23:11.566 { 00:23:11.566 "name": "BaseBdev4", 00:23:11.566 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:11.566 "is_configured": true, 00:23:11.566 "data_offset": 2048, 00:23:11.566 "data_size": 63488 00:23:11.566 } 00:23:11.566 ] 00:23:11.566 } 00:23:11.566 } 00:23:11.566 }' 00:23:11.566 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:11.566 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:11.566 BaseBdev2 00:23:11.566 BaseBdev3 00:23:11.566 BaseBdev4' 00:23:11.566 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:11.566 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:11.566 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:12.132 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.132 "name": 
"NewBaseBdev", 00:23:12.132 "aliases": [ 00:23:12.132 "a19a7439-8471-4f3c-823e-6914fa440978" 00:23:12.132 ], 00:23:12.132 "product_name": "Malloc disk", 00:23:12.132 "block_size": 512, 00:23:12.132 "num_blocks": 65536, 00:23:12.132 "uuid": "a19a7439-8471-4f3c-823e-6914fa440978", 00:23:12.132 "assigned_rate_limits": { 00:23:12.132 "rw_ios_per_sec": 0, 00:23:12.132 "rw_mbytes_per_sec": 0, 00:23:12.132 "r_mbytes_per_sec": 0, 00:23:12.132 "w_mbytes_per_sec": 0 00:23:12.132 }, 00:23:12.132 "claimed": true, 00:23:12.132 "claim_type": "exclusive_write", 00:23:12.132 "zoned": false, 00:23:12.132 "supported_io_types": { 00:23:12.132 "read": true, 00:23:12.132 "write": true, 00:23:12.132 "unmap": true, 00:23:12.132 "flush": true, 00:23:12.132 "reset": true, 00:23:12.132 "nvme_admin": false, 00:23:12.132 "nvme_io": false, 00:23:12.132 "nvme_io_md": false, 00:23:12.132 "write_zeroes": true, 00:23:12.132 "zcopy": true, 00:23:12.132 "get_zone_info": false, 00:23:12.132 "zone_management": false, 00:23:12.132 "zone_append": false, 00:23:12.132 "compare": false, 00:23:12.132 "compare_and_write": false, 00:23:12.132 "abort": true, 00:23:12.132 "seek_hole": false, 00:23:12.132 "seek_data": false, 00:23:12.132 "copy": true, 00:23:12.132 "nvme_iov_md": false 00:23:12.133 }, 00:23:12.133 "memory_domains": [ 00:23:12.133 { 00:23:12.133 "dma_device_id": "system", 00:23:12.133 "dma_device_type": 1 00:23:12.133 }, 00:23:12.133 { 00:23:12.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.133 "dma_device_type": 2 00:23:12.133 } 00:23:12.133 ], 00:23:12.133 "driver_specific": {} 00:23:12.133 }' 00:23:12.133 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.133 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.133 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:12.133 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.133 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.133 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:12.133 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.133 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.391 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:12.391 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.391 11:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.391 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:12.391 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:12.391 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:12.391 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:12.650 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.650 "name": "BaseBdev2", 00:23:12.650 "aliases": [ 00:23:12.650 "fc685573-1656-4fdf-a157-c6856f9461d8" 00:23:12.650 ], 00:23:12.650 "product_name": "Malloc disk", 
00:23:12.650 "block_size": 512, 00:23:12.650 "num_blocks": 65536, 00:23:12.650 "uuid": "fc685573-1656-4fdf-a157-c6856f9461d8", 00:23:12.650 "assigned_rate_limits": { 00:23:12.650 "rw_ios_per_sec": 0, 00:23:12.650 "rw_mbytes_per_sec": 0, 00:23:12.650 "r_mbytes_per_sec": 0, 00:23:12.650 "w_mbytes_per_sec": 0 00:23:12.650 }, 00:23:12.650 "claimed": true, 00:23:12.650 "claim_type": "exclusive_write", 00:23:12.650 "zoned": false, 00:23:12.650 "supported_io_types": { 00:23:12.650 "read": true, 00:23:12.650 "write": true, 00:23:12.650 "unmap": true, 00:23:12.650 "flush": true, 00:23:12.650 "reset": true, 00:23:12.650 "nvme_admin": false, 00:23:12.650 "nvme_io": false, 00:23:12.650 "nvme_io_md": false, 00:23:12.650 "write_zeroes": true, 00:23:12.650 "zcopy": true, 00:23:12.650 "get_zone_info": false, 00:23:12.650 "zone_management": false, 00:23:12.650 "zone_append": false, 00:23:12.650 "compare": false, 00:23:12.650 "compare_and_write": false, 00:23:12.650 "abort": true, 00:23:12.650 "seek_hole": false, 00:23:12.650 "seek_data": false, 00:23:12.650 "copy": true, 00:23:12.650 "nvme_iov_md": false 00:23:12.650 }, 00:23:12.650 "memory_domains": [ 00:23:12.650 { 00:23:12.650 "dma_device_id": "system", 00:23:12.650 "dma_device_type": 1 00:23:12.650 }, 00:23:12.650 { 00:23:12.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.650 "dma_device_type": 2 00:23:12.650 } 00:23:12.650 ], 00:23:12.650 "driver_specific": {} 00:23:12.650 }' 00:23:12.650 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.650 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.650 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:12.650 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.650 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:12.908 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:13.167 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:13.167 "name": "BaseBdev3", 00:23:13.167 "aliases": [ 00:23:13.167 "8deadec4-1e07-4e54-92ab-02adc660cecc" 00:23:13.167 ], 00:23:13.167 "product_name": "Malloc disk", 00:23:13.167 "block_size": 512, 00:23:13.167 "num_blocks": 65536, 00:23:13.167 "uuid": "8deadec4-1e07-4e54-92ab-02adc660cecc", 00:23:13.167 
"assigned_rate_limits": { 00:23:13.167 "rw_ios_per_sec": 0, 00:23:13.167 "rw_mbytes_per_sec": 0, 00:23:13.167 "r_mbytes_per_sec": 0, 00:23:13.167 "w_mbytes_per_sec": 0 00:23:13.167 }, 00:23:13.167 "claimed": true, 00:23:13.167 "claim_type": "exclusive_write", 00:23:13.167 "zoned": false, 00:23:13.167 "supported_io_types": { 00:23:13.167 "read": true, 00:23:13.167 "write": true, 00:23:13.167 "unmap": true, 00:23:13.167 "flush": true, 00:23:13.167 "reset": true, 00:23:13.167 "nvme_admin": false, 00:23:13.167 "nvme_io": false, 00:23:13.167 "nvme_io_md": false, 00:23:13.167 "write_zeroes": true, 00:23:13.167 "zcopy": true, 00:23:13.167 "get_zone_info": false, 00:23:13.167 "zone_management": false, 00:23:13.167 "zone_append": false, 00:23:13.167 "compare": false, 00:23:13.167 "compare_and_write": false, 00:23:13.167 "abort": true, 00:23:13.167 "seek_hole": false, 00:23:13.167 "seek_data": false, 00:23:13.167 "copy": true, 00:23:13.167 "nvme_iov_md": false 00:23:13.167 }, 00:23:13.167 "memory_domains": [ 00:23:13.167 { 00:23:13.167 "dma_device_id": "system", 00:23:13.167 "dma_device_type": 1 00:23:13.167 }, 00:23:13.167 { 00:23:13.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.167 "dma_device_type": 2 00:23:13.167 } 00:23:13.167 ], 00:23:13.167 "driver_specific": {} 00:23:13.167 }' 00:23:13.167 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.426 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.426 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:13.426 11:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.426 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.426 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:13.426 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.426 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.683 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:13.683 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.683 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.683 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:13.683 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:13.683 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:13.683 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:13.941 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:13.941 "name": "BaseBdev4", 00:23:13.941 "aliases": [ 00:23:13.941 "241b89b1-811f-4f4b-8b65-38d0f8b71d5b" 00:23:13.941 ], 00:23:13.941 "product_name": "Malloc disk", 00:23:13.941 "block_size": 512, 00:23:13.941 "num_blocks": 65536, 00:23:13.941 "uuid": "241b89b1-811f-4f4b-8b65-38d0f8b71d5b", 00:23:13.941 "assigned_rate_limits": { 00:23:13.941 "rw_ios_per_sec": 0, 00:23:13.941 "rw_mbytes_per_sec": 0, 00:23:13.941 "r_mbytes_per_sec": 0, 00:23:13.941 
"w_mbytes_per_sec": 0 00:23:13.941 }, 00:23:13.941 "claimed": true, 00:23:13.941 "claim_type": "exclusive_write", 00:23:13.941 "zoned": false, 00:23:13.941 "supported_io_types": { 00:23:13.941 "read": true, 00:23:13.941 "write": true, 00:23:13.941 "unmap": true, 00:23:13.941 "flush": true, 00:23:13.941 "reset": true, 00:23:13.941 "nvme_admin": false, 00:23:13.941 "nvme_io": false, 00:23:13.941 "nvme_io_md": false, 00:23:13.941 "write_zeroes": true, 00:23:13.941 "zcopy": true, 00:23:13.941 "get_zone_info": false, 00:23:13.942 "zone_management": false, 00:23:13.942 "zone_append": false, 00:23:13.942 "compare": false, 00:23:13.942 "compare_and_write": false, 00:23:13.942 "abort": true, 00:23:13.942 "seek_hole": false, 00:23:13.942 "seek_data": false, 00:23:13.942 "copy": true, 00:23:13.942 "nvme_iov_md": false 00:23:13.942 }, 00:23:13.942 "memory_domains": [ 00:23:13.942 { 00:23:13.942 "dma_device_id": "system", 00:23:13.942 "dma_device_type": 1 00:23:13.942 }, 00:23:13.942 { 00:23:13.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.942 "dma_device_type": 2 00:23:13.942 } 00:23:13.942 ], 00:23:13.942 "driver_specific": {} 00:23:13.942 }' 00:23:13.942 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.942 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.200 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:14.200 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.200 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:14.200 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:14.200 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.200 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:14.200 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:14.200 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.458 11:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:14.458 11:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:14.458 11:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:14.716 [2024-07-13 11:34:49.251788] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:14.716 [2024-07-13 11:34:49.251923] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:14.716 [2024-07-13 11:34:49.252063] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:14.716 [2024-07-13 11:34:49.252221] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:14.716 [2024-07-13 11:34:49.252317] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 135611 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 135611 ']' 00:23:14.716 11:34:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 135611 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 135611 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:14.716 killing process with pid 135611 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 135611' 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 135611 00:23:14.716 11:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 135611 00:23:14.716 [2024-07-13 11:34:49.285266] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:14.974 [2024-07-13 11:34:49.534484] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:15.908 ************************************ 00:23:15.908 END TEST raid_state_function_test_sb 00:23:15.908 ************************************ 00:23:15.908 11:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:15.908 00:23:15.908 real 0m34.212s 00:23:15.908 user 1m4.701s 00:23:15.908 sys 0m3.494s 00:23:15.908 11:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:15.908 11:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:15.908 11:34:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:15.908 11:34:50 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:23:15.908 11:34:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:23:15.908 11:34:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:15.908 11:34:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:15.908 ************************************ 00:23:15.908 START TEST raid_superblock_test 00:23:15.908 ************************************ 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:15.908 11:34:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=136763 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 136763 /var/tmp/spdk-raid.sock 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 136763 ']' 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:15.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.908 11:34:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.908 [2024-07-13 11:34:50.582511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
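In outline, the raid_superblock_test trace that follows reduces to a short JSON-RPC sequence against the bdev_svc app launched above (socket /var/tmp/spdk-raid.sock). A condensed sketch is shown here; the rpc() wrapper is added only for brevity and is not part of bdev_raid.sh, while every RPC call and jq filter used below appears verbatim later in this trace:

# Convenience wrapper (added for this sketch only) around the rpc.py client used throughout the trace.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Four 32 MiB malloc bdevs (65536 blocks of 512 B, matching the JSON dumps in this log),
# each wrapped in a passthru bdev with a fixed UUID.
for i in 1 2 3 4; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble a raid0 volume with a 64 KiB strip size; -s writes a superblock to the base bdevs.
rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# Verify the array came online with all four base bdevs discovered.
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

# Tear the array down; the superblocks remain on the underlying malloc bdevs.
rpc bdev_raid_delete raid_bdev1

The remainder of the trace exercises those persisted superblocks: re-creating the passthru bdevs makes bdev_raid examine them and rebuild raid_bdev1 in the "configuring" state, and the bdev_raid_create attempt issued directly over the raw malloc bdevs is expected to fail with -17 ("File exists") for the same reason.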
00:23:15.908 [2024-07-13 11:34:50.582951] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136763 ] 00:23:16.167 [2024-07-13 11:34:50.751831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.426 [2024-07-13 11:34:51.006103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.698 [2024-07-13 11:34:51.200251] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:16.698 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:16.969 malloc1 00:23:16.969 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:17.227 [2024-07-13 11:34:51.864552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:17.227 [2024-07-13 11:34:51.864849] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.227 [2024-07-13 11:34:51.864921] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:17.227 [2024-07-13 11:34:51.865059] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.227 [2024-07-13 11:34:51.866980] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.227 [2024-07-13 11:34:51.867141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:17.227 pt1 00:23:17.227 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:17.227 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:17.227 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:17.227 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:17.227 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:17.227 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:23:17.227 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:17.227 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:17.227 11:34:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:17.485 malloc2 00:23:17.485 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:17.744 [2024-07-13 11:34:52.405207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:17.744 [2024-07-13 11:34:52.405452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.744 [2024-07-13 11:34:52.405518] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:23:17.744 [2024-07-13 11:34:52.405813] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.744 [2024-07-13 11:34:52.408326] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.744 [2024-07-13 11:34:52.408489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:17.744 pt2 00:23:17.744 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:17.744 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:17.744 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:23:17.744 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:23:17.744 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:17.744 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:17.744 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:17.744 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:17.744 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:18.003 malloc3 00:23:18.003 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:18.262 [2024-07-13 11:34:52.873565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:18.262 [2024-07-13 11:34:52.873790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.262 [2024-07-13 11:34:52.873853] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:23:18.262 [2024-07-13 11:34:52.873973] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.262 [2024-07-13 11:34:52.876166] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.263 [2024-07-13 11:34:52.876332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:18.263 pt3 00:23:18.263 
11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:18.263 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:18.263 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:23:18.263 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:23:18.263 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:18.263 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:18.263 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:18.263 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:18.263 11:34:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:18.521 malloc4 00:23:18.521 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:18.778 [2024-07-13 11:34:53.285992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:18.778 [2024-07-13 11:34:53.286220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.779 [2024-07-13 11:34:53.286284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:23:18.779 [2024-07-13 11:34:53.286542] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.779 [2024-07-13 11:34:53.288535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.779 [2024-07-13 11:34:53.288689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:18.779 pt4 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:18.779 [2024-07-13 11:34:53.474054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:18.779 [2024-07-13 11:34:53.475997] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:18.779 [2024-07-13 11:34:53.476181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:18.779 [2024-07-13 11:34:53.476272] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:18.779 [2024-07-13 11:34:53.476620] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:23:18.779 [2024-07-13 11:34:53.476781] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:18.779 [2024-07-13 11:34:53.476958] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:18.779 [2024-07-13 11:34:53.477343] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:23:18.779 [2024-07-13 11:34:53.477461] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:23:18.779 [2024-07-13 11:34:53.477691] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.779 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.037 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.037 "name": "raid_bdev1", 00:23:19.037 "uuid": "9f9906bc-43ba-4182-a531-1156930ea57c", 00:23:19.037 "strip_size_kb": 64, 00:23:19.037 "state": "online", 00:23:19.037 "raid_level": "raid0", 00:23:19.037 "superblock": true, 00:23:19.037 "num_base_bdevs": 4, 00:23:19.037 "num_base_bdevs_discovered": 4, 00:23:19.037 "num_base_bdevs_operational": 4, 00:23:19.037 "base_bdevs_list": [ 00:23:19.037 { 00:23:19.037 "name": "pt1", 00:23:19.037 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:19.037 "is_configured": true, 00:23:19.037 "data_offset": 2048, 00:23:19.037 "data_size": 63488 00:23:19.037 }, 00:23:19.037 { 00:23:19.037 "name": "pt2", 00:23:19.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.037 "is_configured": true, 00:23:19.037 "data_offset": 2048, 00:23:19.037 "data_size": 63488 00:23:19.037 }, 00:23:19.037 { 00:23:19.037 "name": "pt3", 00:23:19.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:19.037 "is_configured": true, 00:23:19.037 "data_offset": 2048, 00:23:19.037 "data_size": 63488 00:23:19.037 }, 00:23:19.037 { 00:23:19.037 "name": "pt4", 00:23:19.037 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:19.037 "is_configured": true, 00:23:19.037 "data_offset": 2048, 00:23:19.037 "data_size": 63488 00:23:19.037 } 00:23:19.037 ] 00:23:19.037 }' 00:23:19.037 11:34:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.037 11:34:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:19.970 [2024-07-13 11:34:54.566444] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:19.970 "name": "raid_bdev1", 00:23:19.970 "aliases": [ 00:23:19.970 "9f9906bc-43ba-4182-a531-1156930ea57c" 00:23:19.970 ], 00:23:19.970 "product_name": "Raid Volume", 00:23:19.970 "block_size": 512, 00:23:19.970 "num_blocks": 253952, 00:23:19.970 "uuid": "9f9906bc-43ba-4182-a531-1156930ea57c", 00:23:19.970 "assigned_rate_limits": { 00:23:19.970 "rw_ios_per_sec": 0, 00:23:19.970 "rw_mbytes_per_sec": 0, 00:23:19.970 "r_mbytes_per_sec": 0, 00:23:19.970 "w_mbytes_per_sec": 0 00:23:19.970 }, 00:23:19.970 "claimed": false, 00:23:19.970 "zoned": false, 00:23:19.970 "supported_io_types": { 00:23:19.970 "read": true, 00:23:19.970 "write": true, 00:23:19.970 "unmap": true, 00:23:19.970 "flush": true, 00:23:19.970 "reset": true, 00:23:19.970 "nvme_admin": false, 00:23:19.970 "nvme_io": false, 00:23:19.970 "nvme_io_md": false, 00:23:19.970 "write_zeroes": true, 00:23:19.970 "zcopy": false, 00:23:19.970 "get_zone_info": false, 00:23:19.970 "zone_management": false, 00:23:19.970 "zone_append": false, 00:23:19.970 "compare": false, 00:23:19.970 "compare_and_write": false, 00:23:19.970 "abort": false, 00:23:19.970 "seek_hole": false, 00:23:19.970 "seek_data": false, 00:23:19.970 "copy": false, 00:23:19.970 "nvme_iov_md": false 00:23:19.970 }, 00:23:19.970 "memory_domains": [ 00:23:19.970 { 00:23:19.970 "dma_device_id": "system", 00:23:19.970 "dma_device_type": 1 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.970 "dma_device_type": 2 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "dma_device_id": "system", 00:23:19.970 "dma_device_type": 1 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.970 "dma_device_type": 2 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "dma_device_id": "system", 00:23:19.970 "dma_device_type": 1 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.970 "dma_device_type": 2 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "dma_device_id": "system", 00:23:19.970 "dma_device_type": 1 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:19.970 "dma_device_type": 2 00:23:19.970 } 00:23:19.970 ], 00:23:19.970 "driver_specific": { 00:23:19.970 "raid": { 00:23:19.970 "uuid": "9f9906bc-43ba-4182-a531-1156930ea57c", 00:23:19.970 "strip_size_kb": 64, 00:23:19.970 "state": "online", 00:23:19.970 "raid_level": "raid0", 00:23:19.970 "superblock": true, 00:23:19.970 "num_base_bdevs": 4, 00:23:19.970 "num_base_bdevs_discovered": 4, 00:23:19.970 "num_base_bdevs_operational": 4, 00:23:19.970 "base_bdevs_list": [ 00:23:19.970 { 00:23:19.970 "name": "pt1", 00:23:19.970 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:19.970 "is_configured": true, 00:23:19.970 "data_offset": 2048, 00:23:19.970 "data_size": 63488 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "name": "pt2", 00:23:19.970 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:19.970 "is_configured": true, 00:23:19.970 "data_offset": 2048, 00:23:19.970 "data_size": 63488 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "name": "pt3", 00:23:19.970 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:19.970 "is_configured": true, 00:23:19.970 "data_offset": 2048, 00:23:19.970 "data_size": 63488 00:23:19.970 }, 00:23:19.970 { 00:23:19.970 "name": "pt4", 00:23:19.970 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:19.970 "is_configured": true, 00:23:19.970 "data_offset": 2048, 00:23:19.970 "data_size": 63488 00:23:19.970 } 00:23:19.970 ] 00:23:19.970 } 00:23:19.970 } 00:23:19.970 }' 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:19.970 pt2 00:23:19.970 pt3 00:23:19.970 pt4' 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:19.970 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:20.229 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:20.229 "name": "pt1", 00:23:20.229 "aliases": [ 00:23:20.229 "00000000-0000-0000-0000-000000000001" 00:23:20.229 ], 00:23:20.229 "product_name": "passthru", 00:23:20.229 "block_size": 512, 00:23:20.229 "num_blocks": 65536, 00:23:20.229 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:20.229 "assigned_rate_limits": { 00:23:20.229 "rw_ios_per_sec": 0, 00:23:20.229 "rw_mbytes_per_sec": 0, 00:23:20.229 "r_mbytes_per_sec": 0, 00:23:20.229 "w_mbytes_per_sec": 0 00:23:20.229 }, 00:23:20.229 "claimed": true, 00:23:20.229 "claim_type": "exclusive_write", 00:23:20.229 "zoned": false, 00:23:20.229 "supported_io_types": { 00:23:20.229 "read": true, 00:23:20.229 "write": true, 00:23:20.229 "unmap": true, 00:23:20.229 "flush": true, 00:23:20.229 "reset": true, 00:23:20.229 "nvme_admin": false, 00:23:20.229 "nvme_io": false, 00:23:20.229 "nvme_io_md": false, 00:23:20.229 "write_zeroes": true, 00:23:20.229 "zcopy": true, 00:23:20.229 "get_zone_info": false, 00:23:20.229 "zone_management": false, 00:23:20.229 "zone_append": false, 00:23:20.229 "compare": false, 00:23:20.229 "compare_and_write": false, 00:23:20.229 "abort": true, 00:23:20.229 "seek_hole": false, 00:23:20.229 "seek_data": false, 00:23:20.229 "copy": true, 00:23:20.229 "nvme_iov_md": false 00:23:20.229 }, 00:23:20.229 "memory_domains": [ 00:23:20.229 { 00:23:20.229 "dma_device_id": "system", 00:23:20.229 "dma_device_type": 1 00:23:20.229 }, 00:23:20.229 { 00:23:20.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.229 "dma_device_type": 2 00:23:20.229 } 00:23:20.229 ], 00:23:20.229 "driver_specific": { 00:23:20.229 "passthru": { 00:23:20.229 "name": "pt1", 00:23:20.229 "base_bdev_name": "malloc1" 00:23:20.229 } 00:23:20.229 } 00:23:20.229 }' 00:23:20.229 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:20.229 11:34:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:20.229 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:20.229 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:20.487 11:34:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:20.487 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:20.487 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:20.487 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:20.487 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:20.487 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:20.487 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:20.746 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:20.746 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:20.746 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:20.746 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:20.746 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:20.746 "name": "pt2", 00:23:20.746 "aliases": [ 00:23:20.746 "00000000-0000-0000-0000-000000000002" 00:23:20.746 ], 00:23:20.746 "product_name": "passthru", 00:23:20.746 "block_size": 512, 00:23:20.746 "num_blocks": 65536, 00:23:20.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:20.746 "assigned_rate_limits": { 00:23:20.746 "rw_ios_per_sec": 0, 00:23:20.746 "rw_mbytes_per_sec": 0, 00:23:20.746 "r_mbytes_per_sec": 0, 00:23:20.746 "w_mbytes_per_sec": 0 00:23:20.746 }, 00:23:20.746 "claimed": true, 00:23:20.746 "claim_type": "exclusive_write", 00:23:20.746 "zoned": false, 00:23:20.746 "supported_io_types": { 00:23:20.746 "read": true, 00:23:20.746 "write": true, 00:23:20.746 "unmap": true, 00:23:20.746 "flush": true, 00:23:20.746 "reset": true, 00:23:20.746 "nvme_admin": false, 00:23:20.746 "nvme_io": false, 00:23:20.746 "nvme_io_md": false, 00:23:20.746 "write_zeroes": true, 00:23:20.746 "zcopy": true, 00:23:20.746 "get_zone_info": false, 00:23:20.746 "zone_management": false, 00:23:20.746 "zone_append": false, 00:23:20.746 "compare": false, 00:23:20.746 "compare_and_write": false, 00:23:20.746 "abort": true, 00:23:20.746 "seek_hole": false, 00:23:20.746 "seek_data": false, 00:23:20.746 "copy": true, 00:23:20.746 "nvme_iov_md": false 00:23:20.746 }, 00:23:20.746 "memory_domains": [ 00:23:20.746 { 00:23:20.746 "dma_device_id": "system", 00:23:20.746 "dma_device_type": 1 00:23:20.746 }, 00:23:20.746 { 00:23:20.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:20.746 "dma_device_type": 2 00:23:20.746 } 00:23:20.746 ], 00:23:20.746 "driver_specific": { 00:23:20.746 "passthru": { 00:23:20.746 "name": "pt2", 00:23:20.746 "base_bdev_name": "malloc2" 00:23:20.746 } 00:23:20.746 } 00:23:20.746 }' 00:23:20.746 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.004 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.004 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:23:21.004 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.004 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.004 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:21.004 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.004 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.004 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:21.005 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.263 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.263 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:21.263 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:21.263 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:21.263 11:34:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:21.522 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:21.522 "name": "pt3", 00:23:21.522 "aliases": [ 00:23:21.522 "00000000-0000-0000-0000-000000000003" 00:23:21.522 ], 00:23:21.522 "product_name": "passthru", 00:23:21.522 "block_size": 512, 00:23:21.522 "num_blocks": 65536, 00:23:21.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:21.522 "assigned_rate_limits": { 00:23:21.522 "rw_ios_per_sec": 0, 00:23:21.522 "rw_mbytes_per_sec": 0, 00:23:21.522 "r_mbytes_per_sec": 0, 00:23:21.522 "w_mbytes_per_sec": 0 00:23:21.522 }, 00:23:21.522 "claimed": true, 00:23:21.522 "claim_type": "exclusive_write", 00:23:21.522 "zoned": false, 00:23:21.522 "supported_io_types": { 00:23:21.522 "read": true, 00:23:21.522 "write": true, 00:23:21.522 "unmap": true, 00:23:21.522 "flush": true, 00:23:21.522 "reset": true, 00:23:21.522 "nvme_admin": false, 00:23:21.522 "nvme_io": false, 00:23:21.522 "nvme_io_md": false, 00:23:21.522 "write_zeroes": true, 00:23:21.522 "zcopy": true, 00:23:21.522 "get_zone_info": false, 00:23:21.522 "zone_management": false, 00:23:21.522 "zone_append": false, 00:23:21.522 "compare": false, 00:23:21.522 "compare_and_write": false, 00:23:21.522 "abort": true, 00:23:21.522 "seek_hole": false, 00:23:21.522 "seek_data": false, 00:23:21.522 "copy": true, 00:23:21.522 "nvme_iov_md": false 00:23:21.522 }, 00:23:21.522 "memory_domains": [ 00:23:21.522 { 00:23:21.522 "dma_device_id": "system", 00:23:21.522 "dma_device_type": 1 00:23:21.522 }, 00:23:21.522 { 00:23:21.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.522 "dma_device_type": 2 00:23:21.522 } 00:23:21.522 ], 00:23:21.522 "driver_specific": { 00:23:21.522 "passthru": { 00:23:21.522 "name": "pt3", 00:23:21.522 "base_bdev_name": "malloc3" 00:23:21.522 } 00:23:21.522 } 00:23:21.522 }' 00:23:21.522 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.522 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:21.522 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:21.522 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.522 11:34:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:21.522 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:21.522 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.781 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:21.781 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:21.781 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.781 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:21.781 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:21.781 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:21.781 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:21.781 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:22.040 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:22.040 "name": "pt4", 00:23:22.040 "aliases": [ 00:23:22.040 "00000000-0000-0000-0000-000000000004" 00:23:22.040 ], 00:23:22.040 "product_name": "passthru", 00:23:22.040 "block_size": 512, 00:23:22.040 "num_blocks": 65536, 00:23:22.040 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:22.040 "assigned_rate_limits": { 00:23:22.040 "rw_ios_per_sec": 0, 00:23:22.040 "rw_mbytes_per_sec": 0, 00:23:22.040 "r_mbytes_per_sec": 0, 00:23:22.040 "w_mbytes_per_sec": 0 00:23:22.040 }, 00:23:22.040 "claimed": true, 00:23:22.040 "claim_type": "exclusive_write", 00:23:22.040 "zoned": false, 00:23:22.040 "supported_io_types": { 00:23:22.040 "read": true, 00:23:22.040 "write": true, 00:23:22.040 "unmap": true, 00:23:22.040 "flush": true, 00:23:22.040 "reset": true, 00:23:22.040 "nvme_admin": false, 00:23:22.040 "nvme_io": false, 00:23:22.040 "nvme_io_md": false, 00:23:22.040 "write_zeroes": true, 00:23:22.040 "zcopy": true, 00:23:22.040 "get_zone_info": false, 00:23:22.040 "zone_management": false, 00:23:22.040 "zone_append": false, 00:23:22.040 "compare": false, 00:23:22.040 "compare_and_write": false, 00:23:22.040 "abort": true, 00:23:22.040 "seek_hole": false, 00:23:22.040 "seek_data": false, 00:23:22.040 "copy": true, 00:23:22.040 "nvme_iov_md": false 00:23:22.040 }, 00:23:22.040 "memory_domains": [ 00:23:22.040 { 00:23:22.040 "dma_device_id": "system", 00:23:22.040 "dma_device_type": 1 00:23:22.040 }, 00:23:22.040 { 00:23:22.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:22.040 "dma_device_type": 2 00:23:22.040 } 00:23:22.040 ], 00:23:22.040 "driver_specific": { 00:23:22.040 "passthru": { 00:23:22.040 "name": "pt4", 00:23:22.040 "base_bdev_name": "malloc4" 00:23:22.040 } 00:23:22.040 } 00:23:22.040 }' 00:23:22.040 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:22.040 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:22.040 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:22.040 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:22.299 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:22.299 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:23:22.299 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:22.299 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:22.299 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:22.299 11:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:22.299 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:22.557 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:22.557 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:22.557 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:22.815 [2024-07-13 11:34:57.358975] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:22.815 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9f9906bc-43ba-4182-a531-1156930ea57c 00:23:22.815 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9f9906bc-43ba-4182-a531-1156930ea57c ']' 00:23:22.815 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:23.073 [2024-07-13 11:34:57.642762] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:23.073 [2024-07-13 11:34:57.642907] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.073 [2024-07-13 11:34:57.643116] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.073 [2024-07-13 11:34:57.643295] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.073 [2024-07-13 11:34:57.643400] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:23:23.073 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.073 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:23.332 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:23.332 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:23.332 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:23.332 11:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:23.332 11:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:23.332 11:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:23.591 11:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:23.591 11:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:23.850 11:34:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:23.850 11:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:24.108 11:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:24.108 11:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:24.367 11:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:24.367 [2024-07-13 11:34:59.106998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:24.367 [2024-07-13 11:34:59.109079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:24.367 [2024-07-13 11:34:59.109273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:24.367 [2024-07-13 11:34:59.109367] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:24.367 [2024-07-13 11:34:59.109527] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:24.367 [2024-07-13 11:34:59.109790] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:24.367 [2024-07-13 11:34:59.109947] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:23:24.367 [2024-07-13 11:34:59.110116] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:24.367 [2024-07-13 11:34:59.110189] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:24.367 [2024-07-13 11:34:59.110268] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:23:24.367 request: 00:23:24.367 { 00:23:24.367 "name": "raid_bdev1", 00:23:24.367 "raid_level": "raid0", 00:23:24.367 "base_bdevs": [ 00:23:24.367 "malloc1", 00:23:24.367 "malloc2", 00:23:24.367 "malloc3", 00:23:24.367 "malloc4" 00:23:24.367 ], 00:23:24.367 "strip_size_kb": 64, 00:23:24.367 "superblock": false, 00:23:24.367 "method": "bdev_raid_create", 00:23:24.367 "req_id": 1 00:23:24.367 } 00:23:24.367 Got JSON-RPC error response 00:23:24.367 response: 00:23:24.367 { 00:23:24.367 "code": -17, 00:23:24.367 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:24.367 } 00:23:24.626 11:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:23:24.626 11:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:24.626 11:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:24.626 11:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:24.626 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.626 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:24.626 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:24.626 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:24.626 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:24.884 [2024-07-13 11:34:59.535029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:24.884 [2024-07-13 11:34:59.535206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.884 [2024-07-13 11:34:59.535266] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:24.884 [2024-07-13 11:34:59.535543] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.884 [2024-07-13 11:34:59.537894] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.884 [2024-07-13 11:34:59.538050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:24.884 [2024-07-13 11:34:59.538224] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:24.884 [2024-07-13 11:34:59.538367] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:24.884 pt1 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:24.884 11:34:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.884 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:25.142 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:25.142 "name": "raid_bdev1", 00:23:25.142 "uuid": "9f9906bc-43ba-4182-a531-1156930ea57c", 00:23:25.142 "strip_size_kb": 64, 00:23:25.142 "state": "configuring", 00:23:25.142 "raid_level": "raid0", 00:23:25.142 "superblock": true, 00:23:25.142 "num_base_bdevs": 4, 00:23:25.142 "num_base_bdevs_discovered": 1, 00:23:25.142 "num_base_bdevs_operational": 4, 00:23:25.142 "base_bdevs_list": [ 00:23:25.142 { 00:23:25.142 "name": "pt1", 00:23:25.142 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:25.142 "is_configured": true, 00:23:25.142 "data_offset": 2048, 00:23:25.142 "data_size": 63488 00:23:25.142 }, 00:23:25.142 { 00:23:25.142 "name": null, 00:23:25.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:25.142 "is_configured": false, 00:23:25.142 "data_offset": 2048, 00:23:25.142 "data_size": 63488 00:23:25.142 }, 00:23:25.142 { 00:23:25.142 "name": null, 00:23:25.142 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:25.142 "is_configured": false, 00:23:25.142 "data_offset": 2048, 00:23:25.142 "data_size": 63488 00:23:25.142 }, 00:23:25.142 { 00:23:25.142 "name": null, 00:23:25.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:25.142 "is_configured": false, 00:23:25.142 "data_offset": 2048, 00:23:25.142 "data_size": 63488 00:23:25.142 } 00:23:25.142 ] 00:23:25.142 }' 00:23:25.142 11:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:25.142 11:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.709 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:23:25.709 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:25.966 [2024-07-13 11:35:00.623832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:25.966 [2024-07-13 11:35:00.624299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.966 [2024-07-13 11:35:00.624572] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:25.966 [2024-07-13 11:35:00.624808] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.966 [2024-07-13 11:35:00.625503] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:23:25.966 [2024-07-13 11:35:00.625732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:25.966 [2024-07-13 11:35:00.626045] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:25.966 [2024-07-13 11:35:00.626207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:25.966 pt2 00:23:25.966 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:26.224 [2024-07-13 11:35:00.815841] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:26.224 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:23:26.224 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:26.224 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:26.224 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:26.224 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:26.224 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:26.225 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.225 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:26.225 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.225 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.225 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.225 11:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.482 11:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.482 "name": "raid_bdev1", 00:23:26.482 "uuid": "9f9906bc-43ba-4182-a531-1156930ea57c", 00:23:26.483 "strip_size_kb": 64, 00:23:26.483 "state": "configuring", 00:23:26.483 "raid_level": "raid0", 00:23:26.483 "superblock": true, 00:23:26.483 "num_base_bdevs": 4, 00:23:26.483 "num_base_bdevs_discovered": 1, 00:23:26.483 "num_base_bdevs_operational": 4, 00:23:26.483 "base_bdevs_list": [ 00:23:26.483 { 00:23:26.483 "name": "pt1", 00:23:26.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:26.483 "is_configured": true, 00:23:26.483 "data_offset": 2048, 00:23:26.483 "data_size": 63488 00:23:26.483 }, 00:23:26.483 { 00:23:26.483 "name": null, 00:23:26.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:26.483 "is_configured": false, 00:23:26.483 "data_offset": 2048, 00:23:26.483 "data_size": 63488 00:23:26.483 }, 00:23:26.483 { 00:23:26.483 "name": null, 00:23:26.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:26.483 "is_configured": false, 00:23:26.483 "data_offset": 2048, 00:23:26.483 "data_size": 63488 00:23:26.483 }, 00:23:26.483 { 00:23:26.483 "name": null, 00:23:26.483 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:26.483 "is_configured": false, 00:23:26.483 "data_offset": 2048, 00:23:26.483 "data_size": 63488 00:23:26.483 } 00:23:26.483 ] 00:23:26.483 }' 00:23:26.483 11:35:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.483 11:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:27.049 11:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:27.049 11:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:27.049 11:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:27.307 [2024-07-13 11:35:01.969200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:27.307 [2024-07-13 11:35:01.969815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.307 [2024-07-13 11:35:01.970065] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:27.307 [2024-07-13 11:35:01.970331] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.307 [2024-07-13 11:35:01.970988] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.307 [2024-07-13 11:35:01.971230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:27.307 [2024-07-13 11:35:01.971519] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:27.307 [2024-07-13 11:35:01.971675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:27.307 pt2 00:23:27.307 11:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:27.307 11:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:27.307 11:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:27.565 [2024-07-13 11:35:02.193225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:27.565 [2024-07-13 11:35:02.193513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.565 [2024-07-13 11:35:02.193721] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:27.565 [2024-07-13 11:35:02.193949] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.565 [2024-07-13 11:35:02.194498] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.565 [2024-07-13 11:35:02.194726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:27.565 [2024-07-13 11:35:02.195054] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:27.565 [2024-07-13 11:35:02.195210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:27.565 pt3 00:23:27.565 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:27.565 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:27.565 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:27.824 [2024-07-13 11:35:02.433251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:23:27.824 [2024-07-13 11:35:02.433511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.824 [2024-07-13 11:35:02.433719] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:23:27.824 [2024-07-13 11:35:02.433953] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.824 [2024-07-13 11:35:02.434498] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.824 [2024-07-13 11:35:02.434729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:27.824 [2024-07-13 11:35:02.435056] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:27.824 [2024-07-13 11:35:02.435217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:27.824 [2024-07-13 11:35:02.435429] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:23:27.824 [2024-07-13 11:35:02.435534] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:27.824 [2024-07-13 11:35:02.435654] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:27.824 [2024-07-13 11:35:02.436050] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:23:27.824 [2024-07-13 11:35:02.436193] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:23:27.824 [2024-07-13 11:35:02.436406] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.824 pt4 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.824 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.082 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:28.082 "name": "raid_bdev1", 00:23:28.082 "uuid": "9f9906bc-43ba-4182-a531-1156930ea57c", 00:23:28.082 "strip_size_kb": 64, 00:23:28.082 "state": "online", 00:23:28.082 
"raid_level": "raid0", 00:23:28.082 "superblock": true, 00:23:28.082 "num_base_bdevs": 4, 00:23:28.082 "num_base_bdevs_discovered": 4, 00:23:28.082 "num_base_bdevs_operational": 4, 00:23:28.082 "base_bdevs_list": [ 00:23:28.082 { 00:23:28.082 "name": "pt1", 00:23:28.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.082 "is_configured": true, 00:23:28.082 "data_offset": 2048, 00:23:28.082 "data_size": 63488 00:23:28.083 }, 00:23:28.083 { 00:23:28.083 "name": "pt2", 00:23:28.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.083 "is_configured": true, 00:23:28.083 "data_offset": 2048, 00:23:28.083 "data_size": 63488 00:23:28.083 }, 00:23:28.083 { 00:23:28.083 "name": "pt3", 00:23:28.083 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:28.083 "is_configured": true, 00:23:28.083 "data_offset": 2048, 00:23:28.083 "data_size": 63488 00:23:28.083 }, 00:23:28.083 { 00:23:28.083 "name": "pt4", 00:23:28.083 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:28.083 "is_configured": true, 00:23:28.083 "data_offset": 2048, 00:23:28.083 "data_size": 63488 00:23:28.083 } 00:23:28.083 ] 00:23:28.083 }' 00:23:28.083 11:35:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:28.083 11:35:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:28.648 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:28.648 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:28.648 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:28.648 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:28.648 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:28.648 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:28.648 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:28.648 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:28.906 [2024-07-13 11:35:03.417694] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.906 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:28.906 "name": "raid_bdev1", 00:23:28.906 "aliases": [ 00:23:28.906 "9f9906bc-43ba-4182-a531-1156930ea57c" 00:23:28.906 ], 00:23:28.906 "product_name": "Raid Volume", 00:23:28.906 "block_size": 512, 00:23:28.906 "num_blocks": 253952, 00:23:28.906 "uuid": "9f9906bc-43ba-4182-a531-1156930ea57c", 00:23:28.906 "assigned_rate_limits": { 00:23:28.906 "rw_ios_per_sec": 0, 00:23:28.906 "rw_mbytes_per_sec": 0, 00:23:28.906 "r_mbytes_per_sec": 0, 00:23:28.906 "w_mbytes_per_sec": 0 00:23:28.906 }, 00:23:28.906 "claimed": false, 00:23:28.906 "zoned": false, 00:23:28.906 "supported_io_types": { 00:23:28.906 "read": true, 00:23:28.906 "write": true, 00:23:28.906 "unmap": true, 00:23:28.906 "flush": true, 00:23:28.906 "reset": true, 00:23:28.906 "nvme_admin": false, 00:23:28.906 "nvme_io": false, 00:23:28.906 "nvme_io_md": false, 00:23:28.906 "write_zeroes": true, 00:23:28.906 "zcopy": false, 00:23:28.906 "get_zone_info": false, 00:23:28.906 "zone_management": false, 00:23:28.906 "zone_append": false, 00:23:28.906 "compare": false, 00:23:28.906 "compare_and_write": false, 
00:23:28.906 "abort": false, 00:23:28.906 "seek_hole": false, 00:23:28.906 "seek_data": false, 00:23:28.906 "copy": false, 00:23:28.906 "nvme_iov_md": false 00:23:28.906 }, 00:23:28.906 "memory_domains": [ 00:23:28.906 { 00:23:28.906 "dma_device_id": "system", 00:23:28.906 "dma_device_type": 1 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.906 "dma_device_type": 2 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "dma_device_id": "system", 00:23:28.906 "dma_device_type": 1 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.906 "dma_device_type": 2 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "dma_device_id": "system", 00:23:28.906 "dma_device_type": 1 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.906 "dma_device_type": 2 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "dma_device_id": "system", 00:23:28.906 "dma_device_type": 1 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.906 "dma_device_type": 2 00:23:28.906 } 00:23:28.906 ], 00:23:28.906 "driver_specific": { 00:23:28.906 "raid": { 00:23:28.906 "uuid": "9f9906bc-43ba-4182-a531-1156930ea57c", 00:23:28.906 "strip_size_kb": 64, 00:23:28.906 "state": "online", 00:23:28.906 "raid_level": "raid0", 00:23:28.906 "superblock": true, 00:23:28.906 "num_base_bdevs": 4, 00:23:28.906 "num_base_bdevs_discovered": 4, 00:23:28.906 "num_base_bdevs_operational": 4, 00:23:28.906 "base_bdevs_list": [ 00:23:28.906 { 00:23:28.906 "name": "pt1", 00:23:28.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:28.906 "is_configured": true, 00:23:28.906 "data_offset": 2048, 00:23:28.906 "data_size": 63488 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "name": "pt2", 00:23:28.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:28.906 "is_configured": true, 00:23:28.906 "data_offset": 2048, 00:23:28.906 "data_size": 63488 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "name": "pt3", 00:23:28.906 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:28.906 "is_configured": true, 00:23:28.906 "data_offset": 2048, 00:23:28.906 "data_size": 63488 00:23:28.906 }, 00:23:28.906 { 00:23:28.906 "name": "pt4", 00:23:28.906 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:28.906 "is_configured": true, 00:23:28.906 "data_offset": 2048, 00:23:28.906 "data_size": 63488 00:23:28.906 } 00:23:28.906 ] 00:23:28.907 } 00:23:28.907 } 00:23:28.907 }' 00:23:28.907 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:28.907 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:28.907 pt2 00:23:28.907 pt3 00:23:28.907 pt4' 00:23:28.907 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:28.907 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:28.907 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:29.165 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:29.165 "name": "pt1", 00:23:29.165 "aliases": [ 00:23:29.165 "00000000-0000-0000-0000-000000000001" 00:23:29.165 ], 00:23:29.165 "product_name": "passthru", 00:23:29.165 "block_size": 512, 00:23:29.165 "num_blocks": 65536, 00:23:29.165 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:29.165 "assigned_rate_limits": { 00:23:29.165 "rw_ios_per_sec": 0, 00:23:29.165 "rw_mbytes_per_sec": 0, 00:23:29.165 "r_mbytes_per_sec": 0, 00:23:29.165 "w_mbytes_per_sec": 0 00:23:29.165 }, 00:23:29.165 "claimed": true, 00:23:29.165 "claim_type": "exclusive_write", 00:23:29.165 "zoned": false, 00:23:29.165 "supported_io_types": { 00:23:29.165 "read": true, 00:23:29.165 "write": true, 00:23:29.165 "unmap": true, 00:23:29.165 "flush": true, 00:23:29.165 "reset": true, 00:23:29.165 "nvme_admin": false, 00:23:29.165 "nvme_io": false, 00:23:29.165 "nvme_io_md": false, 00:23:29.165 "write_zeroes": true, 00:23:29.165 "zcopy": true, 00:23:29.165 "get_zone_info": false, 00:23:29.165 "zone_management": false, 00:23:29.165 "zone_append": false, 00:23:29.165 "compare": false, 00:23:29.165 "compare_and_write": false, 00:23:29.165 "abort": true, 00:23:29.165 "seek_hole": false, 00:23:29.165 "seek_data": false, 00:23:29.165 "copy": true, 00:23:29.165 "nvme_iov_md": false 00:23:29.165 }, 00:23:29.165 "memory_domains": [ 00:23:29.165 { 00:23:29.165 "dma_device_id": "system", 00:23:29.165 "dma_device_type": 1 00:23:29.165 }, 00:23:29.165 { 00:23:29.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.165 "dma_device_type": 2 00:23:29.165 } 00:23:29.165 ], 00:23:29.165 "driver_specific": { 00:23:29.165 "passthru": { 00:23:29.165 "name": "pt1", 00:23:29.165 "base_bdev_name": "malloc1" 00:23:29.165 } 00:23:29.165 } 00:23:29.165 }' 00:23:29.165 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.165 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.165 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:29.165 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.424 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.424 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:29.424 11:35:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.424 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.424 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:29.424 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.424 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.683 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:29.683 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:29.683 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:29.683 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.071 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.071 "name": "pt2", 00:23:30.071 "aliases": [ 00:23:30.071 "00000000-0000-0000-0000-000000000002" 00:23:30.071 ], 00:23:30.071 "product_name": "passthru", 00:23:30.071 "block_size": 512, 00:23:30.071 "num_blocks": 65536, 00:23:30.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:30.071 "assigned_rate_limits": { 00:23:30.071 "rw_ios_per_sec": 0, 00:23:30.071 "rw_mbytes_per_sec": 0, 
00:23:30.071 "r_mbytes_per_sec": 0, 00:23:30.071 "w_mbytes_per_sec": 0 00:23:30.071 }, 00:23:30.071 "claimed": true, 00:23:30.071 "claim_type": "exclusive_write", 00:23:30.071 "zoned": false, 00:23:30.071 "supported_io_types": { 00:23:30.071 "read": true, 00:23:30.071 "write": true, 00:23:30.071 "unmap": true, 00:23:30.071 "flush": true, 00:23:30.071 "reset": true, 00:23:30.071 "nvme_admin": false, 00:23:30.071 "nvme_io": false, 00:23:30.071 "nvme_io_md": false, 00:23:30.071 "write_zeroes": true, 00:23:30.071 "zcopy": true, 00:23:30.071 "get_zone_info": false, 00:23:30.071 "zone_management": false, 00:23:30.071 "zone_append": false, 00:23:30.071 "compare": false, 00:23:30.071 "compare_and_write": false, 00:23:30.071 "abort": true, 00:23:30.071 "seek_hole": false, 00:23:30.071 "seek_data": false, 00:23:30.071 "copy": true, 00:23:30.071 "nvme_iov_md": false 00:23:30.071 }, 00:23:30.071 "memory_domains": [ 00:23:30.071 { 00:23:30.071 "dma_device_id": "system", 00:23:30.071 "dma_device_type": 1 00:23:30.071 }, 00:23:30.071 { 00:23:30.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.071 "dma_device_type": 2 00:23:30.071 } 00:23:30.071 ], 00:23:30.071 "driver_specific": { 00:23:30.071 "passthru": { 00:23:30.071 "name": "pt2", 00:23:30.071 "base_bdev_name": "malloc2" 00:23:30.071 } 00:23:30.071 } 00:23:30.071 }' 00:23:30.071 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.071 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.071 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.071 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.071 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.071 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.071 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.071 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.344 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.344 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.344 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.344 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.344 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.344 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:30.344 11:35:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.603 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.603 "name": "pt3", 00:23:30.603 "aliases": [ 00:23:30.603 "00000000-0000-0000-0000-000000000003" 00:23:30.603 ], 00:23:30.603 "product_name": "passthru", 00:23:30.603 "block_size": 512, 00:23:30.603 "num_blocks": 65536, 00:23:30.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:30.603 "assigned_rate_limits": { 00:23:30.603 "rw_ios_per_sec": 0, 00:23:30.603 "rw_mbytes_per_sec": 0, 00:23:30.603 "r_mbytes_per_sec": 0, 00:23:30.603 "w_mbytes_per_sec": 0 00:23:30.603 }, 00:23:30.603 "claimed": true, 00:23:30.603 "claim_type": 
"exclusive_write", 00:23:30.603 "zoned": false, 00:23:30.603 "supported_io_types": { 00:23:30.603 "read": true, 00:23:30.603 "write": true, 00:23:30.603 "unmap": true, 00:23:30.603 "flush": true, 00:23:30.603 "reset": true, 00:23:30.603 "nvme_admin": false, 00:23:30.603 "nvme_io": false, 00:23:30.603 "nvme_io_md": false, 00:23:30.603 "write_zeroes": true, 00:23:30.603 "zcopy": true, 00:23:30.603 "get_zone_info": false, 00:23:30.603 "zone_management": false, 00:23:30.603 "zone_append": false, 00:23:30.603 "compare": false, 00:23:30.603 "compare_and_write": false, 00:23:30.603 "abort": true, 00:23:30.603 "seek_hole": false, 00:23:30.603 "seek_data": false, 00:23:30.603 "copy": true, 00:23:30.603 "nvme_iov_md": false 00:23:30.603 }, 00:23:30.603 "memory_domains": [ 00:23:30.603 { 00:23:30.603 "dma_device_id": "system", 00:23:30.603 "dma_device_type": 1 00:23:30.603 }, 00:23:30.603 { 00:23:30.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.603 "dma_device_type": 2 00:23:30.603 } 00:23:30.603 ], 00:23:30.603 "driver_specific": { 00:23:30.603 "passthru": { 00:23:30.603 "name": "pt3", 00:23:30.603 "base_bdev_name": "malloc3" 00:23:30.603 } 00:23:30.603 } 00:23:30.603 }' 00:23:30.603 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.603 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.603 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.604 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.861 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.861 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.861 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.861 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.861 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.861 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.861 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.119 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:31.119 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:31.119 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:31.119 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:31.378 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:31.378 "name": "pt4", 00:23:31.378 "aliases": [ 00:23:31.378 "00000000-0000-0000-0000-000000000004" 00:23:31.378 ], 00:23:31.378 "product_name": "passthru", 00:23:31.378 "block_size": 512, 00:23:31.378 "num_blocks": 65536, 00:23:31.378 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:31.378 "assigned_rate_limits": { 00:23:31.378 "rw_ios_per_sec": 0, 00:23:31.378 "rw_mbytes_per_sec": 0, 00:23:31.378 "r_mbytes_per_sec": 0, 00:23:31.378 "w_mbytes_per_sec": 0 00:23:31.378 }, 00:23:31.378 "claimed": true, 00:23:31.378 "claim_type": "exclusive_write", 00:23:31.378 "zoned": false, 00:23:31.378 "supported_io_types": { 00:23:31.378 "read": true, 00:23:31.378 "write": true, 00:23:31.378 
"unmap": true, 00:23:31.378 "flush": true, 00:23:31.378 "reset": true, 00:23:31.378 "nvme_admin": false, 00:23:31.378 "nvme_io": false, 00:23:31.378 "nvme_io_md": false, 00:23:31.378 "write_zeroes": true, 00:23:31.378 "zcopy": true, 00:23:31.378 "get_zone_info": false, 00:23:31.378 "zone_management": false, 00:23:31.378 "zone_append": false, 00:23:31.378 "compare": false, 00:23:31.378 "compare_and_write": false, 00:23:31.378 "abort": true, 00:23:31.378 "seek_hole": false, 00:23:31.378 "seek_data": false, 00:23:31.378 "copy": true, 00:23:31.378 "nvme_iov_md": false 00:23:31.378 }, 00:23:31.378 "memory_domains": [ 00:23:31.378 { 00:23:31.378 "dma_device_id": "system", 00:23:31.378 "dma_device_type": 1 00:23:31.378 }, 00:23:31.378 { 00:23:31.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.378 "dma_device_type": 2 00:23:31.378 } 00:23:31.378 ], 00:23:31.378 "driver_specific": { 00:23:31.378 "passthru": { 00:23:31.378 "name": "pt4", 00:23:31.378 "base_bdev_name": "malloc4" 00:23:31.378 } 00:23:31.378 } 00:23:31.378 }' 00:23:31.378 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.378 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.378 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:31.378 11:35:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.378 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.378 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:31.378 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.636 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.636 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:31.636 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.636 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.636 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:31.636 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:31.636 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:31.894 [2024-07-13 11:35:06.530205] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9f9906bc-43ba-4182-a531-1156930ea57c '!=' 9f9906bc-43ba-4182-a531-1156930ea57c ']' 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 136763 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 136763 ']' 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 136763 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:23:31.894 11:35:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 136763 00:23:31.894 killing process with pid 136763 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 136763' 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 136763 00:23:31.894 11:35:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 136763 00:23:31.894 [2024-07-13 11:35:06.565025] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:31.894 [2024-07-13 11:35:06.565085] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:31.894 [2024-07-13 11:35:06.565141] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:31.894 [2024-07-13 11:35:06.565150] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:23:32.152 [2024-07-13 11:35:06.815395] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:33.086 ************************************ 00:23:33.086 END TEST raid_superblock_test 00:23:33.086 ************************************ 00:23:33.086 11:35:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:23:33.086 00:23:33.086 real 0m17.213s 00:23:33.086 user 0m31.594s 00:23:33.086 sys 0m1.807s 00:23:33.086 11:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:33.086 11:35:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.086 11:35:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:33.086 11:35:07 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:23:33.086 11:35:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:33.086 11:35:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:33.086 11:35:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:33.086 ************************************ 00:23:33.086 START TEST raid_read_error_test 00:23:33.086 ************************************ 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.19umTt4cjf 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=137345 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 137345 /var/tmp/spdk-raid.sock 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 137345 ']' 00:23:33.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
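For reference, the raid_io_error_test run that starts here brings up a dedicated bdevperf application in wait mode (-z) on its own RPC socket and only afterwards builds the raid stack over RPC. A minimal sketch of that launch step, using the binary, socket and options recorded in this trace (the exact log capture and pid bookkeeping in bdev_raid.sh may differ; waitforlisten is the helper from autotest_common.sh):

  bdevperf_log=$(mktemp -p /raidtest)                      # e.g. /raidtest/tmp.19umTt4cjf in this run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock        # block until the app serves RPCs on the socket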
00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.086 11:35:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:33.345 [2024-07-13 11:35:07.872151] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:33.345 [2024-07-13 11:35:07.872556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137345 ] 00:23:33.345 [2024-07-13 11:35:08.046500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.604 [2024-07-13 11:35:08.290505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.864 [2024-07-13 11:35:08.482449] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:34.122 11:35:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.122 11:35:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:34.122 11:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:34.122 11:35:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:34.381 BaseBdev1_malloc 00:23:34.381 11:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:34.639 true 00:23:34.639 11:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:34.898 [2024-07-13 11:35:09.488562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:34.898 [2024-07-13 11:35:09.488829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:34.898 [2024-07-13 11:35:09.488899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:34.898 [2024-07-13 11:35:09.489148] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:34.898 [2024-07-13 11:35:09.491452] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:34.898 [2024-07-13 11:35:09.491627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:34.898 BaseBdev1 00:23:34.898 11:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:34.898 11:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:35.157 BaseBdev2_malloc 00:23:35.157 11:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:35.414 true 00:23:35.414 11:35:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:35.414 [2024-07-13 11:35:10.103055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:23:35.414 [2024-07-13 11:35:10.103292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:35.414 [2024-07-13 11:35:10.103432] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:35.414 [2024-07-13 11:35:10.103543] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:35.414 [2024-07-13 11:35:10.105900] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:35.414 [2024-07-13 11:35:10.106064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:35.414 BaseBdev2 00:23:35.414 11:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:35.414 11:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:35.672 BaseBdev3_malloc 00:23:35.672 11:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:35.930 true 00:23:35.930 11:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:36.188 [2024-07-13 11:35:10.692152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:36.188 [2024-07-13 11:35:10.692411] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.188 [2024-07-13 11:35:10.692478] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:36.188 [2024-07-13 11:35:10.692597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.188 [2024-07-13 11:35:10.694843] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:36.188 [2024-07-13 11:35:10.695036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:36.188 BaseBdev3 00:23:36.188 11:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:36.188 11:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:36.188 BaseBdev4_malloc 00:23:36.188 11:35:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:36.446 true 00:23:36.446 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:36.705 [2024-07-13 11:35:11.278329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:36.705 [2024-07-13 11:35:11.278555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:36.705 [2024-07-13 11:35:11.278677] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:36.705 [2024-07-13 11:35:11.278781] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:36.705 [2024-07-13 11:35:11.281178] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:23:36.705 [2024-07-13 11:35:11.281323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:36.705 BaseBdev4 00:23:36.705 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:36.963 [2024-07-13 11:35:11.474420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:36.963 [2024-07-13 11:35:11.476586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:36.963 [2024-07-13 11:35:11.476790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:36.963 [2024-07-13 11:35:11.476901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:36.963 [2024-07-13 11:35:11.477257] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:23:36.963 [2024-07-13 11:35:11.477362] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:36.963 [2024-07-13 11:35:11.477528] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:36.963 [2024-07-13 11:35:11.477991] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:23:36.963 [2024-07-13 11:35:11.478103] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:23:36.963 [2024-07-13 11:35:11.478343] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:36.963 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:36.963 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:36.963 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:36.963 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:36.963 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:36.963 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.963 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.964 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.964 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.964 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.964 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.964 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.964 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:36.964 "name": "raid_bdev1", 00:23:36.964 "uuid": "a00e1872-2474-49f3-9fec-7de2403be6ac", 00:23:36.964 "strip_size_kb": 64, 00:23:36.964 "state": "online", 00:23:36.964 "raid_level": "raid0", 00:23:36.964 "superblock": true, 00:23:36.964 "num_base_bdevs": 4, 00:23:36.964 "num_base_bdevs_discovered": 4, 00:23:36.964 
"num_base_bdevs_operational": 4, 00:23:36.964 "base_bdevs_list": [ 00:23:36.964 { 00:23:36.964 "name": "BaseBdev1", 00:23:36.964 "uuid": "53d52040-ac56-56ab-95bd-98c1c403d6b4", 00:23:36.964 "is_configured": true, 00:23:36.964 "data_offset": 2048, 00:23:36.964 "data_size": 63488 00:23:36.964 }, 00:23:36.964 { 00:23:36.964 "name": "BaseBdev2", 00:23:36.964 "uuid": "1c51274d-ae8a-509c-852f-181f90e519ac", 00:23:36.964 "is_configured": true, 00:23:36.964 "data_offset": 2048, 00:23:36.964 "data_size": 63488 00:23:36.964 }, 00:23:36.964 { 00:23:36.964 "name": "BaseBdev3", 00:23:36.964 "uuid": "66284e91-fbb6-59bb-afe4-52df4985e3c1", 00:23:36.964 "is_configured": true, 00:23:36.964 "data_offset": 2048, 00:23:36.964 "data_size": 63488 00:23:36.964 }, 00:23:36.964 { 00:23:36.964 "name": "BaseBdev4", 00:23:36.964 "uuid": "dea1c95e-409b-5f4e-be43-ec2d59195d4e", 00:23:36.964 "is_configured": true, 00:23:36.964 "data_offset": 2048, 00:23:36.964 "data_size": 63488 00:23:36.964 } 00:23:36.964 ] 00:23:36.964 }' 00:23:36.964 11:35:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:36.964 11:35:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.898 11:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:37.898 11:35:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:37.898 [2024-07-13 11:35:12.351707] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.831 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:23:39.090 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:39.090 "name": "raid_bdev1", 00:23:39.090 "uuid": "a00e1872-2474-49f3-9fec-7de2403be6ac", 00:23:39.090 "strip_size_kb": 64, 00:23:39.090 "state": "online", 00:23:39.090 "raid_level": "raid0", 00:23:39.090 "superblock": true, 00:23:39.090 "num_base_bdevs": 4, 00:23:39.090 "num_base_bdevs_discovered": 4, 00:23:39.090 "num_base_bdevs_operational": 4, 00:23:39.090 "base_bdevs_list": [ 00:23:39.090 { 00:23:39.090 "name": "BaseBdev1", 00:23:39.090 "uuid": "53d52040-ac56-56ab-95bd-98c1c403d6b4", 00:23:39.090 "is_configured": true, 00:23:39.090 "data_offset": 2048, 00:23:39.090 "data_size": 63488 00:23:39.090 }, 00:23:39.090 { 00:23:39.090 "name": "BaseBdev2", 00:23:39.090 "uuid": "1c51274d-ae8a-509c-852f-181f90e519ac", 00:23:39.090 "is_configured": true, 00:23:39.090 "data_offset": 2048, 00:23:39.090 "data_size": 63488 00:23:39.090 }, 00:23:39.090 { 00:23:39.090 "name": "BaseBdev3", 00:23:39.090 "uuid": "66284e91-fbb6-59bb-afe4-52df4985e3c1", 00:23:39.090 "is_configured": true, 00:23:39.090 "data_offset": 2048, 00:23:39.090 "data_size": 63488 00:23:39.090 }, 00:23:39.090 { 00:23:39.090 "name": "BaseBdev4", 00:23:39.090 "uuid": "dea1c95e-409b-5f4e-be43-ec2d59195d4e", 00:23:39.090 "is_configured": true, 00:23:39.090 "data_offset": 2048, 00:23:39.090 "data_size": 63488 00:23:39.090 } 00:23:39.090 ] 00:23:39.090 }' 00:23:39.090 11:35:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:39.090 11:35:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:40.025 [2024-07-13 11:35:14.644194] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:40.025 [2024-07-13 11:35:14.644439] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:40.025 [2024-07-13 11:35:14.647173] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:40.025 [2024-07-13 11:35:14.647367] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:40.025 [2024-07-13 11:35:14.647450] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:40.025 [2024-07-13 11:35:14.647684] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:23:40.025 0 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 137345 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 137345 ']' 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 137345 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137345 00:23:40.025 killing process with pid 137345 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137345' 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 137345 00:23:40.025 11:35:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 137345 00:23:40.025 [2024-07-13 11:35:14.678758] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:40.284 [2024-07-13 11:35:14.909320] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.19umTt4cjf 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:41.660 ************************************ 00:23:41.660 END TEST raid_read_error_test 00:23:41.660 ************************************ 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:23:41.660 00:23:41.660 real 0m8.198s 00:23:41.660 user 0m12.569s 00:23:41.660 sys 0m0.915s 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:41.660 11:35:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.660 11:35:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:41.660 11:35:16 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:23:41.660 11:35:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:41.660 11:35:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.660 11:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:41.660 ************************************ 00:23:41.660 START TEST raid_write_error_test 00:23:41.660 ************************************ 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:41.660 11:35:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.3HopnTrLWD 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=137572 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 137572 /var/tmp/spdk-raid.sock 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 137572 ']' 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:41.660 11:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.661 11:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:41.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:41.661 11:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.661 11:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.661 [2024-07-13 11:35:16.104147] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
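The raid_write_error_test starting here rebuilds the same four-layer stack per base device that the read pass used above (malloc bdev, error-injection bdev, passthru bdev) and then assembles a raid0 volume with a 64 KiB strip and an on-disk superblock. A condensed sketch of that rpc.py sequence as it appears in this trace ($rpc is an illustrative shorthand; the write pass presumably injects "write failure" where the read pass used "read failure"):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3 4; do
      $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"        # 32 MiB backing device, 512-byte blocks
      $rpc bdev_error_create "BaseBdev${i}_malloc"                   # exposes it as EE_BaseBdev${i}_malloc
      $rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s   # -s = with superblock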
00:23:41.661 [2024-07-13 11:35:16.104525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137572 ] 00:23:41.661 [2024-07-13 11:35:16.257406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.919 [2024-07-13 11:35:16.439475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.919 [2024-07-13 11:35:16.626632] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:42.487 11:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.487 11:35:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:42.487 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:42.487 11:35:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:42.487 BaseBdev1_malloc 00:23:42.487 11:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:42.745 true 00:23:42.745 11:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:43.004 [2024-07-13 11:35:17.658494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:43.004 [2024-07-13 11:35:17.658716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.004 [2024-07-13 11:35:17.658869] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:23:43.004 [2024-07-13 11:35:17.658982] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.004 [2024-07-13 11:35:17.661425] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.004 [2024-07-13 11:35:17.661578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:43.004 BaseBdev1 00:23:43.004 11:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:43.004 11:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:43.263 BaseBdev2_malloc 00:23:43.263 11:35:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:43.522 true 00:23:43.522 11:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:43.781 [2024-07-13 11:35:18.290626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:43.781 [2024-07-13 11:35:18.290835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:43.781 [2024-07-13 11:35:18.290984] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:43.781 [2024-07-13 
11:35:18.291103] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:43.781 [2024-07-13 11:35:18.293452] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:43.781 [2024-07-13 11:35:18.293606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:43.781 BaseBdev2 00:23:43.781 11:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:43.781 11:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:44.040 BaseBdev3_malloc 00:23:44.040 11:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:44.040 true 00:23:44.040 11:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:44.299 [2024-07-13 11:35:18.920141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:44.299 [2024-07-13 11:35:18.920344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.299 [2024-07-13 11:35:18.920412] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:23:44.299 [2024-07-13 11:35:18.920653] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.299 [2024-07-13 11:35:18.922920] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.299 [2024-07-13 11:35:18.923098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:44.299 BaseBdev3 00:23:44.299 11:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:44.299 11:35:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:44.558 BaseBdev4_malloc 00:23:44.558 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:44.817 true 00:23:44.817 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:44.817 [2024-07-13 11:35:19.516705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:44.817 [2024-07-13 11:35:19.516912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:44.817 [2024-07-13 11:35:19.516978] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:44.817 [2024-07-13 11:35:19.517210] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:44.817 [2024-07-13 11:35:19.519460] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:44.817 [2024-07-13 11:35:19.519658] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:44.817 BaseBdev4 00:23:44.817 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:45.076 [2024-07-13 11:35:19.704773] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:45.076 [2024-07-13 11:35:19.706501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.076 [2024-07-13 11:35:19.706704] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:45.076 [2024-07-13 11:35:19.706896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:45.076 [2024-07-13 11:35:19.707271] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:23:45.076 [2024-07-13 11:35:19.707404] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:45.076 [2024-07-13 11:35:19.707567] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:45.076 [2024-07-13 11:35:19.707948] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:23:45.076 [2024-07-13 11:35:19.708105] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:23:45.076 [2024-07-13 11:35:19.708366] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.076 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:45.335 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:45.335 "name": "raid_bdev1", 00:23:45.335 "uuid": "22d7c99d-9de7-4065-8dba-796c9dd92eb1", 00:23:45.335 "strip_size_kb": 64, 00:23:45.335 "state": "online", 00:23:45.335 "raid_level": "raid0", 00:23:45.335 "superblock": true, 00:23:45.335 "num_base_bdevs": 4, 00:23:45.335 "num_base_bdevs_discovered": 4, 00:23:45.335 "num_base_bdevs_operational": 4, 00:23:45.335 "base_bdevs_list": [ 00:23:45.335 { 00:23:45.335 "name": "BaseBdev1", 00:23:45.335 "uuid": "302ade57-2ee0-5f7a-979e-73aee13f90d3", 00:23:45.335 "is_configured": true, 00:23:45.335 "data_offset": 2048, 00:23:45.335 "data_size": 63488 00:23:45.335 }, 00:23:45.335 { 
00:23:45.335 "name": "BaseBdev2", 00:23:45.335 "uuid": "8c28973f-2f80-57f3-93f2-f9d084a1f5b2", 00:23:45.335 "is_configured": true, 00:23:45.335 "data_offset": 2048, 00:23:45.335 "data_size": 63488 00:23:45.335 }, 00:23:45.335 { 00:23:45.335 "name": "BaseBdev3", 00:23:45.335 "uuid": "dfb4b818-acfa-56c1-9131-21fb73a913c2", 00:23:45.335 "is_configured": true, 00:23:45.335 "data_offset": 2048, 00:23:45.335 "data_size": 63488 00:23:45.335 }, 00:23:45.335 { 00:23:45.335 "name": "BaseBdev4", 00:23:45.335 "uuid": "f387aa14-3312-5812-9dd2-8d20e29cabe1", 00:23:45.335 "is_configured": true, 00:23:45.335 "data_offset": 2048, 00:23:45.335 "data_size": 63488 00:23:45.335 } 00:23:45.335 ] 00:23:45.335 }' 00:23:45.335 11:35:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:45.335 11:35:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.903 11:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:45.903 11:35:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:45.903 [2024-07-13 11:35:20.557944] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:23:46.839 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.098 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.357 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:47.357 "name": "raid_bdev1", 00:23:47.357 "uuid": "22d7c99d-9de7-4065-8dba-796c9dd92eb1", 00:23:47.357 "strip_size_kb": 64, 00:23:47.357 "state": "online", 00:23:47.357 
"raid_level": "raid0", 00:23:47.357 "superblock": true, 00:23:47.357 "num_base_bdevs": 4, 00:23:47.357 "num_base_bdevs_discovered": 4, 00:23:47.357 "num_base_bdevs_operational": 4, 00:23:47.357 "base_bdevs_list": [ 00:23:47.357 { 00:23:47.357 "name": "BaseBdev1", 00:23:47.357 "uuid": "302ade57-2ee0-5f7a-979e-73aee13f90d3", 00:23:47.357 "is_configured": true, 00:23:47.357 "data_offset": 2048, 00:23:47.357 "data_size": 63488 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "name": "BaseBdev2", 00:23:47.357 "uuid": "8c28973f-2f80-57f3-93f2-f9d084a1f5b2", 00:23:47.357 "is_configured": true, 00:23:47.357 "data_offset": 2048, 00:23:47.357 "data_size": 63488 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "name": "BaseBdev3", 00:23:47.357 "uuid": "dfb4b818-acfa-56c1-9131-21fb73a913c2", 00:23:47.357 "is_configured": true, 00:23:47.357 "data_offset": 2048, 00:23:47.357 "data_size": 63488 00:23:47.357 }, 00:23:47.357 { 00:23:47.357 "name": "BaseBdev4", 00:23:47.357 "uuid": "f387aa14-3312-5812-9dd2-8d20e29cabe1", 00:23:47.357 "is_configured": true, 00:23:47.357 "data_offset": 2048, 00:23:47.357 "data_size": 63488 00:23:47.357 } 00:23:47.357 ] 00:23:47.357 }' 00:23:47.357 11:35:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:47.357 11:35:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.292 11:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:48.292 [2024-07-13 11:35:22.975259] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:48.292 [2024-07-13 11:35:22.975614] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:48.292 [2024-07-13 11:35:22.978211] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.292 [2024-07-13 11:35:22.978374] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:48.292 [2024-07-13 11:35:22.978457] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:48.292 [2024-07-13 11:35:22.978694] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:23:48.292 0 00:23:48.292 11:35:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 137572 00:23:48.292 11:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 137572 ']' 00:23:48.292 11:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 137572 00:23:48.292 11:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:23:48.292 11:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.292 11:35:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137572 00:23:48.292 killing process with pid 137572 00:23:48.292 11:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:48.292 11:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:48.292 11:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137572' 00:23:48.292 11:35:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 137572 00:23:48.292 11:35:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 137572 00:23:48.292 [2024-07-13 11:35:23.011931] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:48.574 [2024-07-13 11:35:23.236425] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.3HopnTrLWD 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:49.952 ************************************ 00:23:49.952 END TEST raid_write_error_test 00:23:49.952 ************************************ 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:23:49.952 00:23:49.952 real 0m8.266s 00:23:49.952 user 0m12.712s 00:23:49.952 sys 0m0.951s 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:49.952 11:35:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.952 11:35:24 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:49.952 11:35:24 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:23:49.952 11:35:24 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:23:49.952 11:35:24 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:49.952 11:35:24 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:49.952 11:35:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:49.952 ************************************ 00:23:49.952 START TEST raid_state_function_test 00:23:49.952 ************************************ 00:23:49.952 11:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:23:49.952 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:49.952 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:49.952 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:49.953 11:35:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=137791 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:49.953 Process raid pid: 137791 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 137791' 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 137791 /var/tmp/spdk-raid.sock 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 137791 ']' 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:49.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
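Once the bdev_svc app below finishes starting, raid_state_function_test exercises the raid bdev's state machine rather than its I/O path: Existed_Raid is created first against base bdevs that do not exist yet, so it must sit in the "configuring" state until all four are registered. Condensed to the RPC calls that appear in the following lines (the trailing .state and .num_base_bdevs_discovered jq selectors are illustrative assumptions; the script's own helper parses the same JSON):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# create the raid before any of its base bdevs exist -> it stays in "configuring"
$rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'                      # configuring
# register base bdevs one at a time; the discovery count climbs and the state flips to "online" after the fourth
$rpc bdev_malloc_create 32 512 -b BaseBdev1
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'  # 1

The test also deletes and re-creates Existed_Raid between steps, which is why the bdev_raid_delete / bdev_raid_create pair repeats throughout the log below.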
00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.953 11:35:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.953 [2024-07-13 11:35:24.425343] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:49.953 [2024-07-13 11:35:24.425728] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.953 [2024-07-13 11:35:24.570774] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.212 [2024-07-13 11:35:24.755635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.212 [2024-07-13 11:35:24.942559] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:50.780 11:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.780 11:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:23:50.780 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:51.039 [2024-07-13 11:35:25.549196] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:51.039 [2024-07-13 11:35:25.549460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:51.039 [2024-07-13 11:35:25.549581] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:51.039 [2024-07-13 11:35:25.549639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:51.039 [2024-07-13 11:35:25.549726] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:51.039 [2024-07-13 11:35:25.549776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:51.039 [2024-07-13 11:35:25.549802] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:51.039 [2024-07-13 11:35:25.549908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:51.039 11:35:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.039 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:51.297 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:51.297 "name": "Existed_Raid", 00:23:51.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.297 "strip_size_kb": 64, 00:23:51.297 "state": "configuring", 00:23:51.297 "raid_level": "concat", 00:23:51.297 "superblock": false, 00:23:51.297 "num_base_bdevs": 4, 00:23:51.297 "num_base_bdevs_discovered": 0, 00:23:51.297 "num_base_bdevs_operational": 4, 00:23:51.297 "base_bdevs_list": [ 00:23:51.297 { 00:23:51.297 "name": "BaseBdev1", 00:23:51.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.297 "is_configured": false, 00:23:51.297 "data_offset": 0, 00:23:51.298 "data_size": 0 00:23:51.298 }, 00:23:51.298 { 00:23:51.298 "name": "BaseBdev2", 00:23:51.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.298 "is_configured": false, 00:23:51.298 "data_offset": 0, 00:23:51.298 "data_size": 0 00:23:51.298 }, 00:23:51.298 { 00:23:51.298 "name": "BaseBdev3", 00:23:51.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.298 "is_configured": false, 00:23:51.298 "data_offset": 0, 00:23:51.298 "data_size": 0 00:23:51.298 }, 00:23:51.298 { 00:23:51.298 "name": "BaseBdev4", 00:23:51.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:51.298 "is_configured": false, 00:23:51.298 "data_offset": 0, 00:23:51.298 "data_size": 0 00:23:51.298 } 00:23:51.298 ] 00:23:51.298 }' 00:23:51.298 11:35:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:51.298 11:35:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:51.865 11:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:52.124 [2024-07-13 11:35:26.665280] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:52.124 [2024-07-13 11:35:26.665417] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:52.124 11:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:52.124 [2024-07-13 11:35:26.857323] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:52.124 [2024-07-13 11:35:26.857480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:52.124 [2024-07-13 11:35:26.857573] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:52.124 [2024-07-13 11:35:26.857648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:52.124 [2024-07-13 11:35:26.857881] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:52.124 [2024-07-13 11:35:26.857947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:52.124 [2024-07-13 
11:35:26.857974] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:52.124 [2024-07-13 11:35:26.858105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:52.124 11:35:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:52.383 [2024-07-13 11:35:27.078420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:52.383 BaseBdev1 00:23:52.383 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:52.383 11:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:52.383 11:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:52.383 11:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:52.383 11:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:52.383 11:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:52.383 11:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:52.642 11:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:52.901 [ 00:23:52.901 { 00:23:52.901 "name": "BaseBdev1", 00:23:52.901 "aliases": [ 00:23:52.901 "78944325-7b8a-46c5-a2e3-0f1be5dc6c99" 00:23:52.901 ], 00:23:52.901 "product_name": "Malloc disk", 00:23:52.901 "block_size": 512, 00:23:52.901 "num_blocks": 65536, 00:23:52.901 "uuid": "78944325-7b8a-46c5-a2e3-0f1be5dc6c99", 00:23:52.901 "assigned_rate_limits": { 00:23:52.901 "rw_ios_per_sec": 0, 00:23:52.901 "rw_mbytes_per_sec": 0, 00:23:52.901 "r_mbytes_per_sec": 0, 00:23:52.901 "w_mbytes_per_sec": 0 00:23:52.901 }, 00:23:52.901 "claimed": true, 00:23:52.901 "claim_type": "exclusive_write", 00:23:52.901 "zoned": false, 00:23:52.901 "supported_io_types": { 00:23:52.901 "read": true, 00:23:52.901 "write": true, 00:23:52.901 "unmap": true, 00:23:52.901 "flush": true, 00:23:52.901 "reset": true, 00:23:52.901 "nvme_admin": false, 00:23:52.901 "nvme_io": false, 00:23:52.901 "nvme_io_md": false, 00:23:52.901 "write_zeroes": true, 00:23:52.901 "zcopy": true, 00:23:52.901 "get_zone_info": false, 00:23:52.901 "zone_management": false, 00:23:52.901 "zone_append": false, 00:23:52.901 "compare": false, 00:23:52.901 "compare_and_write": false, 00:23:52.901 "abort": true, 00:23:52.901 "seek_hole": false, 00:23:52.901 "seek_data": false, 00:23:52.901 "copy": true, 00:23:52.901 "nvme_iov_md": false 00:23:52.901 }, 00:23:52.901 "memory_domains": [ 00:23:52.901 { 00:23:52.901 "dma_device_id": "system", 00:23:52.901 "dma_device_type": 1 00:23:52.901 }, 00:23:52.901 { 00:23:52.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.901 "dma_device_type": 2 00:23:52.901 } 00:23:52.901 ], 00:23:52.901 "driver_specific": {} 00:23:52.901 } 00:23:52.901 ] 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
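The verify_raid_bdev_state calls in this test (here with the expectation "configuring concat 64 4") all reduce to the bdev_raid_get_bdevs + jq lookup visible in the surrounding lines, followed by comparisons of the returned fields against the expected values. A rough sketch of that check, assuming shell comparisons that mirror the fields shown in the JSON dumps (the helper's exact implementation is not part of this excerpt):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state' <<< "$info") == "configuring" ]]
[[ $(jq -r '.raid_level' <<< "$info") == "concat" ]]
[[ $(jq -r '.strip_size_kb' <<< "$info") == "64" ]]
[[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == "4" ]]

The read and write error tests earlier in this log use the same verification pattern, which is why the same raid_bdev_info JSON dump recurs after each configuration step.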
00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.901 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:53.160 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:53.160 "name": "Existed_Raid", 00:23:53.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.160 "strip_size_kb": 64, 00:23:53.160 "state": "configuring", 00:23:53.160 "raid_level": "concat", 00:23:53.160 "superblock": false, 00:23:53.160 "num_base_bdevs": 4, 00:23:53.160 "num_base_bdevs_discovered": 1, 00:23:53.160 "num_base_bdevs_operational": 4, 00:23:53.160 "base_bdevs_list": [ 00:23:53.160 { 00:23:53.160 "name": "BaseBdev1", 00:23:53.160 "uuid": "78944325-7b8a-46c5-a2e3-0f1be5dc6c99", 00:23:53.160 "is_configured": true, 00:23:53.160 "data_offset": 0, 00:23:53.160 "data_size": 65536 00:23:53.160 }, 00:23:53.160 { 00:23:53.160 "name": "BaseBdev2", 00:23:53.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.160 "is_configured": false, 00:23:53.160 "data_offset": 0, 00:23:53.160 "data_size": 0 00:23:53.160 }, 00:23:53.160 { 00:23:53.160 "name": "BaseBdev3", 00:23:53.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.160 "is_configured": false, 00:23:53.160 "data_offset": 0, 00:23:53.160 "data_size": 0 00:23:53.160 }, 00:23:53.160 { 00:23:53.160 "name": "BaseBdev4", 00:23:53.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.160 "is_configured": false, 00:23:53.160 "data_offset": 0, 00:23:53.160 "data_size": 0 00:23:53.160 } 00:23:53.160 ] 00:23:53.160 }' 00:23:53.160 11:35:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:53.160 11:35:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.728 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:53.987 [2024-07-13 11:35:28.478707] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:53.987 [2024-07-13 11:35:28.478772] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:53.987 [2024-07-13 11:35:28.662775] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:53.987 [2024-07-13 11:35:28.665841] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:53.987 [2024-07-13 11:35:28.666009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:53.987 [2024-07-13 11:35:28.666102] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:53.987 [2024-07-13 11:35:28.666224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:53.987 [2024-07-13 11:35:28.666312] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:53.987 [2024-07-13 11:35:28.666450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.987 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.246 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.246 "name": "Existed_Raid", 00:23:54.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.246 "strip_size_kb": 64, 00:23:54.246 "state": "configuring", 00:23:54.246 "raid_level": "concat", 00:23:54.246 "superblock": false, 00:23:54.246 "num_base_bdevs": 4, 00:23:54.246 "num_base_bdevs_discovered": 1, 00:23:54.246 "num_base_bdevs_operational": 4, 00:23:54.246 "base_bdevs_list": [ 00:23:54.246 { 00:23:54.246 "name": "BaseBdev1", 00:23:54.246 "uuid": "78944325-7b8a-46c5-a2e3-0f1be5dc6c99", 00:23:54.246 "is_configured": true, 00:23:54.246 "data_offset": 0, 00:23:54.246 "data_size": 65536 00:23:54.246 }, 00:23:54.246 { 00:23:54.246 "name": "BaseBdev2", 00:23:54.246 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:54.246 "is_configured": false, 00:23:54.246 "data_offset": 0, 00:23:54.246 "data_size": 0 00:23:54.246 }, 00:23:54.246 { 00:23:54.246 "name": "BaseBdev3", 00:23:54.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.246 "is_configured": false, 00:23:54.246 "data_offset": 0, 00:23:54.246 "data_size": 0 00:23:54.246 }, 00:23:54.246 { 00:23:54.246 "name": "BaseBdev4", 00:23:54.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.246 "is_configured": false, 00:23:54.246 "data_offset": 0, 00:23:54.246 "data_size": 0 00:23:54.246 } 00:23:54.246 ] 00:23:54.246 }' 00:23:54.246 11:35:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.246 11:35:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.814 11:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:55.072 [2024-07-13 11:35:29.758358] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:55.072 BaseBdev2 00:23:55.072 11:35:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:55.072 11:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:55.072 11:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:55.072 11:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:55.072 11:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:55.072 11:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:55.072 11:35:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:55.331 11:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:55.589 [ 00:23:55.589 { 00:23:55.589 "name": "BaseBdev2", 00:23:55.589 "aliases": [ 00:23:55.590 "4df5ff56-6be9-4672-9705-821926ea1d46" 00:23:55.590 ], 00:23:55.590 "product_name": "Malloc disk", 00:23:55.590 "block_size": 512, 00:23:55.590 "num_blocks": 65536, 00:23:55.590 "uuid": "4df5ff56-6be9-4672-9705-821926ea1d46", 00:23:55.590 "assigned_rate_limits": { 00:23:55.590 "rw_ios_per_sec": 0, 00:23:55.590 "rw_mbytes_per_sec": 0, 00:23:55.590 "r_mbytes_per_sec": 0, 00:23:55.590 "w_mbytes_per_sec": 0 00:23:55.590 }, 00:23:55.590 "claimed": true, 00:23:55.590 "claim_type": "exclusive_write", 00:23:55.590 "zoned": false, 00:23:55.590 "supported_io_types": { 00:23:55.590 "read": true, 00:23:55.590 "write": true, 00:23:55.590 "unmap": true, 00:23:55.590 "flush": true, 00:23:55.590 "reset": true, 00:23:55.590 "nvme_admin": false, 00:23:55.590 "nvme_io": false, 00:23:55.590 "nvme_io_md": false, 00:23:55.590 "write_zeroes": true, 00:23:55.590 "zcopy": true, 00:23:55.590 "get_zone_info": false, 00:23:55.590 "zone_management": false, 00:23:55.590 "zone_append": false, 00:23:55.590 "compare": false, 00:23:55.590 "compare_and_write": false, 00:23:55.590 "abort": true, 00:23:55.590 "seek_hole": false, 00:23:55.590 "seek_data": false, 00:23:55.590 "copy": true, 00:23:55.590 "nvme_iov_md": false 00:23:55.590 }, 00:23:55.590 "memory_domains": [ 
00:23:55.590 { 00:23:55.590 "dma_device_id": "system", 00:23:55.590 "dma_device_type": 1 00:23:55.590 }, 00:23:55.590 { 00:23:55.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:55.590 "dma_device_type": 2 00:23:55.590 } 00:23:55.590 ], 00:23:55.590 "driver_specific": {} 00:23:55.590 } 00:23:55.590 ] 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.590 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.848 11:35:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.848 "name": "Existed_Raid", 00:23:55.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.848 "strip_size_kb": 64, 00:23:55.848 "state": "configuring", 00:23:55.848 "raid_level": "concat", 00:23:55.848 "superblock": false, 00:23:55.848 "num_base_bdevs": 4, 00:23:55.848 "num_base_bdevs_discovered": 2, 00:23:55.848 "num_base_bdevs_operational": 4, 00:23:55.848 "base_bdevs_list": [ 00:23:55.848 { 00:23:55.848 "name": "BaseBdev1", 00:23:55.848 "uuid": "78944325-7b8a-46c5-a2e3-0f1be5dc6c99", 00:23:55.848 "is_configured": true, 00:23:55.848 "data_offset": 0, 00:23:55.848 "data_size": 65536 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "name": "BaseBdev2", 00:23:55.848 "uuid": "4df5ff56-6be9-4672-9705-821926ea1d46", 00:23:55.848 "is_configured": true, 00:23:55.848 "data_offset": 0, 00:23:55.848 "data_size": 65536 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "name": "BaseBdev3", 00:23:55.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.848 "is_configured": false, 00:23:55.848 "data_offset": 0, 00:23:55.848 "data_size": 0 00:23:55.848 }, 00:23:55.848 { 00:23:55.848 "name": "BaseBdev4", 00:23:55.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.848 "is_configured": false, 00:23:55.848 "data_offset": 0, 00:23:55.848 "data_size": 0 00:23:55.848 } 00:23:55.848 ] 00:23:55.848 }' 00:23:55.848 11:35:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.848 11:35:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.784 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:56.784 [2024-07-13 11:35:31.385732] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:56.784 BaseBdev3 00:23:56.784 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:56.784 11:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:56.784 11:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:56.784 11:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:56.784 11:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:56.784 11:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:56.784 11:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:57.042 [ 00:23:57.042 { 00:23:57.042 "name": "BaseBdev3", 00:23:57.042 "aliases": [ 00:23:57.042 "8809f6e3-c5f0-4352-bd6b-15e619b8f938" 00:23:57.042 ], 00:23:57.042 "product_name": "Malloc disk", 00:23:57.042 "block_size": 512, 00:23:57.042 "num_blocks": 65536, 00:23:57.042 "uuid": "8809f6e3-c5f0-4352-bd6b-15e619b8f938", 00:23:57.042 "assigned_rate_limits": { 00:23:57.042 "rw_ios_per_sec": 0, 00:23:57.042 "rw_mbytes_per_sec": 0, 00:23:57.042 "r_mbytes_per_sec": 0, 00:23:57.042 "w_mbytes_per_sec": 0 00:23:57.042 }, 00:23:57.042 "claimed": true, 00:23:57.042 "claim_type": "exclusive_write", 00:23:57.042 "zoned": false, 00:23:57.042 "supported_io_types": { 00:23:57.042 "read": true, 00:23:57.042 "write": true, 00:23:57.042 "unmap": true, 00:23:57.042 "flush": true, 00:23:57.042 "reset": true, 00:23:57.042 "nvme_admin": false, 00:23:57.042 "nvme_io": false, 00:23:57.042 "nvme_io_md": false, 00:23:57.042 "write_zeroes": true, 00:23:57.042 "zcopy": true, 00:23:57.042 "get_zone_info": false, 00:23:57.042 "zone_management": false, 00:23:57.042 "zone_append": false, 00:23:57.042 "compare": false, 00:23:57.042 "compare_and_write": false, 00:23:57.042 "abort": true, 00:23:57.042 "seek_hole": false, 00:23:57.042 "seek_data": false, 00:23:57.042 "copy": true, 00:23:57.042 "nvme_iov_md": false 00:23:57.042 }, 00:23:57.042 "memory_domains": [ 00:23:57.042 { 00:23:57.042 "dma_device_id": "system", 00:23:57.042 "dma_device_type": 1 00:23:57.042 }, 00:23:57.042 { 00:23:57.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.042 "dma_device_type": 2 00:23:57.042 } 00:23:57.042 ], 00:23:57.042 "driver_specific": {} 00:23:57.042 } 00:23:57.042 ] 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs 
)) 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.042 11:35:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.300 11:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.300 "name": "Existed_Raid", 00:23:57.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.300 "strip_size_kb": 64, 00:23:57.300 "state": "configuring", 00:23:57.300 "raid_level": "concat", 00:23:57.300 "superblock": false, 00:23:57.300 "num_base_bdevs": 4, 00:23:57.300 "num_base_bdevs_discovered": 3, 00:23:57.300 "num_base_bdevs_operational": 4, 00:23:57.300 "base_bdevs_list": [ 00:23:57.300 { 00:23:57.300 "name": "BaseBdev1", 00:23:57.300 "uuid": "78944325-7b8a-46c5-a2e3-0f1be5dc6c99", 00:23:57.300 "is_configured": true, 00:23:57.300 "data_offset": 0, 00:23:57.300 "data_size": 65536 00:23:57.300 }, 00:23:57.300 { 00:23:57.300 "name": "BaseBdev2", 00:23:57.300 "uuid": "4df5ff56-6be9-4672-9705-821926ea1d46", 00:23:57.300 "is_configured": true, 00:23:57.300 "data_offset": 0, 00:23:57.300 "data_size": 65536 00:23:57.300 }, 00:23:57.300 { 00:23:57.300 "name": "BaseBdev3", 00:23:57.300 "uuid": "8809f6e3-c5f0-4352-bd6b-15e619b8f938", 00:23:57.300 "is_configured": true, 00:23:57.300 "data_offset": 0, 00:23:57.300 "data_size": 65536 00:23:57.300 }, 00:23:57.300 { 00:23:57.300 "name": "BaseBdev4", 00:23:57.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.300 "is_configured": false, 00:23:57.300 "data_offset": 0, 00:23:57.300 "data_size": 0 00:23:57.300 } 00:23:57.300 ] 00:23:57.300 }' 00:23:57.300 11:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.300 11:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.232 11:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:58.232 [2024-07-13 11:35:32.933062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:58.232 [2024-07-13 11:35:32.933258] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000007580 00:23:58.232 [2024-07-13 11:35:32.933297] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:58.232 [2024-07-13 11:35:32.933514] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:58.232 [2024-07-13 11:35:32.933978] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:23:58.232 [2024-07-13 11:35:32.934099] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:23:58.232 [2024-07-13 11:35:32.934425] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:58.232 BaseBdev4 00:23:58.232 11:35:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:58.232 11:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:58.232 11:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:58.232 11:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:58.232 11:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:58.232 11:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:58.232 11:35:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:58.489 11:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:58.746 [ 00:23:58.746 { 00:23:58.746 "name": "BaseBdev4", 00:23:58.746 "aliases": [ 00:23:58.746 "dfa42fa4-abd5-481e-87b3-78566b616b9a" 00:23:58.746 ], 00:23:58.746 "product_name": "Malloc disk", 00:23:58.746 "block_size": 512, 00:23:58.746 "num_blocks": 65536, 00:23:58.746 "uuid": "dfa42fa4-abd5-481e-87b3-78566b616b9a", 00:23:58.746 "assigned_rate_limits": { 00:23:58.746 "rw_ios_per_sec": 0, 00:23:58.746 "rw_mbytes_per_sec": 0, 00:23:58.746 "r_mbytes_per_sec": 0, 00:23:58.746 "w_mbytes_per_sec": 0 00:23:58.746 }, 00:23:58.746 "claimed": true, 00:23:58.746 "claim_type": "exclusive_write", 00:23:58.746 "zoned": false, 00:23:58.746 "supported_io_types": { 00:23:58.746 "read": true, 00:23:58.746 "write": true, 00:23:58.746 "unmap": true, 00:23:58.746 "flush": true, 00:23:58.746 "reset": true, 00:23:58.746 "nvme_admin": false, 00:23:58.746 "nvme_io": false, 00:23:58.746 "nvme_io_md": false, 00:23:58.746 "write_zeroes": true, 00:23:58.746 "zcopy": true, 00:23:58.746 "get_zone_info": false, 00:23:58.746 "zone_management": false, 00:23:58.746 "zone_append": false, 00:23:58.746 "compare": false, 00:23:58.746 "compare_and_write": false, 00:23:58.746 "abort": true, 00:23:58.746 "seek_hole": false, 00:23:58.746 "seek_data": false, 00:23:58.746 "copy": true, 00:23:58.746 "nvme_iov_md": false 00:23:58.746 }, 00:23:58.746 "memory_domains": [ 00:23:58.746 { 00:23:58.746 "dma_device_id": "system", 00:23:58.746 "dma_device_type": 1 00:23:58.746 }, 00:23:58.746 { 00:23:58.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.746 "dma_device_type": 2 00:23:58.746 } 00:23:58.746 ], 00:23:58.746 "driver_specific": {} 00:23:58.746 } 00:23:58.746 ] 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:58.746 11:35:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.746 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.004 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.004 "name": "Existed_Raid", 00:23:59.004 "uuid": "5a8da45e-6ad4-496e-a8f7-07e80e06d4be", 00:23:59.004 "strip_size_kb": 64, 00:23:59.004 "state": "online", 00:23:59.004 "raid_level": "concat", 00:23:59.004 "superblock": false, 00:23:59.004 "num_base_bdevs": 4, 00:23:59.004 "num_base_bdevs_discovered": 4, 00:23:59.004 "num_base_bdevs_operational": 4, 00:23:59.004 "base_bdevs_list": [ 00:23:59.004 { 00:23:59.004 "name": "BaseBdev1", 00:23:59.004 "uuid": "78944325-7b8a-46c5-a2e3-0f1be5dc6c99", 00:23:59.004 "is_configured": true, 00:23:59.004 "data_offset": 0, 00:23:59.004 "data_size": 65536 00:23:59.004 }, 00:23:59.004 { 00:23:59.004 "name": "BaseBdev2", 00:23:59.004 "uuid": "4df5ff56-6be9-4672-9705-821926ea1d46", 00:23:59.004 "is_configured": true, 00:23:59.004 "data_offset": 0, 00:23:59.004 "data_size": 65536 00:23:59.004 }, 00:23:59.004 { 00:23:59.004 "name": "BaseBdev3", 00:23:59.004 "uuid": "8809f6e3-c5f0-4352-bd6b-15e619b8f938", 00:23:59.004 "is_configured": true, 00:23:59.004 "data_offset": 0, 00:23:59.004 "data_size": 65536 00:23:59.004 }, 00:23:59.004 { 00:23:59.004 "name": "BaseBdev4", 00:23:59.004 "uuid": "dfa42fa4-abd5-481e-87b3-78566b616b9a", 00:23:59.004 "is_configured": true, 00:23:59.004 "data_offset": 0, 00:23:59.004 "data_size": 65536 00:23:59.004 } 00:23:59.004 ] 00:23:59.004 }' 00:23:59.004 11:35:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:59.004 11:35:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.569 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:59.569 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:59.569 11:35:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:59.569 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:59.569 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:59.569 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:59.569 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:59.569 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:59.827 [2024-07-13 11:35:34.498913] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.828 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:59.828 "name": "Existed_Raid", 00:23:59.828 "aliases": [ 00:23:59.828 "5a8da45e-6ad4-496e-a8f7-07e80e06d4be" 00:23:59.828 ], 00:23:59.828 "product_name": "Raid Volume", 00:23:59.828 "block_size": 512, 00:23:59.828 "num_blocks": 262144, 00:23:59.828 "uuid": "5a8da45e-6ad4-496e-a8f7-07e80e06d4be", 00:23:59.828 "assigned_rate_limits": { 00:23:59.828 "rw_ios_per_sec": 0, 00:23:59.828 "rw_mbytes_per_sec": 0, 00:23:59.828 "r_mbytes_per_sec": 0, 00:23:59.828 "w_mbytes_per_sec": 0 00:23:59.828 }, 00:23:59.828 "claimed": false, 00:23:59.828 "zoned": false, 00:23:59.828 "supported_io_types": { 00:23:59.828 "read": true, 00:23:59.828 "write": true, 00:23:59.828 "unmap": true, 00:23:59.828 "flush": true, 00:23:59.828 "reset": true, 00:23:59.828 "nvme_admin": false, 00:23:59.828 "nvme_io": false, 00:23:59.828 "nvme_io_md": false, 00:23:59.828 "write_zeroes": true, 00:23:59.828 "zcopy": false, 00:23:59.828 "get_zone_info": false, 00:23:59.828 "zone_management": false, 00:23:59.828 "zone_append": false, 00:23:59.828 "compare": false, 00:23:59.828 "compare_and_write": false, 00:23:59.828 "abort": false, 00:23:59.828 "seek_hole": false, 00:23:59.828 "seek_data": false, 00:23:59.828 "copy": false, 00:23:59.828 "nvme_iov_md": false 00:23:59.828 }, 00:23:59.828 "memory_domains": [ 00:23:59.828 { 00:23:59.828 "dma_device_id": "system", 00:23:59.828 "dma_device_type": 1 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.828 "dma_device_type": 2 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "dma_device_id": "system", 00:23:59.828 "dma_device_type": 1 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.828 "dma_device_type": 2 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "dma_device_id": "system", 00:23:59.828 "dma_device_type": 1 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.828 "dma_device_type": 2 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "dma_device_id": "system", 00:23:59.828 "dma_device_type": 1 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:59.828 "dma_device_type": 2 00:23:59.828 } 00:23:59.828 ], 00:23:59.828 "driver_specific": { 00:23:59.828 "raid": { 00:23:59.828 "uuid": "5a8da45e-6ad4-496e-a8f7-07e80e06d4be", 00:23:59.828 "strip_size_kb": 64, 00:23:59.828 "state": "online", 00:23:59.828 "raid_level": "concat", 00:23:59.828 "superblock": false, 00:23:59.828 "num_base_bdevs": 4, 00:23:59.828 "num_base_bdevs_discovered": 4, 00:23:59.828 "num_base_bdevs_operational": 4, 00:23:59.828 "base_bdevs_list": [ 00:23:59.828 { 
00:23:59.828 "name": "BaseBdev1", 00:23:59.828 "uuid": "78944325-7b8a-46c5-a2e3-0f1be5dc6c99", 00:23:59.828 "is_configured": true, 00:23:59.828 "data_offset": 0, 00:23:59.828 "data_size": 65536 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "name": "BaseBdev2", 00:23:59.828 "uuid": "4df5ff56-6be9-4672-9705-821926ea1d46", 00:23:59.828 "is_configured": true, 00:23:59.828 "data_offset": 0, 00:23:59.828 "data_size": 65536 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "name": "BaseBdev3", 00:23:59.828 "uuid": "8809f6e3-c5f0-4352-bd6b-15e619b8f938", 00:23:59.828 "is_configured": true, 00:23:59.828 "data_offset": 0, 00:23:59.828 "data_size": 65536 00:23:59.828 }, 00:23:59.828 { 00:23:59.828 "name": "BaseBdev4", 00:23:59.828 "uuid": "dfa42fa4-abd5-481e-87b3-78566b616b9a", 00:23:59.828 "is_configured": true, 00:23:59.828 "data_offset": 0, 00:23:59.828 "data_size": 65536 00:23:59.828 } 00:23:59.828 ] 00:23:59.828 } 00:23:59.828 } 00:23:59.828 }' 00:23:59.828 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:59.828 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:59.828 BaseBdev2 00:23:59.828 BaseBdev3 00:23:59.828 BaseBdev4' 00:23:59.828 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:59.828 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:59.828 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:00.086 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:00.086 "name": "BaseBdev1", 00:24:00.086 "aliases": [ 00:24:00.086 "78944325-7b8a-46c5-a2e3-0f1be5dc6c99" 00:24:00.086 ], 00:24:00.086 "product_name": "Malloc disk", 00:24:00.086 "block_size": 512, 00:24:00.086 "num_blocks": 65536, 00:24:00.086 "uuid": "78944325-7b8a-46c5-a2e3-0f1be5dc6c99", 00:24:00.086 "assigned_rate_limits": { 00:24:00.086 "rw_ios_per_sec": 0, 00:24:00.086 "rw_mbytes_per_sec": 0, 00:24:00.086 "r_mbytes_per_sec": 0, 00:24:00.086 "w_mbytes_per_sec": 0 00:24:00.086 }, 00:24:00.086 "claimed": true, 00:24:00.086 "claim_type": "exclusive_write", 00:24:00.086 "zoned": false, 00:24:00.086 "supported_io_types": { 00:24:00.086 "read": true, 00:24:00.086 "write": true, 00:24:00.086 "unmap": true, 00:24:00.086 "flush": true, 00:24:00.086 "reset": true, 00:24:00.086 "nvme_admin": false, 00:24:00.086 "nvme_io": false, 00:24:00.086 "nvme_io_md": false, 00:24:00.086 "write_zeroes": true, 00:24:00.086 "zcopy": true, 00:24:00.086 "get_zone_info": false, 00:24:00.086 "zone_management": false, 00:24:00.086 "zone_append": false, 00:24:00.086 "compare": false, 00:24:00.086 "compare_and_write": false, 00:24:00.086 "abort": true, 00:24:00.086 "seek_hole": false, 00:24:00.086 "seek_data": false, 00:24:00.086 "copy": true, 00:24:00.086 "nvme_iov_md": false 00:24:00.086 }, 00:24:00.086 "memory_domains": [ 00:24:00.086 { 00:24:00.086 "dma_device_id": "system", 00:24:00.086 "dma_device_type": 1 00:24:00.086 }, 00:24:00.086 { 00:24:00.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.086 "dma_device_type": 2 00:24:00.086 } 00:24:00.086 ], 00:24:00.086 "driver_specific": {} 00:24:00.086 }' 00:24:00.086 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.344 11:35:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.344 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:00.344 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.344 11:35:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:00.344 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:00.345 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.603 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:00.603 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:00.603 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.603 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:00.603 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:00.603 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:00.603 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:00.603 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:00.861 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:00.861 "name": "BaseBdev2", 00:24:00.861 "aliases": [ 00:24:00.861 "4df5ff56-6be9-4672-9705-821926ea1d46" 00:24:00.861 ], 00:24:00.861 "product_name": "Malloc disk", 00:24:00.861 "block_size": 512, 00:24:00.861 "num_blocks": 65536, 00:24:00.861 "uuid": "4df5ff56-6be9-4672-9705-821926ea1d46", 00:24:00.861 "assigned_rate_limits": { 00:24:00.861 "rw_ios_per_sec": 0, 00:24:00.861 "rw_mbytes_per_sec": 0, 00:24:00.861 "r_mbytes_per_sec": 0, 00:24:00.861 "w_mbytes_per_sec": 0 00:24:00.861 }, 00:24:00.861 "claimed": true, 00:24:00.861 "claim_type": "exclusive_write", 00:24:00.861 "zoned": false, 00:24:00.861 "supported_io_types": { 00:24:00.861 "read": true, 00:24:00.861 "write": true, 00:24:00.861 "unmap": true, 00:24:00.861 "flush": true, 00:24:00.861 "reset": true, 00:24:00.861 "nvme_admin": false, 00:24:00.861 "nvme_io": false, 00:24:00.861 "nvme_io_md": false, 00:24:00.861 "write_zeroes": true, 00:24:00.861 "zcopy": true, 00:24:00.861 "get_zone_info": false, 00:24:00.861 "zone_management": false, 00:24:00.861 "zone_append": false, 00:24:00.861 "compare": false, 00:24:00.861 "compare_and_write": false, 00:24:00.861 "abort": true, 00:24:00.861 "seek_hole": false, 00:24:00.861 "seek_data": false, 00:24:00.861 "copy": true, 00:24:00.861 "nvme_iov_md": false 00:24:00.861 }, 00:24:00.861 "memory_domains": [ 00:24:00.861 { 00:24:00.861 "dma_device_id": "system", 00:24:00.861 "dma_device_type": 1 00:24:00.861 }, 00:24:00.861 { 00:24:00.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:00.861 "dma_device_type": 2 00:24:00.861 } 00:24:00.861 ], 00:24:00.861 "driver_specific": {} 00:24:00.861 }' 00:24:00.861 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:00.861 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.120 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:01.120 
11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.120 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.120 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:01.120 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.120 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.378 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:01.378 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.378 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:01.378 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:01.378 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:01.378 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:01.378 11:35:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:01.636 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:01.636 "name": "BaseBdev3", 00:24:01.636 "aliases": [ 00:24:01.636 "8809f6e3-c5f0-4352-bd6b-15e619b8f938" 00:24:01.636 ], 00:24:01.636 "product_name": "Malloc disk", 00:24:01.636 "block_size": 512, 00:24:01.636 "num_blocks": 65536, 00:24:01.636 "uuid": "8809f6e3-c5f0-4352-bd6b-15e619b8f938", 00:24:01.636 "assigned_rate_limits": { 00:24:01.636 "rw_ios_per_sec": 0, 00:24:01.636 "rw_mbytes_per_sec": 0, 00:24:01.636 "r_mbytes_per_sec": 0, 00:24:01.636 "w_mbytes_per_sec": 0 00:24:01.636 }, 00:24:01.636 "claimed": true, 00:24:01.636 "claim_type": "exclusive_write", 00:24:01.636 "zoned": false, 00:24:01.636 "supported_io_types": { 00:24:01.636 "read": true, 00:24:01.636 "write": true, 00:24:01.636 "unmap": true, 00:24:01.636 "flush": true, 00:24:01.636 "reset": true, 00:24:01.636 "nvme_admin": false, 00:24:01.636 "nvme_io": false, 00:24:01.636 "nvme_io_md": false, 00:24:01.636 "write_zeroes": true, 00:24:01.636 "zcopy": true, 00:24:01.636 "get_zone_info": false, 00:24:01.636 "zone_management": false, 00:24:01.636 "zone_append": false, 00:24:01.636 "compare": false, 00:24:01.636 "compare_and_write": false, 00:24:01.636 "abort": true, 00:24:01.636 "seek_hole": false, 00:24:01.636 "seek_data": false, 00:24:01.636 "copy": true, 00:24:01.636 "nvme_iov_md": false 00:24:01.636 }, 00:24:01.636 "memory_domains": [ 00:24:01.636 { 00:24:01.636 "dma_device_id": "system", 00:24:01.636 "dma_device_type": 1 00:24:01.636 }, 00:24:01.636 { 00:24:01.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:01.636 "dma_device_type": 2 00:24:01.636 } 00:24:01.636 ], 00:24:01.636 "driver_specific": {} 00:24:01.636 }' 00:24:01.636 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.636 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:01.636 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:01.636 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.894 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:01.895 
11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:01.895 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.895 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:01.895 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:01.895 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.153 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.153 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:02.153 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:02.153 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:02.153 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:02.411 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:02.411 "name": "BaseBdev4", 00:24:02.411 "aliases": [ 00:24:02.411 "dfa42fa4-abd5-481e-87b3-78566b616b9a" 00:24:02.411 ], 00:24:02.411 "product_name": "Malloc disk", 00:24:02.411 "block_size": 512, 00:24:02.411 "num_blocks": 65536, 00:24:02.411 "uuid": "dfa42fa4-abd5-481e-87b3-78566b616b9a", 00:24:02.411 "assigned_rate_limits": { 00:24:02.411 "rw_ios_per_sec": 0, 00:24:02.411 "rw_mbytes_per_sec": 0, 00:24:02.411 "r_mbytes_per_sec": 0, 00:24:02.411 "w_mbytes_per_sec": 0 00:24:02.411 }, 00:24:02.411 "claimed": true, 00:24:02.411 "claim_type": "exclusive_write", 00:24:02.411 "zoned": false, 00:24:02.411 "supported_io_types": { 00:24:02.411 "read": true, 00:24:02.411 "write": true, 00:24:02.411 "unmap": true, 00:24:02.411 "flush": true, 00:24:02.411 "reset": true, 00:24:02.411 "nvme_admin": false, 00:24:02.411 "nvme_io": false, 00:24:02.411 "nvme_io_md": false, 00:24:02.411 "write_zeroes": true, 00:24:02.411 "zcopy": true, 00:24:02.411 "get_zone_info": false, 00:24:02.411 "zone_management": false, 00:24:02.411 "zone_append": false, 00:24:02.411 "compare": false, 00:24:02.411 "compare_and_write": false, 00:24:02.411 "abort": true, 00:24:02.411 "seek_hole": false, 00:24:02.411 "seek_data": false, 00:24:02.411 "copy": true, 00:24:02.411 "nvme_iov_md": false 00:24:02.411 }, 00:24:02.411 "memory_domains": [ 00:24:02.411 { 00:24:02.411 "dma_device_id": "system", 00:24:02.411 "dma_device_type": 1 00:24:02.411 }, 00:24:02.411 { 00:24:02.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.411 "dma_device_type": 2 00:24:02.411 } 00:24:02.411 ], 00:24:02.411 "driver_specific": {} 00:24:02.411 }' 00:24:02.411 11:35:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.411 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:02.411 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:02.411 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.670 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:02.670 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:02.670 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
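[editor's note] The bdev_raid.sh@205-@208 checks running around this point compare each Malloc base bdev's .block_size, .md_size, .md_interleave and .dif_type against the expected defaults (512-byte blocks, no metadata, no interleave, no DIF). Below is a minimal standalone sketch of the same verification, assuming the RPC socket path, rpc.py location and BaseBdev1-4 names from this run; the loop and the echo-based reporting are illustrative and not part of the test script itself.

    # Hypothetical manual re-check of the per-bdev properties verified by bdev_raid.sh@205-@208.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        # bdev_get_bdevs -b returns a one-element JSON array; jq '.[]' unwraps it, as the test does.
        info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        # Same fields the test compares against the Malloc defaults.
        [[ $(jq .block_size    <<<"$info") == 512  ]] || echo "$name: unexpected block_size"
        [[ $(jq .md_size       <<<"$info") == null ]] || echo "$name: unexpected md_size"
        [[ $(jq .md_interleave <<<"$info") == null ]] || echo "$name: unexpected md_interleave"
        [[ $(jq .dif_type      <<<"$info") == null ]] || echo "$name: unexpected dif_type"
    done

In the actual run each field is fetched with its own jq invocation over the cached base_bdev_info, which is why the jq .block_size / .md_size / .md_interleave / .dif_type lines appear twice per bdev in the trace above.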
00:24:02.670 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:02.670 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:02.670 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.670 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:02.928 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:02.928 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:03.187 [2024-07-13 11:35:37.683333] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:03.187 [2024-07-13 11:35:37.683476] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:03.187 [2024-07-13 11:35:37.683624] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.187 11:35:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.445 11:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.445 "name": "Existed_Raid", 00:24:03.445 "uuid": "5a8da45e-6ad4-496e-a8f7-07e80e06d4be", 00:24:03.445 "strip_size_kb": 64, 00:24:03.445 "state": "offline", 00:24:03.445 "raid_level": "concat", 00:24:03.445 "superblock": false, 00:24:03.445 "num_base_bdevs": 4, 00:24:03.445 "num_base_bdevs_discovered": 3, 00:24:03.445 "num_base_bdevs_operational": 3, 00:24:03.445 "base_bdevs_list": [ 
00:24:03.445 { 00:24:03.445 "name": null, 00:24:03.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.445 "is_configured": false, 00:24:03.445 "data_offset": 0, 00:24:03.445 "data_size": 65536 00:24:03.445 }, 00:24:03.445 { 00:24:03.445 "name": "BaseBdev2", 00:24:03.445 "uuid": "4df5ff56-6be9-4672-9705-821926ea1d46", 00:24:03.445 "is_configured": true, 00:24:03.445 "data_offset": 0, 00:24:03.445 "data_size": 65536 00:24:03.445 }, 00:24:03.445 { 00:24:03.445 "name": "BaseBdev3", 00:24:03.445 "uuid": "8809f6e3-c5f0-4352-bd6b-15e619b8f938", 00:24:03.445 "is_configured": true, 00:24:03.445 "data_offset": 0, 00:24:03.445 "data_size": 65536 00:24:03.445 }, 00:24:03.445 { 00:24:03.445 "name": "BaseBdev4", 00:24:03.445 "uuid": "dfa42fa4-abd5-481e-87b3-78566b616b9a", 00:24:03.445 "is_configured": true, 00:24:03.445 "data_offset": 0, 00:24:03.445 "data_size": 65536 00:24:03.445 } 00:24:03.445 ] 00:24:03.445 }' 00:24:03.445 11:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.445 11:35:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.012 11:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:04.012 11:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:04.012 11:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.012 11:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:04.270 11:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:04.270 11:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:04.270 11:35:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:04.529 [2024-07-13 11:35:39.111836] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:04.529 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:04.529 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:04.529 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.529 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:04.787 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:04.787 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:04.787 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:05.046 [2024-07-13 11:35:39.595556] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:05.046 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:05.046 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:05.046 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:24:05.046 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:05.304 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:05.304 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:05.304 11:35:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:05.304 [2024-07-13 11:35:40.042366] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:05.304 [2024-07-13 11:35:40.042606] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:24:05.562 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:05.562 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:05.562 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.562 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:05.821 BaseBdev2 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:05.821 11:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:06.080 11:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:06.339 [ 00:24:06.339 { 00:24:06.339 "name": "BaseBdev2", 00:24:06.339 "aliases": [ 00:24:06.339 "ff247d89-ccba-4295-9dc9-e30fc2e68bea" 00:24:06.339 ], 00:24:06.339 "product_name": "Malloc disk", 00:24:06.339 "block_size": 512, 00:24:06.339 "num_blocks": 65536, 00:24:06.339 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:06.339 "assigned_rate_limits": { 00:24:06.339 "rw_ios_per_sec": 0, 00:24:06.339 "rw_mbytes_per_sec": 
0, 00:24:06.339 "r_mbytes_per_sec": 0, 00:24:06.339 "w_mbytes_per_sec": 0 00:24:06.339 }, 00:24:06.339 "claimed": false, 00:24:06.339 "zoned": false, 00:24:06.339 "supported_io_types": { 00:24:06.339 "read": true, 00:24:06.339 "write": true, 00:24:06.339 "unmap": true, 00:24:06.339 "flush": true, 00:24:06.339 "reset": true, 00:24:06.339 "nvme_admin": false, 00:24:06.339 "nvme_io": false, 00:24:06.339 "nvme_io_md": false, 00:24:06.339 "write_zeroes": true, 00:24:06.339 "zcopy": true, 00:24:06.339 "get_zone_info": false, 00:24:06.339 "zone_management": false, 00:24:06.339 "zone_append": false, 00:24:06.339 "compare": false, 00:24:06.339 "compare_and_write": false, 00:24:06.339 "abort": true, 00:24:06.339 "seek_hole": false, 00:24:06.339 "seek_data": false, 00:24:06.339 "copy": true, 00:24:06.339 "nvme_iov_md": false 00:24:06.339 }, 00:24:06.339 "memory_domains": [ 00:24:06.339 { 00:24:06.339 "dma_device_id": "system", 00:24:06.339 "dma_device_type": 1 00:24:06.339 }, 00:24:06.339 { 00:24:06.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.339 "dma_device_type": 2 00:24:06.339 } 00:24:06.339 ], 00:24:06.339 "driver_specific": {} 00:24:06.339 } 00:24:06.339 ] 00:24:06.339 11:35:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:06.339 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:06.339 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:06.339 11:35:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:06.598 BaseBdev3 00:24:06.598 11:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:06.598 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:06.598 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:06.598 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:06.598 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:06.598 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:06.598 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:06.598 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:06.857 [ 00:24:06.857 { 00:24:06.857 "name": "BaseBdev3", 00:24:06.857 "aliases": [ 00:24:06.857 "3a84784d-a759-46d4-921b-2541303a23ef" 00:24:06.857 ], 00:24:06.857 "product_name": "Malloc disk", 00:24:06.857 "block_size": 512, 00:24:06.857 "num_blocks": 65536, 00:24:06.857 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:06.857 "assigned_rate_limits": { 00:24:06.857 "rw_ios_per_sec": 0, 00:24:06.857 "rw_mbytes_per_sec": 0, 00:24:06.857 "r_mbytes_per_sec": 0, 00:24:06.857 "w_mbytes_per_sec": 0 00:24:06.857 }, 00:24:06.857 "claimed": false, 00:24:06.857 "zoned": false, 00:24:06.857 "supported_io_types": { 00:24:06.857 "read": true, 00:24:06.857 "write": true, 00:24:06.857 "unmap": true, 00:24:06.857 "flush": true, 00:24:06.857 "reset": true, 00:24:06.857 
"nvme_admin": false, 00:24:06.857 "nvme_io": false, 00:24:06.857 "nvme_io_md": false, 00:24:06.857 "write_zeroes": true, 00:24:06.857 "zcopy": true, 00:24:06.857 "get_zone_info": false, 00:24:06.857 "zone_management": false, 00:24:06.857 "zone_append": false, 00:24:06.857 "compare": false, 00:24:06.857 "compare_and_write": false, 00:24:06.857 "abort": true, 00:24:06.857 "seek_hole": false, 00:24:06.857 "seek_data": false, 00:24:06.857 "copy": true, 00:24:06.857 "nvme_iov_md": false 00:24:06.857 }, 00:24:06.857 "memory_domains": [ 00:24:06.857 { 00:24:06.857 "dma_device_id": "system", 00:24:06.857 "dma_device_type": 1 00:24:06.857 }, 00:24:06.857 { 00:24:06.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.857 "dma_device_type": 2 00:24:06.857 } 00:24:06.857 ], 00:24:06.857 "driver_specific": {} 00:24:06.857 } 00:24:06.857 ] 00:24:06.857 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:06.857 11:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:06.857 11:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:06.857 11:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:07.116 BaseBdev4 00:24:07.116 11:35:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:07.116 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:07.116 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:07.116 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:07.116 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:07.116 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:07.116 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:07.374 11:35:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:07.633 [ 00:24:07.633 { 00:24:07.633 "name": "BaseBdev4", 00:24:07.633 "aliases": [ 00:24:07.633 "853c9c4b-13cd-4a4d-abcf-16eaea6a4749" 00:24:07.633 ], 00:24:07.633 "product_name": "Malloc disk", 00:24:07.633 "block_size": 512, 00:24:07.633 "num_blocks": 65536, 00:24:07.633 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:07.633 "assigned_rate_limits": { 00:24:07.633 "rw_ios_per_sec": 0, 00:24:07.633 "rw_mbytes_per_sec": 0, 00:24:07.633 "r_mbytes_per_sec": 0, 00:24:07.633 "w_mbytes_per_sec": 0 00:24:07.633 }, 00:24:07.633 "claimed": false, 00:24:07.633 "zoned": false, 00:24:07.633 "supported_io_types": { 00:24:07.633 "read": true, 00:24:07.633 "write": true, 00:24:07.633 "unmap": true, 00:24:07.633 "flush": true, 00:24:07.633 "reset": true, 00:24:07.633 "nvme_admin": false, 00:24:07.633 "nvme_io": false, 00:24:07.633 "nvme_io_md": false, 00:24:07.633 "write_zeroes": true, 00:24:07.633 "zcopy": true, 00:24:07.633 "get_zone_info": false, 00:24:07.633 "zone_management": false, 00:24:07.633 "zone_append": false, 00:24:07.633 "compare": false, 00:24:07.633 "compare_and_write": false, 00:24:07.633 
"abort": true, 00:24:07.633 "seek_hole": false, 00:24:07.633 "seek_data": false, 00:24:07.633 "copy": true, 00:24:07.633 "nvme_iov_md": false 00:24:07.633 }, 00:24:07.633 "memory_domains": [ 00:24:07.633 { 00:24:07.633 "dma_device_id": "system", 00:24:07.633 "dma_device_type": 1 00:24:07.633 }, 00:24:07.633 { 00:24:07.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.633 "dma_device_type": 2 00:24:07.633 } 00:24:07.633 ], 00:24:07.633 "driver_specific": {} 00:24:07.633 } 00:24:07.633 ] 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:07.633 [2024-07-13 11:35:42.329288] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:07.633 [2024-07-13 11:35:42.329511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:07.633 [2024-07-13 11:35:42.329630] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:07.633 [2024-07-13 11:35:42.331308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:07.633 [2024-07-13 11:35:42.331495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:07.633 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:07.634 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:07.634 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:07.634 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:07.634 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:07.892 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:07.892 "name": "Existed_Raid", 00:24:07.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.892 "strip_size_kb": 64, 00:24:07.892 "state": "configuring", 00:24:07.892 "raid_level": "concat", 00:24:07.892 "superblock": false, 00:24:07.892 "num_base_bdevs": 4, 
00:24:07.892 "num_base_bdevs_discovered": 3, 00:24:07.892 "num_base_bdevs_operational": 4, 00:24:07.892 "base_bdevs_list": [ 00:24:07.892 { 00:24:07.892 "name": "BaseBdev1", 00:24:07.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.892 "is_configured": false, 00:24:07.892 "data_offset": 0, 00:24:07.892 "data_size": 0 00:24:07.892 }, 00:24:07.892 { 00:24:07.892 "name": "BaseBdev2", 00:24:07.892 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:07.892 "is_configured": true, 00:24:07.892 "data_offset": 0, 00:24:07.892 "data_size": 65536 00:24:07.892 }, 00:24:07.892 { 00:24:07.892 "name": "BaseBdev3", 00:24:07.892 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:07.892 "is_configured": true, 00:24:07.892 "data_offset": 0, 00:24:07.892 "data_size": 65536 00:24:07.892 }, 00:24:07.892 { 00:24:07.892 "name": "BaseBdev4", 00:24:07.892 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:07.892 "is_configured": true, 00:24:07.892 "data_offset": 0, 00:24:07.892 "data_size": 65536 00:24:07.892 } 00:24:07.892 ] 00:24:07.892 }' 00:24:07.892 11:35:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:07.892 11:35:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:08.827 [2024-07-13 11:35:43.509442] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.827 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:09.090 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:09.090 "name": "Existed_Raid", 00:24:09.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.090 "strip_size_kb": 64, 00:24:09.090 "state": "configuring", 00:24:09.090 "raid_level": "concat", 00:24:09.090 "superblock": false, 00:24:09.090 "num_base_bdevs": 4, 00:24:09.090 "num_base_bdevs_discovered": 2, 00:24:09.090 "num_base_bdevs_operational": 4, 00:24:09.091 "base_bdevs_list": [ 00:24:09.091 { 
00:24:09.091 "name": "BaseBdev1", 00:24:09.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:09.091 "is_configured": false, 00:24:09.091 "data_offset": 0, 00:24:09.091 "data_size": 0 00:24:09.091 }, 00:24:09.091 { 00:24:09.091 "name": null, 00:24:09.091 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:09.091 "is_configured": false, 00:24:09.091 "data_offset": 0, 00:24:09.091 "data_size": 65536 00:24:09.091 }, 00:24:09.091 { 00:24:09.091 "name": "BaseBdev3", 00:24:09.091 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:09.091 "is_configured": true, 00:24:09.091 "data_offset": 0, 00:24:09.091 "data_size": 65536 00:24:09.091 }, 00:24:09.091 { 00:24:09.091 "name": "BaseBdev4", 00:24:09.091 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:09.091 "is_configured": true, 00:24:09.091 "data_offset": 0, 00:24:09.091 "data_size": 65536 00:24:09.091 } 00:24:09.091 ] 00:24:09.091 }' 00:24:09.091 11:35:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:09.091 11:35:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.716 11:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.716 11:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:10.023 11:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:10.023 11:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:10.282 [2024-07-13 11:35:44.899057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:10.282 BaseBdev1 00:24:10.282 11:35:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:10.282 11:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:10.282 11:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:10.282 11:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:10.282 11:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:10.282 11:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:10.282 11:35:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:10.540 11:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:10.798 [ 00:24:10.798 { 00:24:10.798 "name": "BaseBdev1", 00:24:10.798 "aliases": [ 00:24:10.798 "948d7c22-f7f3-43a3-961c-c4bb7111213c" 00:24:10.798 ], 00:24:10.798 "product_name": "Malloc disk", 00:24:10.798 "block_size": 512, 00:24:10.798 "num_blocks": 65536, 00:24:10.798 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:10.798 "assigned_rate_limits": { 00:24:10.798 "rw_ios_per_sec": 0, 00:24:10.798 "rw_mbytes_per_sec": 0, 00:24:10.798 "r_mbytes_per_sec": 0, 00:24:10.798 "w_mbytes_per_sec": 0 00:24:10.798 }, 00:24:10.798 "claimed": true, 00:24:10.798 "claim_type": "exclusive_write", 00:24:10.798 
"zoned": false, 00:24:10.798 "supported_io_types": { 00:24:10.798 "read": true, 00:24:10.798 "write": true, 00:24:10.798 "unmap": true, 00:24:10.798 "flush": true, 00:24:10.798 "reset": true, 00:24:10.798 "nvme_admin": false, 00:24:10.798 "nvme_io": false, 00:24:10.798 "nvme_io_md": false, 00:24:10.798 "write_zeroes": true, 00:24:10.798 "zcopy": true, 00:24:10.798 "get_zone_info": false, 00:24:10.798 "zone_management": false, 00:24:10.798 "zone_append": false, 00:24:10.798 "compare": false, 00:24:10.798 "compare_and_write": false, 00:24:10.798 "abort": true, 00:24:10.798 "seek_hole": false, 00:24:10.798 "seek_data": false, 00:24:10.798 "copy": true, 00:24:10.798 "nvme_iov_md": false 00:24:10.798 }, 00:24:10.798 "memory_domains": [ 00:24:10.798 { 00:24:10.798 "dma_device_id": "system", 00:24:10.798 "dma_device_type": 1 00:24:10.798 }, 00:24:10.798 { 00:24:10.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:10.798 "dma_device_type": 2 00:24:10.798 } 00:24:10.798 ], 00:24:10.798 "driver_specific": {} 00:24:10.798 } 00:24:10.798 ] 00:24:10.798 11:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:10.798 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:10.798 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:10.798 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:10.799 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:10.799 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:10.799 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:10.799 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:10.799 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.799 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:10.799 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:10.799 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.799 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:11.058 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:11.058 "name": "Existed_Raid", 00:24:11.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:11.058 "strip_size_kb": 64, 00:24:11.058 "state": "configuring", 00:24:11.058 "raid_level": "concat", 00:24:11.058 "superblock": false, 00:24:11.058 "num_base_bdevs": 4, 00:24:11.058 "num_base_bdevs_discovered": 3, 00:24:11.058 "num_base_bdevs_operational": 4, 00:24:11.058 "base_bdevs_list": [ 00:24:11.058 { 00:24:11.058 "name": "BaseBdev1", 00:24:11.058 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:11.058 "is_configured": true, 00:24:11.058 "data_offset": 0, 00:24:11.058 "data_size": 65536 00:24:11.058 }, 00:24:11.058 { 00:24:11.058 "name": null, 00:24:11.058 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:11.058 "is_configured": false, 00:24:11.058 "data_offset": 0, 00:24:11.058 "data_size": 
65536 00:24:11.058 }, 00:24:11.058 { 00:24:11.058 "name": "BaseBdev3", 00:24:11.058 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:11.058 "is_configured": true, 00:24:11.058 "data_offset": 0, 00:24:11.058 "data_size": 65536 00:24:11.058 }, 00:24:11.058 { 00:24:11.058 "name": "BaseBdev4", 00:24:11.058 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:11.058 "is_configured": true, 00:24:11.058 "data_offset": 0, 00:24:11.058 "data_size": 65536 00:24:11.058 } 00:24:11.058 ] 00:24:11.058 }' 00:24:11.058 11:35:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:11.058 11:35:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:11.625 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.625 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:11.883 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:11.883 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:11.883 [2024-07-13 11:35:46.626314] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.142 "name": "Existed_Raid", 00:24:12.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.142 "strip_size_kb": 64, 00:24:12.142 "state": "configuring", 00:24:12.142 "raid_level": "concat", 00:24:12.142 "superblock": false, 00:24:12.142 "num_base_bdevs": 4, 00:24:12.142 "num_base_bdevs_discovered": 2, 00:24:12.142 "num_base_bdevs_operational": 4, 00:24:12.142 "base_bdevs_list": [ 00:24:12.142 { 00:24:12.142 "name": "BaseBdev1", 00:24:12.142 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:12.142 "is_configured": true, 
00:24:12.142 "data_offset": 0, 00:24:12.142 "data_size": 65536 00:24:12.142 }, 00:24:12.142 { 00:24:12.142 "name": null, 00:24:12.142 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:12.142 "is_configured": false, 00:24:12.142 "data_offset": 0, 00:24:12.142 "data_size": 65536 00:24:12.142 }, 00:24:12.142 { 00:24:12.142 "name": null, 00:24:12.142 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:12.142 "is_configured": false, 00:24:12.142 "data_offset": 0, 00:24:12.142 "data_size": 65536 00:24:12.142 }, 00:24:12.142 { 00:24:12.142 "name": "BaseBdev4", 00:24:12.142 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:12.142 "is_configured": true, 00:24:12.142 "data_offset": 0, 00:24:12.142 "data_size": 65536 00:24:12.142 } 00:24:12.142 ] 00:24:12.142 }' 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.142 11:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:12.710 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.710 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:12.968 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:12.968 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:13.227 [2024-07-13 11:35:47.902592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.227 11:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.485 11:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:13.485 "name": "Existed_Raid", 00:24:13.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.485 "strip_size_kb": 64, 00:24:13.485 "state": "configuring", 00:24:13.485 "raid_level": "concat", 00:24:13.485 "superblock": false, 
00:24:13.485 "num_base_bdevs": 4, 00:24:13.485 "num_base_bdevs_discovered": 3, 00:24:13.485 "num_base_bdevs_operational": 4, 00:24:13.485 "base_bdevs_list": [ 00:24:13.485 { 00:24:13.485 "name": "BaseBdev1", 00:24:13.485 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:13.485 "is_configured": true, 00:24:13.485 "data_offset": 0, 00:24:13.485 "data_size": 65536 00:24:13.485 }, 00:24:13.485 { 00:24:13.485 "name": null, 00:24:13.485 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:13.485 "is_configured": false, 00:24:13.485 "data_offset": 0, 00:24:13.485 "data_size": 65536 00:24:13.485 }, 00:24:13.485 { 00:24:13.485 "name": "BaseBdev3", 00:24:13.485 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:13.485 "is_configured": true, 00:24:13.485 "data_offset": 0, 00:24:13.485 "data_size": 65536 00:24:13.485 }, 00:24:13.485 { 00:24:13.485 "name": "BaseBdev4", 00:24:13.485 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:13.485 "is_configured": true, 00:24:13.485 "data_offset": 0, 00:24:13.485 "data_size": 65536 00:24:13.485 } 00:24:13.485 ] 00:24:13.485 }' 00:24:13.485 11:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:13.485 11:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.421 11:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.421 11:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:14.421 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:14.421 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:14.679 [2024-07-13 11:35:49.274836] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:14.679 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:14.679 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:14.679 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:14.679 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:14.679 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:14.679 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:14.679 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.679 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.680 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.680 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.680 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.680 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.938 11:35:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.938 "name": "Existed_Raid", 00:24:14.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.938 "strip_size_kb": 64, 00:24:14.938 "state": "configuring", 00:24:14.938 "raid_level": "concat", 00:24:14.938 "superblock": false, 00:24:14.938 "num_base_bdevs": 4, 00:24:14.938 "num_base_bdevs_discovered": 2, 00:24:14.938 "num_base_bdevs_operational": 4, 00:24:14.938 "base_bdevs_list": [ 00:24:14.938 { 00:24:14.938 "name": null, 00:24:14.938 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:14.938 "is_configured": false, 00:24:14.938 "data_offset": 0, 00:24:14.938 "data_size": 65536 00:24:14.938 }, 00:24:14.938 { 00:24:14.938 "name": null, 00:24:14.938 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:14.938 "is_configured": false, 00:24:14.938 "data_offset": 0, 00:24:14.938 "data_size": 65536 00:24:14.938 }, 00:24:14.938 { 00:24:14.938 "name": "BaseBdev3", 00:24:14.938 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:14.938 "is_configured": true, 00:24:14.938 "data_offset": 0, 00:24:14.938 "data_size": 65536 00:24:14.938 }, 00:24:14.938 { 00:24:14.938 "name": "BaseBdev4", 00:24:14.938 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:14.938 "is_configured": true, 00:24:14.938 "data_offset": 0, 00:24:14.938 "data_size": 65536 00:24:14.938 } 00:24:14.938 ] 00:24:14.938 }' 00:24:14.938 11:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.938 11:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:15.505 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.505 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:15.764 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:15.764 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:16.022 [2024-07-13 11:35:50.661598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.022 11:35:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.023 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.280 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.280 "name": "Existed_Raid", 00:24:16.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.280 "strip_size_kb": 64, 00:24:16.280 "state": "configuring", 00:24:16.280 "raid_level": "concat", 00:24:16.280 "superblock": false, 00:24:16.280 "num_base_bdevs": 4, 00:24:16.280 "num_base_bdevs_discovered": 3, 00:24:16.280 "num_base_bdevs_operational": 4, 00:24:16.280 "base_bdevs_list": [ 00:24:16.280 { 00:24:16.280 "name": null, 00:24:16.280 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:16.280 "is_configured": false, 00:24:16.280 "data_offset": 0, 00:24:16.280 "data_size": 65536 00:24:16.280 }, 00:24:16.280 { 00:24:16.280 "name": "BaseBdev2", 00:24:16.280 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:16.280 "is_configured": true, 00:24:16.280 "data_offset": 0, 00:24:16.280 "data_size": 65536 00:24:16.280 }, 00:24:16.280 { 00:24:16.280 "name": "BaseBdev3", 00:24:16.280 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:16.280 "is_configured": true, 00:24:16.280 "data_offset": 0, 00:24:16.280 "data_size": 65536 00:24:16.280 }, 00:24:16.280 { 00:24:16.280 "name": "BaseBdev4", 00:24:16.280 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:16.280 "is_configured": true, 00:24:16.280 "data_offset": 0, 00:24:16.280 "data_size": 65536 00:24:16.280 } 00:24:16.280 ] 00:24:16.280 }' 00:24:16.280 11:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.280 11:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.844 11:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.844 11:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:17.101 11:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:17.101 11:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.101 11:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:17.359 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 948d7c22-f7f3-43a3-961c-c4bb7111213c 00:24:17.617 [2024-07-13 11:35:52.309446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:17.617 [2024-07-13 11:35:52.309617] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:17.617 [2024-07-13 11:35:52.309653] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:17.618 [2024-07-13 11:35:52.309867] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:17.618 [2024-07-13 11:35:52.310280] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:17.618 [2024-07-13 11:35:52.310402] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:24:17.618 [2024-07-13 11:35:52.310686] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.618 NewBaseBdev 00:24:17.618 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:17.618 11:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:17.618 11:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:17.618 11:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:17.618 11:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:17.618 11:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:17.618 11:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:17.876 11:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:18.135 [ 00:24:18.135 { 00:24:18.135 "name": "NewBaseBdev", 00:24:18.135 "aliases": [ 00:24:18.135 "948d7c22-f7f3-43a3-961c-c4bb7111213c" 00:24:18.135 ], 00:24:18.135 "product_name": "Malloc disk", 00:24:18.135 "block_size": 512, 00:24:18.135 "num_blocks": 65536, 00:24:18.135 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:18.135 "assigned_rate_limits": { 00:24:18.135 "rw_ios_per_sec": 0, 00:24:18.135 "rw_mbytes_per_sec": 0, 00:24:18.135 "r_mbytes_per_sec": 0, 00:24:18.135 "w_mbytes_per_sec": 0 00:24:18.135 }, 00:24:18.135 "claimed": true, 00:24:18.135 "claim_type": "exclusive_write", 00:24:18.135 "zoned": false, 00:24:18.135 "supported_io_types": { 00:24:18.135 "read": true, 00:24:18.135 "write": true, 00:24:18.135 "unmap": true, 00:24:18.135 "flush": true, 00:24:18.135 "reset": true, 00:24:18.135 "nvme_admin": false, 00:24:18.135 "nvme_io": false, 00:24:18.135 "nvme_io_md": false, 00:24:18.135 "write_zeroes": true, 00:24:18.135 "zcopy": true, 00:24:18.135 "get_zone_info": false, 00:24:18.135 "zone_management": false, 00:24:18.135 "zone_append": false, 00:24:18.135 "compare": false, 00:24:18.135 "compare_and_write": false, 00:24:18.135 "abort": true, 00:24:18.135 "seek_hole": false, 00:24:18.135 "seek_data": false, 00:24:18.135 "copy": true, 00:24:18.135 "nvme_iov_md": false 00:24:18.135 }, 00:24:18.135 "memory_domains": [ 00:24:18.135 { 00:24:18.135 "dma_device_id": "system", 00:24:18.135 "dma_device_type": 1 00:24:18.135 }, 00:24:18.135 { 00:24:18.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.135 "dma_device_type": 2 00:24:18.135 } 00:24:18.135 ], 00:24:18.135 "driver_specific": {} 00:24:18.135 } 00:24:18.135 ] 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.135 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.394 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:18.394 "name": "Existed_Raid", 00:24:18.394 "uuid": "4a4f9766-8fe5-4726-9d53-cf025f081f7d", 00:24:18.394 "strip_size_kb": 64, 00:24:18.394 "state": "online", 00:24:18.394 "raid_level": "concat", 00:24:18.394 "superblock": false, 00:24:18.394 "num_base_bdevs": 4, 00:24:18.394 "num_base_bdevs_discovered": 4, 00:24:18.394 "num_base_bdevs_operational": 4, 00:24:18.394 "base_bdevs_list": [ 00:24:18.394 { 00:24:18.394 "name": "NewBaseBdev", 00:24:18.394 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:18.394 "is_configured": true, 00:24:18.394 "data_offset": 0, 00:24:18.394 "data_size": 65536 00:24:18.394 }, 00:24:18.394 { 00:24:18.394 "name": "BaseBdev2", 00:24:18.394 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:18.394 "is_configured": true, 00:24:18.394 "data_offset": 0, 00:24:18.394 "data_size": 65536 00:24:18.394 }, 00:24:18.394 { 00:24:18.394 "name": "BaseBdev3", 00:24:18.394 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:18.394 "is_configured": true, 00:24:18.394 "data_offset": 0, 00:24:18.394 "data_size": 65536 00:24:18.394 }, 00:24:18.394 { 00:24:18.394 "name": "BaseBdev4", 00:24:18.394 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:18.394 "is_configured": true, 00:24:18.394 "data_offset": 0, 00:24:18.394 "data_size": 65536 00:24:18.394 } 00:24:18.394 ] 00:24:18.394 }' 00:24:18.394 11:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:18.394 11:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:18.960 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:18.960 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:18.960 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:18.960 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:18.960 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:18.960 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:18.960 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:18.961 11:35:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:19.235 [2024-07-13 11:35:53.842097] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:19.235 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:19.235 "name": "Existed_Raid", 00:24:19.235 "aliases": [ 00:24:19.235 "4a4f9766-8fe5-4726-9d53-cf025f081f7d" 00:24:19.235 ], 00:24:19.235 "product_name": "Raid Volume", 00:24:19.235 "block_size": 512, 00:24:19.235 "num_blocks": 262144, 00:24:19.235 "uuid": "4a4f9766-8fe5-4726-9d53-cf025f081f7d", 00:24:19.235 "assigned_rate_limits": { 00:24:19.235 "rw_ios_per_sec": 0, 00:24:19.235 "rw_mbytes_per_sec": 0, 00:24:19.235 "r_mbytes_per_sec": 0, 00:24:19.235 "w_mbytes_per_sec": 0 00:24:19.235 }, 00:24:19.235 "claimed": false, 00:24:19.235 "zoned": false, 00:24:19.235 "supported_io_types": { 00:24:19.235 "read": true, 00:24:19.235 "write": true, 00:24:19.235 "unmap": true, 00:24:19.235 "flush": true, 00:24:19.235 "reset": true, 00:24:19.235 "nvme_admin": false, 00:24:19.235 "nvme_io": false, 00:24:19.235 "nvme_io_md": false, 00:24:19.235 "write_zeroes": true, 00:24:19.235 "zcopy": false, 00:24:19.235 "get_zone_info": false, 00:24:19.235 "zone_management": false, 00:24:19.235 "zone_append": false, 00:24:19.235 "compare": false, 00:24:19.235 "compare_and_write": false, 00:24:19.235 "abort": false, 00:24:19.235 "seek_hole": false, 00:24:19.235 "seek_data": false, 00:24:19.235 "copy": false, 00:24:19.235 "nvme_iov_md": false 00:24:19.235 }, 00:24:19.235 "memory_domains": [ 00:24:19.235 { 00:24:19.235 "dma_device_id": "system", 00:24:19.235 "dma_device_type": 1 00:24:19.235 }, 00:24:19.235 { 00:24:19.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.235 "dma_device_type": 2 00:24:19.235 }, 00:24:19.235 { 00:24:19.235 "dma_device_id": "system", 00:24:19.235 "dma_device_type": 1 00:24:19.235 }, 00:24:19.235 { 00:24:19.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.235 "dma_device_type": 2 00:24:19.235 }, 00:24:19.235 { 00:24:19.235 "dma_device_id": "system", 00:24:19.235 "dma_device_type": 1 00:24:19.235 }, 00:24:19.235 { 00:24:19.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.235 "dma_device_type": 2 00:24:19.235 }, 00:24:19.235 { 00:24:19.235 "dma_device_id": "system", 00:24:19.235 "dma_device_type": 1 00:24:19.235 }, 00:24:19.235 { 00:24:19.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.235 "dma_device_type": 2 00:24:19.235 } 00:24:19.235 ], 00:24:19.235 "driver_specific": { 00:24:19.235 "raid": { 00:24:19.235 "uuid": "4a4f9766-8fe5-4726-9d53-cf025f081f7d", 00:24:19.235 "strip_size_kb": 64, 00:24:19.235 "state": "online", 00:24:19.235 "raid_level": "concat", 00:24:19.235 "superblock": false, 00:24:19.235 "num_base_bdevs": 4, 00:24:19.235 "num_base_bdevs_discovered": 4, 00:24:19.235 "num_base_bdevs_operational": 4, 00:24:19.235 "base_bdevs_list": [ 00:24:19.235 { 00:24:19.235 "name": "NewBaseBdev", 00:24:19.235 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:19.235 "is_configured": true, 00:24:19.235 "data_offset": 0, 00:24:19.235 "data_size": 65536 00:24:19.235 }, 00:24:19.235 { 00:24:19.235 "name": "BaseBdev2", 00:24:19.235 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:19.235 "is_configured": true, 00:24:19.235 "data_offset": 0, 00:24:19.235 "data_size": 65536 00:24:19.235 }, 00:24:19.235 { 00:24:19.235 "name": "BaseBdev3", 00:24:19.235 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:19.235 "is_configured": true, 00:24:19.235 "data_offset": 0, 00:24:19.235 "data_size": 65536 00:24:19.235 
}, 00:24:19.235 { 00:24:19.235 "name": "BaseBdev4", 00:24:19.235 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:19.235 "is_configured": true, 00:24:19.235 "data_offset": 0, 00:24:19.235 "data_size": 65536 00:24:19.235 } 00:24:19.235 ] 00:24:19.235 } 00:24:19.235 } 00:24:19.235 }' 00:24:19.235 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:19.235 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:19.235 BaseBdev2 00:24:19.235 BaseBdev3 00:24:19.235 BaseBdev4' 00:24:19.235 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:19.235 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:19.235 11:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:19.496 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:19.496 "name": "NewBaseBdev", 00:24:19.496 "aliases": [ 00:24:19.496 "948d7c22-f7f3-43a3-961c-c4bb7111213c" 00:24:19.496 ], 00:24:19.496 "product_name": "Malloc disk", 00:24:19.496 "block_size": 512, 00:24:19.496 "num_blocks": 65536, 00:24:19.496 "uuid": "948d7c22-f7f3-43a3-961c-c4bb7111213c", 00:24:19.496 "assigned_rate_limits": { 00:24:19.496 "rw_ios_per_sec": 0, 00:24:19.496 "rw_mbytes_per_sec": 0, 00:24:19.496 "r_mbytes_per_sec": 0, 00:24:19.496 "w_mbytes_per_sec": 0 00:24:19.496 }, 00:24:19.496 "claimed": true, 00:24:19.496 "claim_type": "exclusive_write", 00:24:19.496 "zoned": false, 00:24:19.496 "supported_io_types": { 00:24:19.496 "read": true, 00:24:19.496 "write": true, 00:24:19.496 "unmap": true, 00:24:19.496 "flush": true, 00:24:19.496 "reset": true, 00:24:19.496 "nvme_admin": false, 00:24:19.496 "nvme_io": false, 00:24:19.496 "nvme_io_md": false, 00:24:19.496 "write_zeroes": true, 00:24:19.496 "zcopy": true, 00:24:19.496 "get_zone_info": false, 00:24:19.496 "zone_management": false, 00:24:19.496 "zone_append": false, 00:24:19.496 "compare": false, 00:24:19.496 "compare_and_write": false, 00:24:19.496 "abort": true, 00:24:19.496 "seek_hole": false, 00:24:19.496 "seek_data": false, 00:24:19.496 "copy": true, 00:24:19.496 "nvme_iov_md": false 00:24:19.496 }, 00:24:19.496 "memory_domains": [ 00:24:19.496 { 00:24:19.496 "dma_device_id": "system", 00:24:19.496 "dma_device_type": 1 00:24:19.496 }, 00:24:19.496 { 00:24:19.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.496 "dma_device_type": 2 00:24:19.496 } 00:24:19.496 ], 00:24:19.496 "driver_specific": {} 00:24:19.496 }' 00:24:19.496 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:19.496 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:19.755 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:19.755 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:19.755 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:19.755 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:19.755 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:19.755 11:35:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:19.755 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:19.755 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:19.755 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.013 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:20.013 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:20.013 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:20.013 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:20.013 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:20.013 "name": "BaseBdev2", 00:24:20.013 "aliases": [ 00:24:20.013 "ff247d89-ccba-4295-9dc9-e30fc2e68bea" 00:24:20.013 ], 00:24:20.013 "product_name": "Malloc disk", 00:24:20.013 "block_size": 512, 00:24:20.013 "num_blocks": 65536, 00:24:20.013 "uuid": "ff247d89-ccba-4295-9dc9-e30fc2e68bea", 00:24:20.013 "assigned_rate_limits": { 00:24:20.013 "rw_ios_per_sec": 0, 00:24:20.013 "rw_mbytes_per_sec": 0, 00:24:20.013 "r_mbytes_per_sec": 0, 00:24:20.013 "w_mbytes_per_sec": 0 00:24:20.013 }, 00:24:20.013 "claimed": true, 00:24:20.013 "claim_type": "exclusive_write", 00:24:20.013 "zoned": false, 00:24:20.013 "supported_io_types": { 00:24:20.013 "read": true, 00:24:20.013 "write": true, 00:24:20.013 "unmap": true, 00:24:20.013 "flush": true, 00:24:20.013 "reset": true, 00:24:20.013 "nvme_admin": false, 00:24:20.013 "nvme_io": false, 00:24:20.013 "nvme_io_md": false, 00:24:20.013 "write_zeroes": true, 00:24:20.013 "zcopy": true, 00:24:20.013 "get_zone_info": false, 00:24:20.013 "zone_management": false, 00:24:20.013 "zone_append": false, 00:24:20.013 "compare": false, 00:24:20.013 "compare_and_write": false, 00:24:20.013 "abort": true, 00:24:20.013 "seek_hole": false, 00:24:20.013 "seek_data": false, 00:24:20.013 "copy": true, 00:24:20.013 "nvme_iov_md": false 00:24:20.013 }, 00:24:20.013 "memory_domains": [ 00:24:20.013 { 00:24:20.013 "dma_device_id": "system", 00:24:20.013 "dma_device_type": 1 00:24:20.013 }, 00:24:20.013 { 00:24:20.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.013 "dma_device_type": 2 00:24:20.013 } 00:24:20.013 ], 00:24:20.013 "driver_specific": {} 00:24:20.013 }' 00:24:20.013 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.272 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.272 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:20.272 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.272 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.272 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:20.272 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.272 11:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.272 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:20.272 11:35:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.530 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.530 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:20.530 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:20.530 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:20.530 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:20.787 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:20.787 "name": "BaseBdev3", 00:24:20.787 "aliases": [ 00:24:20.787 "3a84784d-a759-46d4-921b-2541303a23ef" 00:24:20.787 ], 00:24:20.787 "product_name": "Malloc disk", 00:24:20.787 "block_size": 512, 00:24:20.787 "num_blocks": 65536, 00:24:20.787 "uuid": "3a84784d-a759-46d4-921b-2541303a23ef", 00:24:20.787 "assigned_rate_limits": { 00:24:20.787 "rw_ios_per_sec": 0, 00:24:20.787 "rw_mbytes_per_sec": 0, 00:24:20.787 "r_mbytes_per_sec": 0, 00:24:20.787 "w_mbytes_per_sec": 0 00:24:20.787 }, 00:24:20.787 "claimed": true, 00:24:20.787 "claim_type": "exclusive_write", 00:24:20.787 "zoned": false, 00:24:20.787 "supported_io_types": { 00:24:20.788 "read": true, 00:24:20.788 "write": true, 00:24:20.788 "unmap": true, 00:24:20.788 "flush": true, 00:24:20.788 "reset": true, 00:24:20.788 "nvme_admin": false, 00:24:20.788 "nvme_io": false, 00:24:20.788 "nvme_io_md": false, 00:24:20.788 "write_zeroes": true, 00:24:20.788 "zcopy": true, 00:24:20.788 "get_zone_info": false, 00:24:20.788 "zone_management": false, 00:24:20.788 "zone_append": false, 00:24:20.788 "compare": false, 00:24:20.788 "compare_and_write": false, 00:24:20.788 "abort": true, 00:24:20.788 "seek_hole": false, 00:24:20.788 "seek_data": false, 00:24:20.788 "copy": true, 00:24:20.788 "nvme_iov_md": false 00:24:20.788 }, 00:24:20.788 "memory_domains": [ 00:24:20.788 { 00:24:20.788 "dma_device_id": "system", 00:24:20.788 "dma_device_type": 1 00:24:20.788 }, 00:24:20.788 { 00:24:20.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.788 "dma_device_type": 2 00:24:20.788 } 00:24:20.788 ], 00:24:20.788 "driver_specific": {} 00:24:20.788 }' 00:24:20.788 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.788 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.788 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:20.788 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.788 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.788 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:20.788 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.046 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.046 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:21.046 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.046 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.046 11:35:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:21.046 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:21.046 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:21.046 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:21.304 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:21.304 "name": "BaseBdev4", 00:24:21.304 "aliases": [ 00:24:21.304 "853c9c4b-13cd-4a4d-abcf-16eaea6a4749" 00:24:21.304 ], 00:24:21.304 "product_name": "Malloc disk", 00:24:21.304 "block_size": 512, 00:24:21.304 "num_blocks": 65536, 00:24:21.304 "uuid": "853c9c4b-13cd-4a4d-abcf-16eaea6a4749", 00:24:21.304 "assigned_rate_limits": { 00:24:21.304 "rw_ios_per_sec": 0, 00:24:21.304 "rw_mbytes_per_sec": 0, 00:24:21.304 "r_mbytes_per_sec": 0, 00:24:21.304 "w_mbytes_per_sec": 0 00:24:21.304 }, 00:24:21.304 "claimed": true, 00:24:21.304 "claim_type": "exclusive_write", 00:24:21.304 "zoned": false, 00:24:21.304 "supported_io_types": { 00:24:21.304 "read": true, 00:24:21.304 "write": true, 00:24:21.304 "unmap": true, 00:24:21.304 "flush": true, 00:24:21.304 "reset": true, 00:24:21.304 "nvme_admin": false, 00:24:21.304 "nvme_io": false, 00:24:21.304 "nvme_io_md": false, 00:24:21.304 "write_zeroes": true, 00:24:21.304 "zcopy": true, 00:24:21.304 "get_zone_info": false, 00:24:21.304 "zone_management": false, 00:24:21.304 "zone_append": false, 00:24:21.304 "compare": false, 00:24:21.304 "compare_and_write": false, 00:24:21.304 "abort": true, 00:24:21.304 "seek_hole": false, 00:24:21.304 "seek_data": false, 00:24:21.304 "copy": true, 00:24:21.304 "nvme_iov_md": false 00:24:21.304 }, 00:24:21.304 "memory_domains": [ 00:24:21.304 { 00:24:21.304 "dma_device_id": "system", 00:24:21.304 "dma_device_type": 1 00:24:21.304 }, 00:24:21.304 { 00:24:21.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.304 "dma_device_type": 2 00:24:21.304 } 00:24:21.304 ], 00:24:21.304 "driver_specific": {} 00:24:21.304 }' 00:24:21.304 11:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.304 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:21.561 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:21.561 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.561 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:21.561 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:21.561 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.561 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:21.865 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:21.865 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.865 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:21.865 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:21.865 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:22.122 [2024-07-13 11:35:56.622498] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:22.122 [2024-07-13 11:35:56.622648] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:22.122 [2024-07-13 11:35:56.622826] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:22.122 [2024-07-13 11:35:56.623030] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:22.122 [2024-07-13 11:35:56.623129] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 137791 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 137791 ']' 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 137791 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 137791 00:24:22.122 killing process with pid 137791 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 137791' 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 137791 00:24:22.122 11:35:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 137791 00:24:22.122 [2024-07-13 11:35:56.655124] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:22.380 [2024-07-13 11:35:56.906011] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:23.313 ************************************ 00:24:23.313 END TEST raid_state_function_test 00:24:23.313 ************************************ 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:23.313 00:24:23.313 real 0m33.463s 00:24:23.313 user 1m3.022s 00:24:23.313 sys 0m3.675s 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:23.313 11:35:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:23.313 11:35:57 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:24:23.313 11:35:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:23.313 11:35:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:23.313 11:35:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:23.313 ************************************ 00:24:23.313 START TEST raid_state_function_test_sb 00:24:23.313 ************************************ 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=138928 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 
138928' 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:23.313 Process raid pid: 138928 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 138928 /var/tmp/spdk-raid.sock 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 138928 ']' 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:23.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.313 11:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:23.313 [2024-07-13 11:35:57.973898] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:23.313 [2024-07-13 11:35:57.974323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.571 [2024-07-13 11:35:58.150702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.829 [2024-07-13 11:35:58.377988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.829 [2024-07-13 11:35:58.564933] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:24.396 11:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.396 11:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:24:24.396 11:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:24.396 [2024-07-13 11:35:59.106678] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:24.396 [2024-07-13 11:35:59.106914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:24.396 [2024-07-13 11:35:59.107049] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:24.396 [2024-07-13 11:35:59.107108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:24.396 [2024-07-13 11:35:59.107399] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:24.396 [2024-07-13 11:35:59.107452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:24.396 [2024-07-13 11:35:59.107666] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:24.396 [2024-07-13 11:35:59.107724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:24.396 11:35:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:24.396 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:24.654 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:24.654 "name": "Existed_Raid", 00:24:24.654 "uuid": "fe9c24cd-2883-4339-a091-8de97061b500", 00:24:24.654 "strip_size_kb": 64, 00:24:24.654 "state": "configuring", 00:24:24.654 "raid_level": "concat", 00:24:24.654 "superblock": true, 00:24:24.654 "num_base_bdevs": 4, 00:24:24.654 "num_base_bdevs_discovered": 0, 00:24:24.654 "num_base_bdevs_operational": 4, 00:24:24.654 "base_bdevs_list": [ 00:24:24.654 { 00:24:24.654 "name": "BaseBdev1", 00:24:24.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.654 "is_configured": false, 00:24:24.654 "data_offset": 0, 00:24:24.654 "data_size": 0 00:24:24.654 }, 00:24:24.654 { 00:24:24.654 "name": "BaseBdev2", 00:24:24.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.655 "is_configured": false, 00:24:24.655 "data_offset": 0, 00:24:24.655 "data_size": 0 00:24:24.655 }, 00:24:24.655 { 00:24:24.655 "name": "BaseBdev3", 00:24:24.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.655 "is_configured": false, 00:24:24.655 "data_offset": 0, 00:24:24.655 "data_size": 0 00:24:24.655 }, 00:24:24.655 { 00:24:24.655 "name": "BaseBdev4", 00:24:24.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:24.655 "is_configured": false, 00:24:24.655 "data_offset": 0, 00:24:24.655 "data_size": 0 00:24:24.655 } 00:24:24.655 ] 00:24:24.655 }' 00:24:24.655 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:24.655 11:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:25.220 11:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:25.479 [2024-07-13 11:36:00.138688] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:25.479 [2024-07-13 11:36:00.138836] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 
name Existed_Raid, state configuring 00:24:25.479 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:25.737 [2024-07-13 11:36:00.334751] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:25.737 [2024-07-13 11:36:00.334936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:25.737 [2024-07-13 11:36:00.335055] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:25.737 [2024-07-13 11:36:00.335135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:25.737 [2024-07-13 11:36:00.335262] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:25.737 [2024-07-13 11:36:00.335327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:25.737 [2024-07-13 11:36:00.335355] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:25.737 [2024-07-13 11:36:00.335497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:25.737 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:25.996 [2024-07-13 11:36:00.555779] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:25.996 BaseBdev1 00:24:25.996 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:25.996 11:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:25.996 11:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:25.996 11:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:25.996 11:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:25.996 11:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:25.996 11:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:26.255 [ 00:24:26.255 { 00:24:26.255 "name": "BaseBdev1", 00:24:26.255 "aliases": [ 00:24:26.255 "057de2e4-7f82-4df9-a7fc-0d296723150a" 00:24:26.255 ], 00:24:26.255 "product_name": "Malloc disk", 00:24:26.255 "block_size": 512, 00:24:26.255 "num_blocks": 65536, 00:24:26.255 "uuid": "057de2e4-7f82-4df9-a7fc-0d296723150a", 00:24:26.255 "assigned_rate_limits": { 00:24:26.255 "rw_ios_per_sec": 0, 00:24:26.255 "rw_mbytes_per_sec": 0, 00:24:26.255 "r_mbytes_per_sec": 0, 00:24:26.255 "w_mbytes_per_sec": 0 00:24:26.255 }, 00:24:26.255 "claimed": true, 00:24:26.255 "claim_type": "exclusive_write", 00:24:26.255 "zoned": false, 00:24:26.255 "supported_io_types": { 00:24:26.255 "read": true, 00:24:26.255 "write": true, 00:24:26.255 "unmap": true, 00:24:26.255 "flush": true, 
00:24:26.255 "reset": true, 00:24:26.255 "nvme_admin": false, 00:24:26.255 "nvme_io": false, 00:24:26.255 "nvme_io_md": false, 00:24:26.255 "write_zeroes": true, 00:24:26.255 "zcopy": true, 00:24:26.255 "get_zone_info": false, 00:24:26.255 "zone_management": false, 00:24:26.255 "zone_append": false, 00:24:26.255 "compare": false, 00:24:26.255 "compare_and_write": false, 00:24:26.255 "abort": true, 00:24:26.255 "seek_hole": false, 00:24:26.255 "seek_data": false, 00:24:26.255 "copy": true, 00:24:26.255 "nvme_iov_md": false 00:24:26.255 }, 00:24:26.255 "memory_domains": [ 00:24:26.255 { 00:24:26.255 "dma_device_id": "system", 00:24:26.255 "dma_device_type": 1 00:24:26.255 }, 00:24:26.255 { 00:24:26.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.255 "dma_device_type": 2 00:24:26.255 } 00:24:26.255 ], 00:24:26.255 "driver_specific": {} 00:24:26.255 } 00:24:26.255 ] 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.255 11:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.514 11:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.514 "name": "Existed_Raid", 00:24:26.514 "uuid": "9f329240-334c-41ce-a53b-aaf197b95a33", 00:24:26.514 "strip_size_kb": 64, 00:24:26.514 "state": "configuring", 00:24:26.514 "raid_level": "concat", 00:24:26.514 "superblock": true, 00:24:26.514 "num_base_bdevs": 4, 00:24:26.514 "num_base_bdevs_discovered": 1, 00:24:26.514 "num_base_bdevs_operational": 4, 00:24:26.514 "base_bdevs_list": [ 00:24:26.514 { 00:24:26.514 "name": "BaseBdev1", 00:24:26.514 "uuid": "057de2e4-7f82-4df9-a7fc-0d296723150a", 00:24:26.514 "is_configured": true, 00:24:26.514 "data_offset": 2048, 00:24:26.514 "data_size": 63488 00:24:26.514 }, 00:24:26.514 { 00:24:26.514 "name": "BaseBdev2", 00:24:26.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.514 "is_configured": false, 00:24:26.514 "data_offset": 0, 00:24:26.514 "data_size": 0 00:24:26.514 }, 00:24:26.514 { 00:24:26.514 "name": "BaseBdev3", 00:24:26.514 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:24:26.514 "is_configured": false, 00:24:26.514 "data_offset": 0, 00:24:26.514 "data_size": 0 00:24:26.514 }, 00:24:26.514 { 00:24:26.514 "name": "BaseBdev4", 00:24:26.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.514 "is_configured": false, 00:24:26.514 "data_offset": 0, 00:24:26.514 "data_size": 0 00:24:26.514 } 00:24:26.514 ] 00:24:26.514 }' 00:24:26.514 11:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.514 11:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:27.449 11:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:27.449 [2024-07-13 11:36:02.100079] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:27.449 [2024-07-13 11:36:02.100261] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:24:27.449 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:27.707 [2024-07-13 11:36:02.292160] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.707 [2024-07-13 11:36:02.293843] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:27.707 [2024-07-13 11:36:02.294003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:27.707 [2024-07-13 11:36:02.294095] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:27.708 [2024-07-13 11:36:02.294152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:27.708 [2024-07-13 11:36:02.294246] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:27.708 [2024-07-13 11:36:02.294305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.708 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.966 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.966 "name": "Existed_Raid", 00:24:27.966 "uuid": "bcd63ee0-bd4e-422f-b6e6-25f582ca9610", 00:24:27.966 "strip_size_kb": 64, 00:24:27.966 "state": "configuring", 00:24:27.966 "raid_level": "concat", 00:24:27.966 "superblock": true, 00:24:27.966 "num_base_bdevs": 4, 00:24:27.966 "num_base_bdevs_discovered": 1, 00:24:27.966 "num_base_bdevs_operational": 4, 00:24:27.966 "base_bdevs_list": [ 00:24:27.966 { 00:24:27.966 "name": "BaseBdev1", 00:24:27.966 "uuid": "057de2e4-7f82-4df9-a7fc-0d296723150a", 00:24:27.966 "is_configured": true, 00:24:27.966 "data_offset": 2048, 00:24:27.966 "data_size": 63488 00:24:27.966 }, 00:24:27.966 { 00:24:27.966 "name": "BaseBdev2", 00:24:27.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.966 "is_configured": false, 00:24:27.966 "data_offset": 0, 00:24:27.966 "data_size": 0 00:24:27.966 }, 00:24:27.966 { 00:24:27.966 "name": "BaseBdev3", 00:24:27.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.966 "is_configured": false, 00:24:27.966 "data_offset": 0, 00:24:27.966 "data_size": 0 00:24:27.966 }, 00:24:27.966 { 00:24:27.966 "name": "BaseBdev4", 00:24:27.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.966 "is_configured": false, 00:24:27.966 "data_offset": 0, 00:24:27.966 "data_size": 0 00:24:27.966 } 00:24:27.966 ] 00:24:27.966 }' 00:24:27.966 11:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.966 11:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.532 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:28.788 [2024-07-13 11:36:03.485913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:28.788 BaseBdev2 00:24:28.788 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:28.788 11:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:28.788 11:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:28.788 11:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:28.788 11:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:28.788 11:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:28.788 11:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:29.045 11:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:29.303 [ 00:24:29.303 { 00:24:29.303 "name": "BaseBdev2", 
00:24:29.303 "aliases": [ 00:24:29.303 "804025e9-2cf8-4f83-83eb-93b74d10b3cc" 00:24:29.303 ], 00:24:29.303 "product_name": "Malloc disk", 00:24:29.303 "block_size": 512, 00:24:29.303 "num_blocks": 65536, 00:24:29.303 "uuid": "804025e9-2cf8-4f83-83eb-93b74d10b3cc", 00:24:29.303 "assigned_rate_limits": { 00:24:29.303 "rw_ios_per_sec": 0, 00:24:29.303 "rw_mbytes_per_sec": 0, 00:24:29.303 "r_mbytes_per_sec": 0, 00:24:29.303 "w_mbytes_per_sec": 0 00:24:29.303 }, 00:24:29.303 "claimed": true, 00:24:29.303 "claim_type": "exclusive_write", 00:24:29.303 "zoned": false, 00:24:29.303 "supported_io_types": { 00:24:29.303 "read": true, 00:24:29.303 "write": true, 00:24:29.303 "unmap": true, 00:24:29.303 "flush": true, 00:24:29.303 "reset": true, 00:24:29.303 "nvme_admin": false, 00:24:29.303 "nvme_io": false, 00:24:29.303 "nvme_io_md": false, 00:24:29.303 "write_zeroes": true, 00:24:29.303 "zcopy": true, 00:24:29.303 "get_zone_info": false, 00:24:29.303 "zone_management": false, 00:24:29.303 "zone_append": false, 00:24:29.303 "compare": false, 00:24:29.303 "compare_and_write": false, 00:24:29.303 "abort": true, 00:24:29.303 "seek_hole": false, 00:24:29.303 "seek_data": false, 00:24:29.303 "copy": true, 00:24:29.303 "nvme_iov_md": false 00:24:29.303 }, 00:24:29.303 "memory_domains": [ 00:24:29.303 { 00:24:29.303 "dma_device_id": "system", 00:24:29.303 "dma_device_type": 1 00:24:29.303 }, 00:24:29.303 { 00:24:29.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.303 "dma_device_type": 2 00:24:29.303 } 00:24:29.303 ], 00:24:29.303 "driver_specific": {} 00:24:29.303 } 00:24:29.303 ] 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.303 11:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.562 11:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:29.562 
"name": "Existed_Raid", 00:24:29.562 "uuid": "bcd63ee0-bd4e-422f-b6e6-25f582ca9610", 00:24:29.562 "strip_size_kb": 64, 00:24:29.562 "state": "configuring", 00:24:29.562 "raid_level": "concat", 00:24:29.562 "superblock": true, 00:24:29.562 "num_base_bdevs": 4, 00:24:29.562 "num_base_bdevs_discovered": 2, 00:24:29.562 "num_base_bdevs_operational": 4, 00:24:29.562 "base_bdevs_list": [ 00:24:29.562 { 00:24:29.562 "name": "BaseBdev1", 00:24:29.562 "uuid": "057de2e4-7f82-4df9-a7fc-0d296723150a", 00:24:29.562 "is_configured": true, 00:24:29.562 "data_offset": 2048, 00:24:29.562 "data_size": 63488 00:24:29.562 }, 00:24:29.562 { 00:24:29.562 "name": "BaseBdev2", 00:24:29.562 "uuid": "804025e9-2cf8-4f83-83eb-93b74d10b3cc", 00:24:29.562 "is_configured": true, 00:24:29.562 "data_offset": 2048, 00:24:29.562 "data_size": 63488 00:24:29.562 }, 00:24:29.562 { 00:24:29.562 "name": "BaseBdev3", 00:24:29.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.562 "is_configured": false, 00:24:29.562 "data_offset": 0, 00:24:29.562 "data_size": 0 00:24:29.562 }, 00:24:29.562 { 00:24:29.562 "name": "BaseBdev4", 00:24:29.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:29.562 "is_configured": false, 00:24:29.562 "data_offset": 0, 00:24:29.562 "data_size": 0 00:24:29.562 } 00:24:29.562 ] 00:24:29.562 }' 00:24:29.562 11:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:29.562 11:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.498 11:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:30.498 [2024-07-13 11:36:05.145745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:30.498 BaseBdev3 00:24:30.498 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:30.498 11:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:30.498 11:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:30.498 11:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:30.498 11:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:30.498 11:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:30.498 11:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:30.757 11:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:31.015 [ 00:24:31.015 { 00:24:31.015 "name": "BaseBdev3", 00:24:31.015 "aliases": [ 00:24:31.015 "870d7961-e405-49a3-b6d3-6f69a10148a6" 00:24:31.015 ], 00:24:31.015 "product_name": "Malloc disk", 00:24:31.015 "block_size": 512, 00:24:31.015 "num_blocks": 65536, 00:24:31.015 "uuid": "870d7961-e405-49a3-b6d3-6f69a10148a6", 00:24:31.015 "assigned_rate_limits": { 00:24:31.015 "rw_ios_per_sec": 0, 00:24:31.015 "rw_mbytes_per_sec": 0, 00:24:31.015 "r_mbytes_per_sec": 0, 00:24:31.015 "w_mbytes_per_sec": 0 00:24:31.015 }, 00:24:31.015 "claimed": true, 00:24:31.015 "claim_type": 
"exclusive_write", 00:24:31.015 "zoned": false, 00:24:31.015 "supported_io_types": { 00:24:31.015 "read": true, 00:24:31.015 "write": true, 00:24:31.015 "unmap": true, 00:24:31.015 "flush": true, 00:24:31.015 "reset": true, 00:24:31.015 "nvme_admin": false, 00:24:31.015 "nvme_io": false, 00:24:31.015 "nvme_io_md": false, 00:24:31.015 "write_zeroes": true, 00:24:31.015 "zcopy": true, 00:24:31.015 "get_zone_info": false, 00:24:31.015 "zone_management": false, 00:24:31.015 "zone_append": false, 00:24:31.015 "compare": false, 00:24:31.015 "compare_and_write": false, 00:24:31.015 "abort": true, 00:24:31.015 "seek_hole": false, 00:24:31.015 "seek_data": false, 00:24:31.015 "copy": true, 00:24:31.015 "nvme_iov_md": false 00:24:31.015 }, 00:24:31.015 "memory_domains": [ 00:24:31.015 { 00:24:31.015 "dma_device_id": "system", 00:24:31.015 "dma_device_type": 1 00:24:31.015 }, 00:24:31.015 { 00:24:31.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.015 "dma_device_type": 2 00:24:31.015 } 00:24:31.015 ], 00:24:31.015 "driver_specific": {} 00:24:31.015 } 00:24:31.015 ] 00:24:31.015 11:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:31.015 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:31.015 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.016 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.274 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.274 "name": "Existed_Raid", 00:24:31.274 "uuid": "bcd63ee0-bd4e-422f-b6e6-25f582ca9610", 00:24:31.274 "strip_size_kb": 64, 00:24:31.274 "state": "configuring", 00:24:31.274 "raid_level": "concat", 00:24:31.274 "superblock": true, 00:24:31.274 "num_base_bdevs": 4, 00:24:31.274 "num_base_bdevs_discovered": 3, 00:24:31.274 "num_base_bdevs_operational": 4, 00:24:31.274 "base_bdevs_list": [ 00:24:31.274 { 00:24:31.274 "name": "BaseBdev1", 00:24:31.274 "uuid": "057de2e4-7f82-4df9-a7fc-0d296723150a", 00:24:31.274 
"is_configured": true, 00:24:31.274 "data_offset": 2048, 00:24:31.274 "data_size": 63488 00:24:31.274 }, 00:24:31.274 { 00:24:31.274 "name": "BaseBdev2", 00:24:31.274 "uuid": "804025e9-2cf8-4f83-83eb-93b74d10b3cc", 00:24:31.274 "is_configured": true, 00:24:31.274 "data_offset": 2048, 00:24:31.274 "data_size": 63488 00:24:31.274 }, 00:24:31.274 { 00:24:31.274 "name": "BaseBdev3", 00:24:31.274 "uuid": "870d7961-e405-49a3-b6d3-6f69a10148a6", 00:24:31.274 "is_configured": true, 00:24:31.274 "data_offset": 2048, 00:24:31.274 "data_size": 63488 00:24:31.274 }, 00:24:31.274 { 00:24:31.274 "name": "BaseBdev4", 00:24:31.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.274 "is_configured": false, 00:24:31.274 "data_offset": 0, 00:24:31.274 "data_size": 0 00:24:31.274 } 00:24:31.274 ] 00:24:31.274 }' 00:24:31.274 11:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.274 11:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.841 11:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:32.109 [2024-07-13 11:36:06.737265] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:32.109 [2024-07-13 11:36:06.737647] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:24:32.109 BaseBdev4 00:24:32.109 [2024-07-13 11:36:06.738110] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:32.109 [2024-07-13 11:36:06.738308] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:24:32.109 [2024-07-13 11:36:06.738737] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:24:32.109 [2024-07-13 11:36:06.742999] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:24:32.109 [2024-07-13 11:36:06.743686] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:32.109 11:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:32.109 11:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:32.109 11:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:32.109 11:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:32.109 11:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:32.109 11:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:32.109 11:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:32.375 11:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:32.634 [ 00:24:32.634 { 00:24:32.634 "name": "BaseBdev4", 00:24:32.634 "aliases": [ 00:24:32.634 "744c8b1a-a342-4b4a-9d73-bd1df272844b" 00:24:32.634 ], 00:24:32.634 "product_name": "Malloc disk", 00:24:32.634 "block_size": 512, 00:24:32.634 "num_blocks": 65536, 00:24:32.634 "uuid": 
"744c8b1a-a342-4b4a-9d73-bd1df272844b", 00:24:32.634 "assigned_rate_limits": { 00:24:32.634 "rw_ios_per_sec": 0, 00:24:32.634 "rw_mbytes_per_sec": 0, 00:24:32.634 "r_mbytes_per_sec": 0, 00:24:32.634 "w_mbytes_per_sec": 0 00:24:32.634 }, 00:24:32.634 "claimed": true, 00:24:32.634 "claim_type": "exclusive_write", 00:24:32.634 "zoned": false, 00:24:32.634 "supported_io_types": { 00:24:32.634 "read": true, 00:24:32.634 "write": true, 00:24:32.634 "unmap": true, 00:24:32.634 "flush": true, 00:24:32.634 "reset": true, 00:24:32.634 "nvme_admin": false, 00:24:32.634 "nvme_io": false, 00:24:32.634 "nvme_io_md": false, 00:24:32.634 "write_zeroes": true, 00:24:32.634 "zcopy": true, 00:24:32.634 "get_zone_info": false, 00:24:32.634 "zone_management": false, 00:24:32.634 "zone_append": false, 00:24:32.634 "compare": false, 00:24:32.634 "compare_and_write": false, 00:24:32.634 "abort": true, 00:24:32.634 "seek_hole": false, 00:24:32.634 "seek_data": false, 00:24:32.634 "copy": true, 00:24:32.634 "nvme_iov_md": false 00:24:32.634 }, 00:24:32.634 "memory_domains": [ 00:24:32.634 { 00:24:32.634 "dma_device_id": "system", 00:24:32.634 "dma_device_type": 1 00:24:32.634 }, 00:24:32.634 { 00:24:32.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:32.634 "dma_device_type": 2 00:24:32.634 } 00:24:32.634 ], 00:24:32.634 "driver_specific": {} 00:24:32.634 } 00:24:32.634 ] 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.634 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.893 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:32.893 "name": "Existed_Raid", 00:24:32.893 "uuid": "bcd63ee0-bd4e-422f-b6e6-25f582ca9610", 00:24:32.893 "strip_size_kb": 64, 00:24:32.893 "state": "online", 00:24:32.893 "raid_level": "concat", 00:24:32.893 "superblock": true, 00:24:32.893 
"num_base_bdevs": 4, 00:24:32.893 "num_base_bdevs_discovered": 4, 00:24:32.893 "num_base_bdevs_operational": 4, 00:24:32.893 "base_bdevs_list": [ 00:24:32.893 { 00:24:32.893 "name": "BaseBdev1", 00:24:32.893 "uuid": "057de2e4-7f82-4df9-a7fc-0d296723150a", 00:24:32.893 "is_configured": true, 00:24:32.893 "data_offset": 2048, 00:24:32.893 "data_size": 63488 00:24:32.893 }, 00:24:32.893 { 00:24:32.893 "name": "BaseBdev2", 00:24:32.893 "uuid": "804025e9-2cf8-4f83-83eb-93b74d10b3cc", 00:24:32.893 "is_configured": true, 00:24:32.893 "data_offset": 2048, 00:24:32.893 "data_size": 63488 00:24:32.893 }, 00:24:32.893 { 00:24:32.893 "name": "BaseBdev3", 00:24:32.893 "uuid": "870d7961-e405-49a3-b6d3-6f69a10148a6", 00:24:32.893 "is_configured": true, 00:24:32.893 "data_offset": 2048, 00:24:32.893 "data_size": 63488 00:24:32.893 }, 00:24:32.893 { 00:24:32.893 "name": "BaseBdev4", 00:24:32.893 "uuid": "744c8b1a-a342-4b4a-9d73-bd1df272844b", 00:24:32.893 "is_configured": true, 00:24:32.893 "data_offset": 2048, 00:24:32.893 "data_size": 63488 00:24:32.893 } 00:24:32.893 ] 00:24:32.893 }' 00:24:32.893 11:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:32.893 11:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.461 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:33.461 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:33.461 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:33.461 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:33.461 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:33.461 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:33.461 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:33.461 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:33.720 [2024-07-13 11:36:08.271923] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:33.720 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:33.720 "name": "Existed_Raid", 00:24:33.720 "aliases": [ 00:24:33.720 "bcd63ee0-bd4e-422f-b6e6-25f582ca9610" 00:24:33.720 ], 00:24:33.720 "product_name": "Raid Volume", 00:24:33.720 "block_size": 512, 00:24:33.720 "num_blocks": 253952, 00:24:33.720 "uuid": "bcd63ee0-bd4e-422f-b6e6-25f582ca9610", 00:24:33.720 "assigned_rate_limits": { 00:24:33.720 "rw_ios_per_sec": 0, 00:24:33.720 "rw_mbytes_per_sec": 0, 00:24:33.720 "r_mbytes_per_sec": 0, 00:24:33.720 "w_mbytes_per_sec": 0 00:24:33.720 }, 00:24:33.720 "claimed": false, 00:24:33.720 "zoned": false, 00:24:33.720 "supported_io_types": { 00:24:33.720 "read": true, 00:24:33.720 "write": true, 00:24:33.720 "unmap": true, 00:24:33.720 "flush": true, 00:24:33.720 "reset": true, 00:24:33.720 "nvme_admin": false, 00:24:33.720 "nvme_io": false, 00:24:33.720 "nvme_io_md": false, 00:24:33.720 "write_zeroes": true, 00:24:33.720 "zcopy": false, 00:24:33.720 "get_zone_info": false, 00:24:33.720 "zone_management": false, 00:24:33.720 "zone_append": false, 00:24:33.720 "compare": false, 
00:24:33.720 "compare_and_write": false, 00:24:33.720 "abort": false, 00:24:33.720 "seek_hole": false, 00:24:33.720 "seek_data": false, 00:24:33.720 "copy": false, 00:24:33.720 "nvme_iov_md": false 00:24:33.720 }, 00:24:33.720 "memory_domains": [ 00:24:33.720 { 00:24:33.720 "dma_device_id": "system", 00:24:33.720 "dma_device_type": 1 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.720 "dma_device_type": 2 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "dma_device_id": "system", 00:24:33.720 "dma_device_type": 1 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.720 "dma_device_type": 2 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "dma_device_id": "system", 00:24:33.720 "dma_device_type": 1 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.720 "dma_device_type": 2 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "dma_device_id": "system", 00:24:33.720 "dma_device_type": 1 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.720 "dma_device_type": 2 00:24:33.720 } 00:24:33.720 ], 00:24:33.720 "driver_specific": { 00:24:33.720 "raid": { 00:24:33.720 "uuid": "bcd63ee0-bd4e-422f-b6e6-25f582ca9610", 00:24:33.720 "strip_size_kb": 64, 00:24:33.720 "state": "online", 00:24:33.720 "raid_level": "concat", 00:24:33.720 "superblock": true, 00:24:33.720 "num_base_bdevs": 4, 00:24:33.720 "num_base_bdevs_discovered": 4, 00:24:33.720 "num_base_bdevs_operational": 4, 00:24:33.720 "base_bdevs_list": [ 00:24:33.720 { 00:24:33.720 "name": "BaseBdev1", 00:24:33.720 "uuid": "057de2e4-7f82-4df9-a7fc-0d296723150a", 00:24:33.720 "is_configured": true, 00:24:33.720 "data_offset": 2048, 00:24:33.720 "data_size": 63488 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "name": "BaseBdev2", 00:24:33.720 "uuid": "804025e9-2cf8-4f83-83eb-93b74d10b3cc", 00:24:33.720 "is_configured": true, 00:24:33.720 "data_offset": 2048, 00:24:33.720 "data_size": 63488 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "name": "BaseBdev3", 00:24:33.720 "uuid": "870d7961-e405-49a3-b6d3-6f69a10148a6", 00:24:33.720 "is_configured": true, 00:24:33.720 "data_offset": 2048, 00:24:33.720 "data_size": 63488 00:24:33.720 }, 00:24:33.720 { 00:24:33.720 "name": "BaseBdev4", 00:24:33.720 "uuid": "744c8b1a-a342-4b4a-9d73-bd1df272844b", 00:24:33.720 "is_configured": true, 00:24:33.720 "data_offset": 2048, 00:24:33.720 "data_size": 63488 00:24:33.720 } 00:24:33.720 ] 00:24:33.720 } 00:24:33.720 } 00:24:33.720 }' 00:24:33.720 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:33.720 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:33.720 BaseBdev2 00:24:33.720 BaseBdev3 00:24:33.720 BaseBdev4' 00:24:33.720 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:33.720 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:33.720 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:33.979 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:33.979 "name": "BaseBdev1", 00:24:33.979 "aliases": [ 00:24:33.979 "057de2e4-7f82-4df9-a7fc-0d296723150a" 00:24:33.979 ], 
00:24:33.979 "product_name": "Malloc disk", 00:24:33.979 "block_size": 512, 00:24:33.979 "num_blocks": 65536, 00:24:33.979 "uuid": "057de2e4-7f82-4df9-a7fc-0d296723150a", 00:24:33.979 "assigned_rate_limits": { 00:24:33.979 "rw_ios_per_sec": 0, 00:24:33.979 "rw_mbytes_per_sec": 0, 00:24:33.979 "r_mbytes_per_sec": 0, 00:24:33.979 "w_mbytes_per_sec": 0 00:24:33.979 }, 00:24:33.979 "claimed": true, 00:24:33.979 "claim_type": "exclusive_write", 00:24:33.979 "zoned": false, 00:24:33.979 "supported_io_types": { 00:24:33.979 "read": true, 00:24:33.979 "write": true, 00:24:33.979 "unmap": true, 00:24:33.979 "flush": true, 00:24:33.979 "reset": true, 00:24:33.979 "nvme_admin": false, 00:24:33.979 "nvme_io": false, 00:24:33.979 "nvme_io_md": false, 00:24:33.979 "write_zeroes": true, 00:24:33.979 "zcopy": true, 00:24:33.979 "get_zone_info": false, 00:24:33.979 "zone_management": false, 00:24:33.979 "zone_append": false, 00:24:33.979 "compare": false, 00:24:33.979 "compare_and_write": false, 00:24:33.979 "abort": true, 00:24:33.979 "seek_hole": false, 00:24:33.979 "seek_data": false, 00:24:33.979 "copy": true, 00:24:33.979 "nvme_iov_md": false 00:24:33.979 }, 00:24:33.979 "memory_domains": [ 00:24:33.979 { 00:24:33.979 "dma_device_id": "system", 00:24:33.979 "dma_device_type": 1 00:24:33.979 }, 00:24:33.979 { 00:24:33.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.979 "dma_device_type": 2 00:24:33.979 } 00:24:33.979 ], 00:24:33.979 "driver_specific": {} 00:24:33.979 }' 00:24:33.979 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:33.979 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:33.979 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:33.979 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:33.979 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.252 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:34.252 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.252 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.252 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:34.252 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.252 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:34.252 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:34.252 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:34.522 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:34.522 11:36:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:34.522 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:34.522 "name": "BaseBdev2", 00:24:34.522 "aliases": [ 00:24:34.522 "804025e9-2cf8-4f83-83eb-93b74d10b3cc" 00:24:34.522 ], 00:24:34.522 "product_name": "Malloc disk", 00:24:34.522 "block_size": 512, 00:24:34.522 "num_blocks": 65536, 00:24:34.522 "uuid": 
"804025e9-2cf8-4f83-83eb-93b74d10b3cc", 00:24:34.522 "assigned_rate_limits": { 00:24:34.522 "rw_ios_per_sec": 0, 00:24:34.522 "rw_mbytes_per_sec": 0, 00:24:34.522 "r_mbytes_per_sec": 0, 00:24:34.522 "w_mbytes_per_sec": 0 00:24:34.522 }, 00:24:34.522 "claimed": true, 00:24:34.522 "claim_type": "exclusive_write", 00:24:34.522 "zoned": false, 00:24:34.522 "supported_io_types": { 00:24:34.522 "read": true, 00:24:34.522 "write": true, 00:24:34.523 "unmap": true, 00:24:34.523 "flush": true, 00:24:34.523 "reset": true, 00:24:34.523 "nvme_admin": false, 00:24:34.523 "nvme_io": false, 00:24:34.523 "nvme_io_md": false, 00:24:34.523 "write_zeroes": true, 00:24:34.523 "zcopy": true, 00:24:34.523 "get_zone_info": false, 00:24:34.523 "zone_management": false, 00:24:34.523 "zone_append": false, 00:24:34.523 "compare": false, 00:24:34.523 "compare_and_write": false, 00:24:34.523 "abort": true, 00:24:34.523 "seek_hole": false, 00:24:34.523 "seek_data": false, 00:24:34.523 "copy": true, 00:24:34.523 "nvme_iov_md": false 00:24:34.523 }, 00:24:34.523 "memory_domains": [ 00:24:34.523 { 00:24:34.523 "dma_device_id": "system", 00:24:34.523 "dma_device_type": 1 00:24:34.523 }, 00:24:34.523 { 00:24:34.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.523 "dma_device_type": 2 00:24:34.523 } 00:24:34.523 ], 00:24:34.523 "driver_specific": {} 00:24:34.523 }' 00:24:34.523 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.781 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:34.781 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:34.781 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.781 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:34.781 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:34.781 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:34.781 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:35.040 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:35.040 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:35.040 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:35.040 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:35.040 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:35.040 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:35.040 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:35.299 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:35.299 "name": "BaseBdev3", 00:24:35.299 "aliases": [ 00:24:35.299 "870d7961-e405-49a3-b6d3-6f69a10148a6" 00:24:35.299 ], 00:24:35.299 "product_name": "Malloc disk", 00:24:35.299 "block_size": 512, 00:24:35.299 "num_blocks": 65536, 00:24:35.299 "uuid": "870d7961-e405-49a3-b6d3-6f69a10148a6", 00:24:35.299 "assigned_rate_limits": { 00:24:35.299 "rw_ios_per_sec": 0, 00:24:35.299 "rw_mbytes_per_sec": 0, 
00:24:35.299 "r_mbytes_per_sec": 0, 00:24:35.299 "w_mbytes_per_sec": 0 00:24:35.299 }, 00:24:35.299 "claimed": true, 00:24:35.299 "claim_type": "exclusive_write", 00:24:35.299 "zoned": false, 00:24:35.299 "supported_io_types": { 00:24:35.299 "read": true, 00:24:35.299 "write": true, 00:24:35.299 "unmap": true, 00:24:35.299 "flush": true, 00:24:35.299 "reset": true, 00:24:35.299 "nvme_admin": false, 00:24:35.299 "nvme_io": false, 00:24:35.299 "nvme_io_md": false, 00:24:35.299 "write_zeroes": true, 00:24:35.299 "zcopy": true, 00:24:35.299 "get_zone_info": false, 00:24:35.299 "zone_management": false, 00:24:35.299 "zone_append": false, 00:24:35.299 "compare": false, 00:24:35.299 "compare_and_write": false, 00:24:35.299 "abort": true, 00:24:35.299 "seek_hole": false, 00:24:35.299 "seek_data": false, 00:24:35.299 "copy": true, 00:24:35.299 "nvme_iov_md": false 00:24:35.299 }, 00:24:35.299 "memory_domains": [ 00:24:35.299 { 00:24:35.299 "dma_device_id": "system", 00:24:35.299 "dma_device_type": 1 00:24:35.299 }, 00:24:35.299 { 00:24:35.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.299 "dma_device_type": 2 00:24:35.299 } 00:24:35.299 ], 00:24:35.299 "driver_specific": {} 00:24:35.299 }' 00:24:35.299 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:35.299 11:36:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:35.299 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:35.299 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:35.557 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:35.557 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:35.557 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:35.557 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:35.557 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:35.557 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:35.557 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:35.816 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:35.816 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:35.816 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:35.816 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:36.074 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:36.074 "name": "BaseBdev4", 00:24:36.074 "aliases": [ 00:24:36.074 "744c8b1a-a342-4b4a-9d73-bd1df272844b" 00:24:36.074 ], 00:24:36.074 "product_name": "Malloc disk", 00:24:36.074 "block_size": 512, 00:24:36.074 "num_blocks": 65536, 00:24:36.074 "uuid": "744c8b1a-a342-4b4a-9d73-bd1df272844b", 00:24:36.074 "assigned_rate_limits": { 00:24:36.074 "rw_ios_per_sec": 0, 00:24:36.074 "rw_mbytes_per_sec": 0, 00:24:36.074 "r_mbytes_per_sec": 0, 00:24:36.074 "w_mbytes_per_sec": 0 00:24:36.074 }, 00:24:36.074 "claimed": true, 00:24:36.074 "claim_type": 
"exclusive_write", 00:24:36.074 "zoned": false, 00:24:36.074 "supported_io_types": { 00:24:36.074 "read": true, 00:24:36.074 "write": true, 00:24:36.074 "unmap": true, 00:24:36.074 "flush": true, 00:24:36.074 "reset": true, 00:24:36.074 "nvme_admin": false, 00:24:36.074 "nvme_io": false, 00:24:36.074 "nvme_io_md": false, 00:24:36.074 "write_zeroes": true, 00:24:36.074 "zcopy": true, 00:24:36.074 "get_zone_info": false, 00:24:36.074 "zone_management": false, 00:24:36.075 "zone_append": false, 00:24:36.075 "compare": false, 00:24:36.075 "compare_and_write": false, 00:24:36.075 "abort": true, 00:24:36.075 "seek_hole": false, 00:24:36.075 "seek_data": false, 00:24:36.075 "copy": true, 00:24:36.075 "nvme_iov_md": false 00:24:36.075 }, 00:24:36.075 "memory_domains": [ 00:24:36.075 { 00:24:36.075 "dma_device_id": "system", 00:24:36.075 "dma_device_type": 1 00:24:36.075 }, 00:24:36.075 { 00:24:36.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.075 "dma_device_type": 2 00:24:36.075 } 00:24:36.075 ], 00:24:36.075 "driver_specific": {} 00:24:36.075 }' 00:24:36.075 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.075 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.075 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:36.075 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.075 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.334 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:36.334 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:36.334 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:36.334 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:36.334 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:36.334 11:36:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:36.334 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:36.334 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:36.592 [2024-07-13 11:36:11.276308] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:36.592 [2024-07-13 11:36:11.276457] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:36.592 [2024-07-13 11:36:11.276597] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 
64 3 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.851 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:37.109 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.109 "name": "Existed_Raid", 00:24:37.109 "uuid": "bcd63ee0-bd4e-422f-b6e6-25f582ca9610", 00:24:37.109 "strip_size_kb": 64, 00:24:37.109 "state": "offline", 00:24:37.109 "raid_level": "concat", 00:24:37.109 "superblock": true, 00:24:37.109 "num_base_bdevs": 4, 00:24:37.109 "num_base_bdevs_discovered": 3, 00:24:37.109 "num_base_bdevs_operational": 3, 00:24:37.109 "base_bdevs_list": [ 00:24:37.109 { 00:24:37.109 "name": null, 00:24:37.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:37.109 "is_configured": false, 00:24:37.109 "data_offset": 2048, 00:24:37.109 "data_size": 63488 00:24:37.109 }, 00:24:37.109 { 00:24:37.109 "name": "BaseBdev2", 00:24:37.109 "uuid": "804025e9-2cf8-4f83-83eb-93b74d10b3cc", 00:24:37.109 "is_configured": true, 00:24:37.109 "data_offset": 2048, 00:24:37.109 "data_size": 63488 00:24:37.109 }, 00:24:37.109 { 00:24:37.109 "name": "BaseBdev3", 00:24:37.109 "uuid": "870d7961-e405-49a3-b6d3-6f69a10148a6", 00:24:37.110 "is_configured": true, 00:24:37.110 "data_offset": 2048, 00:24:37.110 "data_size": 63488 00:24:37.110 }, 00:24:37.110 { 00:24:37.110 "name": "BaseBdev4", 00:24:37.110 "uuid": "744c8b1a-a342-4b4a-9d73-bd1df272844b", 00:24:37.110 "is_configured": true, 00:24:37.110 "data_offset": 2048, 00:24:37.110 "data_size": 63488 00:24:37.110 } 00:24:37.110 ] 00:24:37.110 }' 00:24:37.110 11:36:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.110 11:36:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:37.676 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:37.676 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:37.676 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.676 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:37.676 11:36:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:37.676 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:37.676 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:37.934 [2024-07-13 11:36:12.583296] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:37.934 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:37.934 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:37.934 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.934 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:38.500 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:38.500 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:38.500 11:36:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:38.500 [2024-07-13 11:36:13.185264] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:38.758 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:38.758 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:38.758 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.758 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:38.758 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:38.758 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:38.758 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:39.015 [2024-07-13 11:36:13.699251] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:39.015 [2024-07-13 11:36:13.699317] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:24:39.274 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:39.274 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:39.274 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.274 11:36:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:39.274 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:39.274 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:39.274 11:36:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:39.274 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:39.274 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:39.274 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:39.840 BaseBdev2 00:24:39.840 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:39.840 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:39.840 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:39.840 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:39.840 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:39.840 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:39.840 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:39.840 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:40.097 [ 00:24:40.097 { 00:24:40.097 "name": "BaseBdev2", 00:24:40.097 "aliases": [ 00:24:40.097 "909d3441-fb36-45e6-a1b2-f4c11be8dec1" 00:24:40.097 ], 00:24:40.097 "product_name": "Malloc disk", 00:24:40.097 "block_size": 512, 00:24:40.097 "num_blocks": 65536, 00:24:40.097 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:40.097 "assigned_rate_limits": { 00:24:40.097 "rw_ios_per_sec": 0, 00:24:40.097 "rw_mbytes_per_sec": 0, 00:24:40.097 "r_mbytes_per_sec": 0, 00:24:40.097 "w_mbytes_per_sec": 0 00:24:40.097 }, 00:24:40.097 "claimed": false, 00:24:40.097 "zoned": false, 00:24:40.097 "supported_io_types": { 00:24:40.097 "read": true, 00:24:40.097 "write": true, 00:24:40.097 "unmap": true, 00:24:40.097 "flush": true, 00:24:40.097 "reset": true, 00:24:40.097 "nvme_admin": false, 00:24:40.097 "nvme_io": false, 00:24:40.097 "nvme_io_md": false, 00:24:40.097 "write_zeroes": true, 00:24:40.097 "zcopy": true, 00:24:40.097 "get_zone_info": false, 00:24:40.097 "zone_management": false, 00:24:40.097 "zone_append": false, 00:24:40.097 "compare": false, 00:24:40.097 "compare_and_write": false, 00:24:40.097 "abort": true, 00:24:40.097 "seek_hole": false, 00:24:40.097 "seek_data": false, 00:24:40.097 "copy": true, 00:24:40.097 "nvme_iov_md": false 00:24:40.097 }, 00:24:40.098 "memory_domains": [ 00:24:40.098 { 00:24:40.098 "dma_device_id": "system", 00:24:40.098 "dma_device_type": 1 00:24:40.098 }, 00:24:40.098 { 00:24:40.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.098 "dma_device_type": 2 00:24:40.098 } 00:24:40.098 ], 00:24:40.098 "driver_specific": {} 00:24:40.098 } 00:24:40.098 ] 00:24:40.098 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:40.098 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:40.098 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
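The waitforbdev cycle traced in this loop boils down to the short sequence below. The 32/512 malloc parameters, the 2000 bdev_timeout and the RPC socket are taken from the trace; the for-loop and the rpc_py shorthand are only an illustrative sketch of what the test repeats for BaseBdev2 through BaseBdev4.
rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
for name in BaseBdev2 BaseBdev3 BaseBdev4; do
    # 32 MB malloc bdev with 512-byte blocks (65536 blocks, matching num_blocks reported above)
    rpc_py bdev_malloc_create 32 512 -b "$name"
    # Wait for pending examine callbacks, then confirm the bdev is visible
    rpc_py bdev_wait_for_examine
    rpc_py bdev_get_bdevs -b "$name" -t 2000
done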
00:24:40.098 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:40.355 BaseBdev3 00:24:40.355 11:36:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:40.355 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:40.355 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:40.355 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:40.355 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:40.355 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:40.355 11:36:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:40.614 11:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:40.614 [ 00:24:40.614 { 00:24:40.614 "name": "BaseBdev3", 00:24:40.614 "aliases": [ 00:24:40.614 "c609cdcd-3fbd-4292-8bf6-c86ade00f818" 00:24:40.614 ], 00:24:40.614 "product_name": "Malloc disk", 00:24:40.614 "block_size": 512, 00:24:40.614 "num_blocks": 65536, 00:24:40.614 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:40.614 "assigned_rate_limits": { 00:24:40.614 "rw_ios_per_sec": 0, 00:24:40.614 "rw_mbytes_per_sec": 0, 00:24:40.614 "r_mbytes_per_sec": 0, 00:24:40.614 "w_mbytes_per_sec": 0 00:24:40.614 }, 00:24:40.614 "claimed": false, 00:24:40.614 "zoned": false, 00:24:40.614 "supported_io_types": { 00:24:40.614 "read": true, 00:24:40.614 "write": true, 00:24:40.614 "unmap": true, 00:24:40.614 "flush": true, 00:24:40.614 "reset": true, 00:24:40.614 "nvme_admin": false, 00:24:40.614 "nvme_io": false, 00:24:40.614 "nvme_io_md": false, 00:24:40.614 "write_zeroes": true, 00:24:40.614 "zcopy": true, 00:24:40.614 "get_zone_info": false, 00:24:40.614 "zone_management": false, 00:24:40.614 "zone_append": false, 00:24:40.614 "compare": false, 00:24:40.614 "compare_and_write": false, 00:24:40.614 "abort": true, 00:24:40.614 "seek_hole": false, 00:24:40.614 "seek_data": false, 00:24:40.614 "copy": true, 00:24:40.614 "nvme_iov_md": false 00:24:40.614 }, 00:24:40.614 "memory_domains": [ 00:24:40.614 { 00:24:40.614 "dma_device_id": "system", 00:24:40.614 "dma_device_type": 1 00:24:40.614 }, 00:24:40.614 { 00:24:40.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.614 "dma_device_type": 2 00:24:40.614 } 00:24:40.614 ], 00:24:40.614 "driver_specific": {} 00:24:40.614 } 00:24:40.614 ] 00:24:40.614 11:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:40.614 11:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:40.614 11:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:40.614 11:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:40.872 BaseBdev4 00:24:40.873 11:36:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev4 00:24:40.873 11:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:40.873 11:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:40.873 11:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:40.873 11:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:40.873 11:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:40.873 11:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:41.131 11:36:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:41.389 [ 00:24:41.389 { 00:24:41.389 "name": "BaseBdev4", 00:24:41.389 "aliases": [ 00:24:41.389 "2bdeee6f-4323-4af8-ac62-595ac3c44ec9" 00:24:41.389 ], 00:24:41.389 "product_name": "Malloc disk", 00:24:41.390 "block_size": 512, 00:24:41.390 "num_blocks": 65536, 00:24:41.390 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:41.390 "assigned_rate_limits": { 00:24:41.390 "rw_ios_per_sec": 0, 00:24:41.390 "rw_mbytes_per_sec": 0, 00:24:41.390 "r_mbytes_per_sec": 0, 00:24:41.390 "w_mbytes_per_sec": 0 00:24:41.390 }, 00:24:41.390 "claimed": false, 00:24:41.390 "zoned": false, 00:24:41.390 "supported_io_types": { 00:24:41.390 "read": true, 00:24:41.390 "write": true, 00:24:41.390 "unmap": true, 00:24:41.390 "flush": true, 00:24:41.390 "reset": true, 00:24:41.390 "nvme_admin": false, 00:24:41.390 "nvme_io": false, 00:24:41.390 "nvme_io_md": false, 00:24:41.390 "write_zeroes": true, 00:24:41.390 "zcopy": true, 00:24:41.390 "get_zone_info": false, 00:24:41.390 "zone_management": false, 00:24:41.390 "zone_append": false, 00:24:41.390 "compare": false, 00:24:41.390 "compare_and_write": false, 00:24:41.390 "abort": true, 00:24:41.390 "seek_hole": false, 00:24:41.390 "seek_data": false, 00:24:41.390 "copy": true, 00:24:41.390 "nvme_iov_md": false 00:24:41.390 }, 00:24:41.390 "memory_domains": [ 00:24:41.390 { 00:24:41.390 "dma_device_id": "system", 00:24:41.390 "dma_device_type": 1 00:24:41.390 }, 00:24:41.390 { 00:24:41.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:41.390 "dma_device_type": 2 00:24:41.390 } 00:24:41.390 ], 00:24:41.390 "driver_specific": {} 00:24:41.390 } 00:24:41.390 ] 00:24:41.390 11:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:41.390 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:41.390 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:41.390 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:41.648 [2024-07-13 11:36:16.185387] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:41.648 [2024-07-13 11:36:16.185451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:41.648 [2024-07-13 11:36:16.185472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:24:41.648 [2024-07-13 11:36:16.187165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:41.648 [2024-07-13 11:36:16.187222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:41.648 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:41.648 "name": "Existed_Raid", 00:24:41.648 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:41.648 "strip_size_kb": 64, 00:24:41.648 "state": "configuring", 00:24:41.648 "raid_level": "concat", 00:24:41.648 "superblock": true, 00:24:41.648 "num_base_bdevs": 4, 00:24:41.648 "num_base_bdevs_discovered": 3, 00:24:41.648 "num_base_bdevs_operational": 4, 00:24:41.648 "base_bdevs_list": [ 00:24:41.648 { 00:24:41.648 "name": "BaseBdev1", 00:24:41.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:41.649 "is_configured": false, 00:24:41.649 "data_offset": 0, 00:24:41.649 "data_size": 0 00:24:41.649 }, 00:24:41.649 { 00:24:41.649 "name": "BaseBdev2", 00:24:41.649 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:41.649 "is_configured": true, 00:24:41.649 "data_offset": 2048, 00:24:41.649 "data_size": 63488 00:24:41.649 }, 00:24:41.649 { 00:24:41.649 "name": "BaseBdev3", 00:24:41.649 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:41.649 "is_configured": true, 00:24:41.649 "data_offset": 2048, 00:24:41.649 "data_size": 63488 00:24:41.649 }, 00:24:41.649 { 00:24:41.649 "name": "BaseBdev4", 00:24:41.649 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:41.649 "is_configured": true, 00:24:41.649 "data_offset": 2048, 00:24:41.649 "data_size": 63488 00:24:41.649 } 00:24:41.649 ] 00:24:41.649 }' 00:24:41.649 11:36:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:41.649 11:36:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:42.583 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:42.842 [2024-07-13 11:36:17.345816] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.842 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:43.100 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:43.100 "name": "Existed_Raid", 00:24:43.100 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:43.100 "strip_size_kb": 64, 00:24:43.100 "state": "configuring", 00:24:43.100 "raid_level": "concat", 00:24:43.100 "superblock": true, 00:24:43.100 "num_base_bdevs": 4, 00:24:43.100 "num_base_bdevs_discovered": 2, 00:24:43.100 "num_base_bdevs_operational": 4, 00:24:43.100 "base_bdevs_list": [ 00:24:43.100 { 00:24:43.100 "name": "BaseBdev1", 00:24:43.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:43.100 "is_configured": false, 00:24:43.100 "data_offset": 0, 00:24:43.100 "data_size": 0 00:24:43.100 }, 00:24:43.100 { 00:24:43.100 "name": null, 00:24:43.100 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:43.100 "is_configured": false, 00:24:43.100 "data_offset": 2048, 00:24:43.100 "data_size": 63488 00:24:43.100 }, 00:24:43.100 { 00:24:43.100 "name": "BaseBdev3", 00:24:43.100 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:43.100 "is_configured": true, 00:24:43.100 "data_offset": 2048, 00:24:43.100 "data_size": 63488 00:24:43.100 }, 00:24:43.100 { 00:24:43.100 "name": "BaseBdev4", 00:24:43.100 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:43.100 "is_configured": true, 00:24:43.100 "data_offset": 2048, 00:24:43.100 "data_size": 63488 00:24:43.100 } 00:24:43.100 ] 00:24:43.100 }' 00:24:43.100 11:36:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:43.100 11:36:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.666 11:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:24:43.666 11:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:43.925 11:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:43.925 11:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:43.925 [2024-07-13 11:36:18.629091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:43.925 BaseBdev1 00:24:43.925 11:36:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:43.925 11:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:43.925 11:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:43.925 11:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:43.925 11:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:43.925 11:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:43.925 11:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:44.183 11:36:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:44.441 [ 00:24:44.441 { 00:24:44.441 "name": "BaseBdev1", 00:24:44.441 "aliases": [ 00:24:44.441 "f878a83b-693c-4859-baf3-db07c561e9b1" 00:24:44.441 ], 00:24:44.441 "product_name": "Malloc disk", 00:24:44.441 "block_size": 512, 00:24:44.441 "num_blocks": 65536, 00:24:44.441 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:44.441 "assigned_rate_limits": { 00:24:44.441 "rw_ios_per_sec": 0, 00:24:44.441 "rw_mbytes_per_sec": 0, 00:24:44.441 "r_mbytes_per_sec": 0, 00:24:44.441 "w_mbytes_per_sec": 0 00:24:44.441 }, 00:24:44.441 "claimed": true, 00:24:44.441 "claim_type": "exclusive_write", 00:24:44.441 "zoned": false, 00:24:44.441 "supported_io_types": { 00:24:44.441 "read": true, 00:24:44.441 "write": true, 00:24:44.441 "unmap": true, 00:24:44.441 "flush": true, 00:24:44.441 "reset": true, 00:24:44.441 "nvme_admin": false, 00:24:44.441 "nvme_io": false, 00:24:44.441 "nvme_io_md": false, 00:24:44.441 "write_zeroes": true, 00:24:44.441 "zcopy": true, 00:24:44.441 "get_zone_info": false, 00:24:44.441 "zone_management": false, 00:24:44.441 "zone_append": false, 00:24:44.441 "compare": false, 00:24:44.441 "compare_and_write": false, 00:24:44.441 "abort": true, 00:24:44.441 "seek_hole": false, 00:24:44.441 "seek_data": false, 00:24:44.441 "copy": true, 00:24:44.441 "nvme_iov_md": false 00:24:44.441 }, 00:24:44.441 "memory_domains": [ 00:24:44.441 { 00:24:44.441 "dma_device_id": "system", 00:24:44.441 "dma_device_type": 1 00:24:44.441 }, 00:24:44.441 { 00:24:44.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.441 "dma_device_type": 2 00:24:44.441 } 00:24:44.441 ], 00:24:44.441 "driver_specific": {} 00:24:44.441 } 00:24:44.441 ] 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.441 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.699 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:44.699 "name": "Existed_Raid", 00:24:44.699 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:44.699 "strip_size_kb": 64, 00:24:44.699 "state": "configuring", 00:24:44.699 "raid_level": "concat", 00:24:44.699 "superblock": true, 00:24:44.699 "num_base_bdevs": 4, 00:24:44.699 "num_base_bdevs_discovered": 3, 00:24:44.699 "num_base_bdevs_operational": 4, 00:24:44.699 "base_bdevs_list": [ 00:24:44.699 { 00:24:44.699 "name": "BaseBdev1", 00:24:44.699 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:44.699 "is_configured": true, 00:24:44.699 "data_offset": 2048, 00:24:44.699 "data_size": 63488 00:24:44.699 }, 00:24:44.699 { 00:24:44.699 "name": null, 00:24:44.699 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:44.699 "is_configured": false, 00:24:44.699 "data_offset": 2048, 00:24:44.699 "data_size": 63488 00:24:44.699 }, 00:24:44.699 { 00:24:44.699 "name": "BaseBdev3", 00:24:44.699 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:44.699 "is_configured": true, 00:24:44.699 "data_offset": 2048, 00:24:44.699 "data_size": 63488 00:24:44.699 }, 00:24:44.699 { 00:24:44.699 "name": "BaseBdev4", 00:24:44.699 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:44.699 "is_configured": true, 00:24:44.699 "data_offset": 2048, 00:24:44.699 "data_size": 63488 00:24:44.699 } 00:24:44.699 ] 00:24:44.699 }' 00:24:44.699 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:44.699 11:36:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:45.264 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.264 11:36:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:45.521 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:45.521 11:36:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:45.778 [2024-07-13 11:36:20.353433] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.778 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:46.036 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:46.036 "name": "Existed_Raid", 00:24:46.036 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:46.036 "strip_size_kb": 64, 00:24:46.036 "state": "configuring", 00:24:46.036 "raid_level": "concat", 00:24:46.036 "superblock": true, 00:24:46.036 "num_base_bdevs": 4, 00:24:46.036 "num_base_bdevs_discovered": 2, 00:24:46.036 "num_base_bdevs_operational": 4, 00:24:46.036 "base_bdevs_list": [ 00:24:46.036 { 00:24:46.036 "name": "BaseBdev1", 00:24:46.036 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:46.036 "is_configured": true, 00:24:46.036 "data_offset": 2048, 00:24:46.036 "data_size": 63488 00:24:46.036 }, 00:24:46.036 { 00:24:46.036 "name": null, 00:24:46.036 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:46.036 "is_configured": false, 00:24:46.036 "data_offset": 2048, 00:24:46.036 "data_size": 63488 00:24:46.036 }, 00:24:46.036 { 00:24:46.036 "name": null, 00:24:46.036 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:46.036 "is_configured": false, 00:24:46.036 "data_offset": 2048, 00:24:46.036 "data_size": 63488 00:24:46.036 }, 00:24:46.036 { 00:24:46.036 "name": "BaseBdev4", 00:24:46.036 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:46.036 "is_configured": true, 00:24:46.036 "data_offset": 2048, 00:24:46.036 "data_size": 63488 00:24:46.036 } 00:24:46.036 ] 00:24:46.036 }' 00:24:46.036 11:36:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:46.036 11:36:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:46.602 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.602 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:46.860 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:46.860 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:47.118 [2024-07-13 11:36:21.721710] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.118 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.377 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:47.377 "name": "Existed_Raid", 00:24:47.377 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:47.377 "strip_size_kb": 64, 00:24:47.377 "state": "configuring", 00:24:47.377 "raid_level": "concat", 00:24:47.377 "superblock": true, 00:24:47.377 "num_base_bdevs": 4, 00:24:47.377 "num_base_bdevs_discovered": 3, 00:24:47.377 "num_base_bdevs_operational": 4, 00:24:47.377 "base_bdevs_list": [ 00:24:47.377 { 00:24:47.377 "name": "BaseBdev1", 00:24:47.377 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:47.377 "is_configured": true, 00:24:47.377 "data_offset": 2048, 00:24:47.377 "data_size": 63488 00:24:47.377 }, 00:24:47.377 { 00:24:47.377 "name": null, 00:24:47.377 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:47.377 "is_configured": false, 00:24:47.377 "data_offset": 2048, 00:24:47.377 "data_size": 63488 00:24:47.377 }, 00:24:47.377 { 00:24:47.377 "name": "BaseBdev3", 00:24:47.377 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:47.377 "is_configured": true, 00:24:47.377 "data_offset": 2048, 00:24:47.377 "data_size": 63488 00:24:47.377 }, 00:24:47.377 { 00:24:47.377 "name": "BaseBdev4", 00:24:47.377 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:47.377 "is_configured": true, 00:24:47.377 "data_offset": 2048, 
00:24:47.377 "data_size": 63488 00:24:47.377 } 00:24:47.377 ] 00:24:47.377 }' 00:24:47.377 11:36:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:47.377 11:36:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:47.944 11:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.944 11:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:48.511 11:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:48.511 11:36:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:48.511 [2024-07-13 11:36:23.175434] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.511 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:49.077 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:49.077 "name": "Existed_Raid", 00:24:49.077 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:49.077 "strip_size_kb": 64, 00:24:49.077 "state": "configuring", 00:24:49.077 "raid_level": "concat", 00:24:49.077 "superblock": true, 00:24:49.077 "num_base_bdevs": 4, 00:24:49.077 "num_base_bdevs_discovered": 2, 00:24:49.077 "num_base_bdevs_operational": 4, 00:24:49.077 "base_bdevs_list": [ 00:24:49.077 { 00:24:49.077 "name": null, 00:24:49.077 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:49.077 "is_configured": false, 00:24:49.077 "data_offset": 2048, 00:24:49.077 "data_size": 63488 00:24:49.077 }, 00:24:49.077 { 00:24:49.077 "name": null, 00:24:49.077 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:49.077 "is_configured": false, 00:24:49.077 "data_offset": 2048, 00:24:49.077 "data_size": 63488 00:24:49.077 }, 00:24:49.077 { 00:24:49.077 "name": "BaseBdev3", 00:24:49.077 "uuid": 
"c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:49.077 "is_configured": true, 00:24:49.077 "data_offset": 2048, 00:24:49.077 "data_size": 63488 00:24:49.077 }, 00:24:49.077 { 00:24:49.078 "name": "BaseBdev4", 00:24:49.078 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:49.078 "is_configured": true, 00:24:49.078 "data_offset": 2048, 00:24:49.078 "data_size": 63488 00:24:49.078 } 00:24:49.078 ] 00:24:49.078 }' 00:24:49.078 11:36:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:49.078 11:36:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:49.644 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.644 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:49.902 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:49.902 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:50.160 [2024-07-13 11:36:24.710638] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.160 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.419 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:50.419 "name": "Existed_Raid", 00:24:50.419 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:50.419 "strip_size_kb": 64, 00:24:50.419 "state": "configuring", 00:24:50.419 "raid_level": "concat", 00:24:50.419 "superblock": true, 00:24:50.419 "num_base_bdevs": 4, 00:24:50.419 "num_base_bdevs_discovered": 3, 00:24:50.419 "num_base_bdevs_operational": 4, 00:24:50.419 "base_bdevs_list": [ 00:24:50.419 { 00:24:50.419 "name": null, 00:24:50.419 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:50.419 "is_configured": false, 
00:24:50.419 "data_offset": 2048, 00:24:50.419 "data_size": 63488 00:24:50.419 }, 00:24:50.419 { 00:24:50.419 "name": "BaseBdev2", 00:24:50.419 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:50.419 "is_configured": true, 00:24:50.419 "data_offset": 2048, 00:24:50.419 "data_size": 63488 00:24:50.419 }, 00:24:50.419 { 00:24:50.419 "name": "BaseBdev3", 00:24:50.419 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:50.419 "is_configured": true, 00:24:50.419 "data_offset": 2048, 00:24:50.419 "data_size": 63488 00:24:50.419 }, 00:24:50.419 { 00:24:50.419 "name": "BaseBdev4", 00:24:50.419 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:50.419 "is_configured": true, 00:24:50.419 "data_offset": 2048, 00:24:50.419 "data_size": 63488 00:24:50.419 } 00:24:50.419 ] 00:24:50.419 }' 00:24:50.419 11:36:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:50.419 11:36:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:50.985 11:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.985 11:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:51.244 11:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:51.244 11:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.244 11:36:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:51.501 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f878a83b-693c-4859-baf3-db07c561e9b1 00:24:51.760 [2024-07-13 11:36:26.274460] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:51.760 [2024-07-13 11:36:26.274696] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:51.760 [2024-07-13 11:36:26.274711] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:51.760 NewBaseBdev 00:24:51.760 [2024-07-13 11:36:26.275121] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:24:51.760 [2024-07-13 11:36:26.275645] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:51.760 [2024-07-13 11:36:26.275669] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:24:51.760 [2024-07-13 11:36:26.275800] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.760 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:51.760 11:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:51.760 11:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:51.760 11:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:51.760 11:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:51.760 11:36:26 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:51.760 11:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:52.019 [ 00:24:52.019 { 00:24:52.019 "name": "NewBaseBdev", 00:24:52.019 "aliases": [ 00:24:52.019 "f878a83b-693c-4859-baf3-db07c561e9b1" 00:24:52.019 ], 00:24:52.019 "product_name": "Malloc disk", 00:24:52.019 "block_size": 512, 00:24:52.019 "num_blocks": 65536, 00:24:52.019 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:52.019 "assigned_rate_limits": { 00:24:52.019 "rw_ios_per_sec": 0, 00:24:52.019 "rw_mbytes_per_sec": 0, 00:24:52.019 "r_mbytes_per_sec": 0, 00:24:52.019 "w_mbytes_per_sec": 0 00:24:52.019 }, 00:24:52.019 "claimed": true, 00:24:52.019 "claim_type": "exclusive_write", 00:24:52.019 "zoned": false, 00:24:52.019 "supported_io_types": { 00:24:52.019 "read": true, 00:24:52.019 "write": true, 00:24:52.019 "unmap": true, 00:24:52.019 "flush": true, 00:24:52.019 "reset": true, 00:24:52.019 "nvme_admin": false, 00:24:52.019 "nvme_io": false, 00:24:52.019 "nvme_io_md": false, 00:24:52.019 "write_zeroes": true, 00:24:52.019 "zcopy": true, 00:24:52.019 "get_zone_info": false, 00:24:52.019 "zone_management": false, 00:24:52.019 "zone_append": false, 00:24:52.019 "compare": false, 00:24:52.019 "compare_and_write": false, 00:24:52.019 "abort": true, 00:24:52.019 "seek_hole": false, 00:24:52.019 "seek_data": false, 00:24:52.019 "copy": true, 00:24:52.019 "nvme_iov_md": false 00:24:52.019 }, 00:24:52.019 "memory_domains": [ 00:24:52.019 { 00:24:52.019 "dma_device_id": "system", 00:24:52.019 "dma_device_type": 1 00:24:52.019 }, 00:24:52.019 { 00:24:52.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.019 "dma_device_type": 2 00:24:52.019 } 00:24:52.019 ], 00:24:52.019 "driver_specific": {} 00:24:52.019 } 00:24:52.019 ] 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.019 11:36:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:52.277 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:52.277 "name": "Existed_Raid", 00:24:52.277 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:52.277 "strip_size_kb": 64, 00:24:52.277 "state": "online", 00:24:52.277 "raid_level": "concat", 00:24:52.277 "superblock": true, 00:24:52.277 "num_base_bdevs": 4, 00:24:52.277 "num_base_bdevs_discovered": 4, 00:24:52.277 "num_base_bdevs_operational": 4, 00:24:52.277 "base_bdevs_list": [ 00:24:52.277 { 00:24:52.277 "name": "NewBaseBdev", 00:24:52.277 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:52.277 "is_configured": true, 00:24:52.277 "data_offset": 2048, 00:24:52.277 "data_size": 63488 00:24:52.277 }, 00:24:52.277 { 00:24:52.277 "name": "BaseBdev2", 00:24:52.277 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:52.277 "is_configured": true, 00:24:52.277 "data_offset": 2048, 00:24:52.277 "data_size": 63488 00:24:52.277 }, 00:24:52.277 { 00:24:52.277 "name": "BaseBdev3", 00:24:52.277 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:52.277 "is_configured": true, 00:24:52.277 "data_offset": 2048, 00:24:52.277 "data_size": 63488 00:24:52.277 }, 00:24:52.277 { 00:24:52.277 "name": "BaseBdev4", 00:24:52.277 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:52.277 "is_configured": true, 00:24:52.277 "data_offset": 2048, 00:24:52.277 "data_size": 63488 00:24:52.277 } 00:24:52.277 ] 00:24:52.277 }' 00:24:52.277 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:52.277 11:36:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:53.211 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:53.211 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:53.211 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:53.211 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:53.211 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:53.211 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:53.211 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:53.211 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:53.211 [2024-07-13 11:36:27.815162] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:53.211 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:53.211 "name": "Existed_Raid", 00:24:53.211 "aliases": [ 00:24:53.211 "64bf8882-0041-4605-ae13-7cbb20ef718e" 00:24:53.211 ], 00:24:53.211 "product_name": "Raid Volume", 00:24:53.211 "block_size": 512, 00:24:53.211 "num_blocks": 253952, 00:24:53.211 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:53.211 "assigned_rate_limits": { 00:24:53.211 "rw_ios_per_sec": 0, 00:24:53.211 "rw_mbytes_per_sec": 0, 00:24:53.211 "r_mbytes_per_sec": 0, 00:24:53.211 "w_mbytes_per_sec": 0 00:24:53.211 }, 
00:24:53.211 "claimed": false, 00:24:53.211 "zoned": false, 00:24:53.211 "supported_io_types": { 00:24:53.211 "read": true, 00:24:53.211 "write": true, 00:24:53.211 "unmap": true, 00:24:53.211 "flush": true, 00:24:53.211 "reset": true, 00:24:53.211 "nvme_admin": false, 00:24:53.211 "nvme_io": false, 00:24:53.211 "nvme_io_md": false, 00:24:53.211 "write_zeroes": true, 00:24:53.211 "zcopy": false, 00:24:53.211 "get_zone_info": false, 00:24:53.211 "zone_management": false, 00:24:53.211 "zone_append": false, 00:24:53.211 "compare": false, 00:24:53.211 "compare_and_write": false, 00:24:53.211 "abort": false, 00:24:53.211 "seek_hole": false, 00:24:53.211 "seek_data": false, 00:24:53.211 "copy": false, 00:24:53.211 "nvme_iov_md": false 00:24:53.211 }, 00:24:53.211 "memory_domains": [ 00:24:53.211 { 00:24:53.211 "dma_device_id": "system", 00:24:53.211 "dma_device_type": 1 00:24:53.211 }, 00:24:53.211 { 00:24:53.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.211 "dma_device_type": 2 00:24:53.211 }, 00:24:53.211 { 00:24:53.211 "dma_device_id": "system", 00:24:53.211 "dma_device_type": 1 00:24:53.211 }, 00:24:53.211 { 00:24:53.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.211 "dma_device_type": 2 00:24:53.211 }, 00:24:53.211 { 00:24:53.211 "dma_device_id": "system", 00:24:53.211 "dma_device_type": 1 00:24:53.211 }, 00:24:53.211 { 00:24:53.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.211 "dma_device_type": 2 00:24:53.211 }, 00:24:53.211 { 00:24:53.211 "dma_device_id": "system", 00:24:53.211 "dma_device_type": 1 00:24:53.211 }, 00:24:53.211 { 00:24:53.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.212 "dma_device_type": 2 00:24:53.212 } 00:24:53.212 ], 00:24:53.212 "driver_specific": { 00:24:53.212 "raid": { 00:24:53.212 "uuid": "64bf8882-0041-4605-ae13-7cbb20ef718e", 00:24:53.212 "strip_size_kb": 64, 00:24:53.212 "state": "online", 00:24:53.212 "raid_level": "concat", 00:24:53.212 "superblock": true, 00:24:53.212 "num_base_bdevs": 4, 00:24:53.212 "num_base_bdevs_discovered": 4, 00:24:53.212 "num_base_bdevs_operational": 4, 00:24:53.212 "base_bdevs_list": [ 00:24:53.212 { 00:24:53.212 "name": "NewBaseBdev", 00:24:53.212 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:53.212 "is_configured": true, 00:24:53.212 "data_offset": 2048, 00:24:53.212 "data_size": 63488 00:24:53.212 }, 00:24:53.212 { 00:24:53.212 "name": "BaseBdev2", 00:24:53.212 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:53.212 "is_configured": true, 00:24:53.212 "data_offset": 2048, 00:24:53.212 "data_size": 63488 00:24:53.212 }, 00:24:53.212 { 00:24:53.212 "name": "BaseBdev3", 00:24:53.212 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:53.212 "is_configured": true, 00:24:53.212 "data_offset": 2048, 00:24:53.212 "data_size": 63488 00:24:53.212 }, 00:24:53.212 { 00:24:53.212 "name": "BaseBdev4", 00:24:53.212 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:53.212 "is_configured": true, 00:24:53.212 "data_offset": 2048, 00:24:53.212 "data_size": 63488 00:24:53.212 } 00:24:53.212 ] 00:24:53.212 } 00:24:53.212 } 00:24:53.212 }' 00:24:53.212 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:53.212 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:53.212 BaseBdev2 00:24:53.212 BaseBdev3 00:24:53.212 BaseBdev4' 00:24:53.212 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:24:53.212 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:53.212 11:36:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:53.470 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:53.470 "name": "NewBaseBdev", 00:24:53.470 "aliases": [ 00:24:53.470 "f878a83b-693c-4859-baf3-db07c561e9b1" 00:24:53.470 ], 00:24:53.470 "product_name": "Malloc disk", 00:24:53.470 "block_size": 512, 00:24:53.470 "num_blocks": 65536, 00:24:53.470 "uuid": "f878a83b-693c-4859-baf3-db07c561e9b1", 00:24:53.470 "assigned_rate_limits": { 00:24:53.470 "rw_ios_per_sec": 0, 00:24:53.470 "rw_mbytes_per_sec": 0, 00:24:53.470 "r_mbytes_per_sec": 0, 00:24:53.470 "w_mbytes_per_sec": 0 00:24:53.470 }, 00:24:53.470 "claimed": true, 00:24:53.470 "claim_type": "exclusive_write", 00:24:53.470 "zoned": false, 00:24:53.470 "supported_io_types": { 00:24:53.470 "read": true, 00:24:53.470 "write": true, 00:24:53.470 "unmap": true, 00:24:53.470 "flush": true, 00:24:53.470 "reset": true, 00:24:53.470 "nvme_admin": false, 00:24:53.470 "nvme_io": false, 00:24:53.470 "nvme_io_md": false, 00:24:53.470 "write_zeroes": true, 00:24:53.470 "zcopy": true, 00:24:53.470 "get_zone_info": false, 00:24:53.470 "zone_management": false, 00:24:53.470 "zone_append": false, 00:24:53.470 "compare": false, 00:24:53.470 "compare_and_write": false, 00:24:53.470 "abort": true, 00:24:53.470 "seek_hole": false, 00:24:53.471 "seek_data": false, 00:24:53.471 "copy": true, 00:24:53.471 "nvme_iov_md": false 00:24:53.471 }, 00:24:53.471 "memory_domains": [ 00:24:53.471 { 00:24:53.471 "dma_device_id": "system", 00:24:53.471 "dma_device_type": 1 00:24:53.471 }, 00:24:53.471 { 00:24:53.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.471 "dma_device_type": 2 00:24:53.471 } 00:24:53.471 ], 00:24:53.471 "driver_specific": {} 00:24:53.471 }' 00:24:53.471 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:53.471 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:53.729 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:53.729 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:53.729 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:53.729 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:53.729 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:53.729 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:53.729 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:53.729 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:53.988 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:53.988 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:53.988 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:53.988 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:53.988 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:54.247 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:54.247 "name": "BaseBdev2", 00:24:54.247 "aliases": [ 00:24:54.247 "909d3441-fb36-45e6-a1b2-f4c11be8dec1" 00:24:54.247 ], 00:24:54.247 "product_name": "Malloc disk", 00:24:54.247 "block_size": 512, 00:24:54.247 "num_blocks": 65536, 00:24:54.247 "uuid": "909d3441-fb36-45e6-a1b2-f4c11be8dec1", 00:24:54.247 "assigned_rate_limits": { 00:24:54.247 "rw_ios_per_sec": 0, 00:24:54.247 "rw_mbytes_per_sec": 0, 00:24:54.247 "r_mbytes_per_sec": 0, 00:24:54.247 "w_mbytes_per_sec": 0 00:24:54.247 }, 00:24:54.247 "claimed": true, 00:24:54.247 "claim_type": "exclusive_write", 00:24:54.247 "zoned": false, 00:24:54.247 "supported_io_types": { 00:24:54.247 "read": true, 00:24:54.247 "write": true, 00:24:54.247 "unmap": true, 00:24:54.247 "flush": true, 00:24:54.247 "reset": true, 00:24:54.247 "nvme_admin": false, 00:24:54.247 "nvme_io": false, 00:24:54.247 "nvme_io_md": false, 00:24:54.247 "write_zeroes": true, 00:24:54.247 "zcopy": true, 00:24:54.247 "get_zone_info": false, 00:24:54.247 "zone_management": false, 00:24:54.247 "zone_append": false, 00:24:54.247 "compare": false, 00:24:54.247 "compare_and_write": false, 00:24:54.247 "abort": true, 00:24:54.247 "seek_hole": false, 00:24:54.247 "seek_data": false, 00:24:54.247 "copy": true, 00:24:54.247 "nvme_iov_md": false 00:24:54.247 }, 00:24:54.247 "memory_domains": [ 00:24:54.247 { 00:24:54.247 "dma_device_id": "system", 00:24:54.247 "dma_device_type": 1 00:24:54.247 }, 00:24:54.247 { 00:24:54.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.247 "dma_device_type": 2 00:24:54.247 } 00:24:54.247 ], 00:24:54.247 "driver_specific": {} 00:24:54.247 }' 00:24:54.247 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.247 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.247 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:54.247 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.247 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.247 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:54.247 11:36:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.505 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.505 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:54.505 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:54.505 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:54.505 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:54.505 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:54.505 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:54.505 11:36:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:54.764 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:54.764 "name": "BaseBdev3", 00:24:54.764 "aliases": [ 00:24:54.764 "c609cdcd-3fbd-4292-8bf6-c86ade00f818" 00:24:54.764 ], 00:24:54.764 "product_name": "Malloc disk", 00:24:54.764 "block_size": 512, 00:24:54.764 "num_blocks": 65536, 00:24:54.764 "uuid": "c609cdcd-3fbd-4292-8bf6-c86ade00f818", 00:24:54.764 "assigned_rate_limits": { 00:24:54.764 "rw_ios_per_sec": 0, 00:24:54.764 "rw_mbytes_per_sec": 0, 00:24:54.764 "r_mbytes_per_sec": 0, 00:24:54.764 "w_mbytes_per_sec": 0 00:24:54.764 }, 00:24:54.764 "claimed": true, 00:24:54.764 "claim_type": "exclusive_write", 00:24:54.764 "zoned": false, 00:24:54.764 "supported_io_types": { 00:24:54.764 "read": true, 00:24:54.764 "write": true, 00:24:54.764 "unmap": true, 00:24:54.764 "flush": true, 00:24:54.764 "reset": true, 00:24:54.764 "nvme_admin": false, 00:24:54.764 "nvme_io": false, 00:24:54.764 "nvme_io_md": false, 00:24:54.764 "write_zeroes": true, 00:24:54.764 "zcopy": true, 00:24:54.764 "get_zone_info": false, 00:24:54.764 "zone_management": false, 00:24:54.764 "zone_append": false, 00:24:54.764 "compare": false, 00:24:54.764 "compare_and_write": false, 00:24:54.764 "abort": true, 00:24:54.764 "seek_hole": false, 00:24:54.764 "seek_data": false, 00:24:54.764 "copy": true, 00:24:54.764 "nvme_iov_md": false 00:24:54.764 }, 00:24:54.764 "memory_domains": [ 00:24:54.764 { 00:24:54.764 "dma_device_id": "system", 00:24:54.764 "dma_device_type": 1 00:24:54.764 }, 00:24:54.764 { 00:24:54.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.764 "dma_device_type": 2 00:24:54.764 } 00:24:54.764 ], 00:24:54.764 "driver_specific": {} 00:24:54.764 }' 00:24:54.764 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.764 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:55.023 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:55.023 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:55.023 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:55.023 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:55.023 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:55.023 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:55.023 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:55.023 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:55.282 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:55.282 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:55.282 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:55.282 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:55.282 11:36:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:55.540 11:36:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:55.540 "name": "BaseBdev4", 00:24:55.540 "aliases": [ 00:24:55.540 "2bdeee6f-4323-4af8-ac62-595ac3c44ec9" 00:24:55.540 ], 00:24:55.540 "product_name": "Malloc disk", 00:24:55.540 "block_size": 512, 00:24:55.540 "num_blocks": 65536, 00:24:55.540 "uuid": "2bdeee6f-4323-4af8-ac62-595ac3c44ec9", 00:24:55.540 "assigned_rate_limits": { 00:24:55.540 "rw_ios_per_sec": 0, 00:24:55.541 "rw_mbytes_per_sec": 0, 00:24:55.541 "r_mbytes_per_sec": 0, 00:24:55.541 "w_mbytes_per_sec": 0 00:24:55.541 }, 00:24:55.541 "claimed": true, 00:24:55.541 "claim_type": "exclusive_write", 00:24:55.541 "zoned": false, 00:24:55.541 "supported_io_types": { 00:24:55.541 "read": true, 00:24:55.541 "write": true, 00:24:55.541 "unmap": true, 00:24:55.541 "flush": true, 00:24:55.541 "reset": true, 00:24:55.541 "nvme_admin": false, 00:24:55.541 "nvme_io": false, 00:24:55.541 "nvme_io_md": false, 00:24:55.541 "write_zeroes": true, 00:24:55.541 "zcopy": true, 00:24:55.541 "get_zone_info": false, 00:24:55.541 "zone_management": false, 00:24:55.541 "zone_append": false, 00:24:55.541 "compare": false, 00:24:55.541 "compare_and_write": false, 00:24:55.541 "abort": true, 00:24:55.541 "seek_hole": false, 00:24:55.541 "seek_data": false, 00:24:55.541 "copy": true, 00:24:55.541 "nvme_iov_md": false 00:24:55.541 }, 00:24:55.541 "memory_domains": [ 00:24:55.541 { 00:24:55.541 "dma_device_id": "system", 00:24:55.541 "dma_device_type": 1 00:24:55.541 }, 00:24:55.541 { 00:24:55.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.541 "dma_device_type": 2 00:24:55.541 } 00:24:55.541 ], 00:24:55.541 "driver_specific": {} 00:24:55.541 }' 00:24:55.541 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:55.541 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:55.541 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:55.541 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:55.541 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:55.800 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:55.800 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:55.800 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:55.800 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:55.800 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:55.800 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:55.800 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:55.800 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:56.059 [2024-07-13 11:36:30.735793] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:56.059 [2024-07-13 11:36:30.735825] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:56.059 [2024-07-13 11:36:30.735901] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:56.059 [2024-07-13 11:36:30.735972] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:56.059 [2024-07-13 11:36:30.735984] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 138928 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 138928 ']' 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 138928 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 138928 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 138928' 00:24:56.059 killing process with pid 138928 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 138928 00:24:56.059 11:36:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 138928 00:24:56.059 [2024-07-13 11:36:30.771060] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:56.317 [2024-07-13 11:36:31.023821] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:57.253 ************************************ 00:24:57.253 END TEST raid_state_function_test_sb 00:24:57.253 ************************************ 00:24:57.253 11:36:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:24:57.253 00:24:57.253 real 0m34.053s 00:24:57.253 user 1m4.349s 00:24:57.253 sys 0m3.477s 00:24:57.253 11:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:57.253 11:36:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:57.253 11:36:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:57.253 11:36:31 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:24:57.253 11:36:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:57.253 11:36:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.253 11:36:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:57.512 ************************************ 00:24:57.512 START TEST raid_superblock_test 00:24:57.512 ************************************ 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:24:57.512 11:36:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=140073 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 140073 /var/tmp/spdk-raid.sock 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 140073 ']' 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:57.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.512 11:36:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:57.512 [2024-07-13 11:36:32.083266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
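[editorial sketch, not captured output] The trace above launches bdev_svc with "-r /var/tmp/spdk-raid.sock -L bdev_raid" and then drives it over JSON-RPC. A minimal sketch of the RPC sequence this test exercises, assuming the socket path and rpc.py client shown in this trace (illustrative only, not a verbatim excerpt of bdev_raid.sh):

# assumes bdev_svc is already listening on the raid test socket
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# four 32 MB malloc bdevs (65536 x 512-byte blocks, matching the dumps in this log),
# each wrapped in a passthru bdev pt1..pt4 with a fixed UUID
for i in 1 2 3 4; do
    $rpc -s $sock bdev_malloc_create 32 512 -b "malloc$i"
    $rpc -s $sock bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# assemble a concat RAID with a 64 KiB strip size and an on-disk superblock (-s)
$rpc -s $sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# confirm the volume is online with all four base bdevs discovered
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The same socket is later used for bdev_raid_delete and bdev_passthru_delete in the teardown and reconfiguration steps that the trace below records.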
00:24:57.512 [2024-07-13 11:36:32.083486] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140073 ] 00:24:57.512 [2024-07-13 11:36:32.258720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.770 [2024-07-13 11:36:32.483993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.027 [2024-07-13 11:36:32.669646] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:58.592 malloc1 00:24:58.592 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:58.849 [2024-07-13 11:36:33.530622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:58.849 [2024-07-13 11:36:33.530724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.849 [2024-07-13 11:36:33.530762] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:24:58.849 [2024-07-13 11:36:33.530785] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.849 [2024-07-13 11:36:33.533046] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.849 [2024-07-13 11:36:33.533117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:58.849 pt1 00:24:58.849 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:58.849 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:58.849 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:24:58.849 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:24:58.849 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:58.849 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:24:58.849 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:58.849 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:58.849 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:59.106 malloc2 00:24:59.364 11:36:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:59.364 [2024-07-13 11:36:34.044864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:59.364 [2024-07-13 11:36:34.044963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.364 [2024-07-13 11:36:34.045001] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:24:59.364 [2024-07-13 11:36:34.045023] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.364 [2024-07-13 11:36:34.047220] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.364 [2024-07-13 11:36:34.047273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:59.364 pt2 00:24:59.364 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:59.364 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:59.364 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:24:59.364 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:24:59.364 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:59.365 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.365 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.365 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.365 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:59.622 malloc3 00:24:59.622 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:59.881 [2024-07-13 11:36:34.466245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:59.881 [2024-07-13 11:36:34.466343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.881 [2024-07-13 11:36:34.466380] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:24:59.881 [2024-07-13 11:36:34.466408] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.881 [2024-07-13 11:36:34.468644] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:59.881 [2024-07-13 11:36:34.468705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:59.881 pt3 00:24:59.881 
11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:59.881 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:59.881 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:24:59.881 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:24:59.881 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:59.881 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:59.881 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:59.881 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:59.881 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:00.139 malloc4 00:25:00.139 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:00.139 [2024-07-13 11:36:34.874978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:00.139 [2024-07-13 11:36:34.875071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.139 [2024-07-13 11:36:34.875108] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:25:00.139 [2024-07-13 11:36:34.875135] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.139 [2024-07-13 11:36:34.877310] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.139 [2024-07-13 11:36:34.877369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:00.139 pt4 00:25:00.397 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:00.397 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:00.397 11:36:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:00.397 [2024-07-13 11:36:35.059069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:00.397 [2024-07-13 11:36:35.060956] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:00.397 [2024-07-13 11:36:35.061041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:00.397 [2024-07-13 11:36:35.061107] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:00.397 [2024-07-13 11:36:35.061397] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:25:00.397 [2024-07-13 11:36:35.061426] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:00.398 [2024-07-13 11:36:35.061575] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:00.398 [2024-07-13 11:36:35.061946] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:25:00.398 [2024-07-13 11:36:35.061973] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:25:00.398 [2024-07-13 11:36:35.062152] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.398 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:00.657 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:00.657 "name": "raid_bdev1", 00:25:00.657 "uuid": "3be59a42-e11d-47db-82fa-47275e3f20b4", 00:25:00.657 "strip_size_kb": 64, 00:25:00.657 "state": "online", 00:25:00.657 "raid_level": "concat", 00:25:00.657 "superblock": true, 00:25:00.657 "num_base_bdevs": 4, 00:25:00.657 "num_base_bdevs_discovered": 4, 00:25:00.657 "num_base_bdevs_operational": 4, 00:25:00.657 "base_bdevs_list": [ 00:25:00.657 { 00:25:00.657 "name": "pt1", 00:25:00.657 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:00.657 "is_configured": true, 00:25:00.657 "data_offset": 2048, 00:25:00.657 "data_size": 63488 00:25:00.657 }, 00:25:00.657 { 00:25:00.657 "name": "pt2", 00:25:00.657 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:00.657 "is_configured": true, 00:25:00.657 "data_offset": 2048, 00:25:00.657 "data_size": 63488 00:25:00.657 }, 00:25:00.657 { 00:25:00.657 "name": "pt3", 00:25:00.657 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:00.657 "is_configured": true, 00:25:00.657 "data_offset": 2048, 00:25:00.657 "data_size": 63488 00:25:00.657 }, 00:25:00.657 { 00:25:00.657 "name": "pt4", 00:25:00.657 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:00.657 "is_configured": true, 00:25:00.657 "data_offset": 2048, 00:25:00.657 "data_size": 63488 00:25:00.657 } 00:25:00.657 ] 00:25:00.657 }' 00:25:00.657 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:00.657 11:36:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.224 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:01.224 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:01.224 11:36:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:01.224 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:01.224 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:01.224 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:01.224 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:01.224 11:36:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:01.483 [2024-07-13 11:36:36.163699] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:01.483 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:01.483 "name": "raid_bdev1", 00:25:01.483 "aliases": [ 00:25:01.483 "3be59a42-e11d-47db-82fa-47275e3f20b4" 00:25:01.483 ], 00:25:01.483 "product_name": "Raid Volume", 00:25:01.483 "block_size": 512, 00:25:01.483 "num_blocks": 253952, 00:25:01.483 "uuid": "3be59a42-e11d-47db-82fa-47275e3f20b4", 00:25:01.483 "assigned_rate_limits": { 00:25:01.483 "rw_ios_per_sec": 0, 00:25:01.483 "rw_mbytes_per_sec": 0, 00:25:01.483 "r_mbytes_per_sec": 0, 00:25:01.483 "w_mbytes_per_sec": 0 00:25:01.483 }, 00:25:01.483 "claimed": false, 00:25:01.483 "zoned": false, 00:25:01.483 "supported_io_types": { 00:25:01.483 "read": true, 00:25:01.483 "write": true, 00:25:01.483 "unmap": true, 00:25:01.483 "flush": true, 00:25:01.483 "reset": true, 00:25:01.483 "nvme_admin": false, 00:25:01.483 "nvme_io": false, 00:25:01.483 "nvme_io_md": false, 00:25:01.483 "write_zeroes": true, 00:25:01.483 "zcopy": false, 00:25:01.483 "get_zone_info": false, 00:25:01.483 "zone_management": false, 00:25:01.483 "zone_append": false, 00:25:01.483 "compare": false, 00:25:01.483 "compare_and_write": false, 00:25:01.483 "abort": false, 00:25:01.483 "seek_hole": false, 00:25:01.483 "seek_data": false, 00:25:01.483 "copy": false, 00:25:01.483 "nvme_iov_md": false 00:25:01.483 }, 00:25:01.483 "memory_domains": [ 00:25:01.483 { 00:25:01.483 "dma_device_id": "system", 00:25:01.483 "dma_device_type": 1 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.483 "dma_device_type": 2 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "dma_device_id": "system", 00:25:01.483 "dma_device_type": 1 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.483 "dma_device_type": 2 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "dma_device_id": "system", 00:25:01.483 "dma_device_type": 1 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.483 "dma_device_type": 2 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "dma_device_id": "system", 00:25:01.483 "dma_device_type": 1 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.483 "dma_device_type": 2 00:25:01.483 } 00:25:01.483 ], 00:25:01.483 "driver_specific": { 00:25:01.483 "raid": { 00:25:01.483 "uuid": "3be59a42-e11d-47db-82fa-47275e3f20b4", 00:25:01.483 "strip_size_kb": 64, 00:25:01.483 "state": "online", 00:25:01.483 "raid_level": "concat", 00:25:01.483 "superblock": true, 00:25:01.483 "num_base_bdevs": 4, 00:25:01.483 "num_base_bdevs_discovered": 4, 00:25:01.483 "num_base_bdevs_operational": 4, 00:25:01.483 "base_bdevs_list": [ 00:25:01.483 { 00:25:01.483 "name": "pt1", 00:25:01.483 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:01.483 "is_configured": true, 00:25:01.483 "data_offset": 2048, 00:25:01.483 "data_size": 63488 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "name": "pt2", 00:25:01.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:01.483 "is_configured": true, 00:25:01.483 "data_offset": 2048, 00:25:01.483 "data_size": 63488 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "name": "pt3", 00:25:01.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:01.483 "is_configured": true, 00:25:01.483 "data_offset": 2048, 00:25:01.483 "data_size": 63488 00:25:01.483 }, 00:25:01.483 { 00:25:01.483 "name": "pt4", 00:25:01.483 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:01.483 "is_configured": true, 00:25:01.483 "data_offset": 2048, 00:25:01.483 "data_size": 63488 00:25:01.483 } 00:25:01.483 ] 00:25:01.483 } 00:25:01.483 } 00:25:01.483 }' 00:25:01.483 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:01.483 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:01.483 pt2 00:25:01.483 pt3 00:25:01.483 pt4' 00:25:01.483 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:01.483 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:01.483 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:01.741 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:01.741 "name": "pt1", 00:25:01.741 "aliases": [ 00:25:01.741 "00000000-0000-0000-0000-000000000001" 00:25:01.741 ], 00:25:01.741 "product_name": "passthru", 00:25:01.741 "block_size": 512, 00:25:01.741 "num_blocks": 65536, 00:25:01.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:01.741 "assigned_rate_limits": { 00:25:01.741 "rw_ios_per_sec": 0, 00:25:01.741 "rw_mbytes_per_sec": 0, 00:25:01.741 "r_mbytes_per_sec": 0, 00:25:01.741 "w_mbytes_per_sec": 0 00:25:01.741 }, 00:25:01.741 "claimed": true, 00:25:01.741 "claim_type": "exclusive_write", 00:25:01.741 "zoned": false, 00:25:01.741 "supported_io_types": { 00:25:01.741 "read": true, 00:25:01.741 "write": true, 00:25:01.741 "unmap": true, 00:25:01.741 "flush": true, 00:25:01.741 "reset": true, 00:25:01.741 "nvme_admin": false, 00:25:01.741 "nvme_io": false, 00:25:01.741 "nvme_io_md": false, 00:25:01.741 "write_zeroes": true, 00:25:01.741 "zcopy": true, 00:25:01.741 "get_zone_info": false, 00:25:01.741 "zone_management": false, 00:25:01.741 "zone_append": false, 00:25:01.741 "compare": false, 00:25:01.741 "compare_and_write": false, 00:25:01.741 "abort": true, 00:25:01.741 "seek_hole": false, 00:25:01.741 "seek_data": false, 00:25:01.741 "copy": true, 00:25:01.741 "nvme_iov_md": false 00:25:01.741 }, 00:25:01.741 "memory_domains": [ 00:25:01.741 { 00:25:01.741 "dma_device_id": "system", 00:25:01.741 "dma_device_type": 1 00:25:01.741 }, 00:25:01.741 { 00:25:01.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:01.742 "dma_device_type": 2 00:25:01.742 } 00:25:01.742 ], 00:25:01.742 "driver_specific": { 00:25:01.742 "passthru": { 00:25:01.742 "name": "pt1", 00:25:01.742 "base_bdev_name": "malloc1" 00:25:01.742 } 00:25:01.742 } 00:25:01.742 }' 00:25:01.742 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.998 11:36:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:01.998 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:01.998 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.998 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:01.998 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:01.998 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:01.998 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:02.256 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:02.256 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.256 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.256 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:02.256 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:02.256 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:02.256 11:36:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:02.513 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:02.513 "name": "pt2", 00:25:02.513 "aliases": [ 00:25:02.513 "00000000-0000-0000-0000-000000000002" 00:25:02.513 ], 00:25:02.513 "product_name": "passthru", 00:25:02.513 "block_size": 512, 00:25:02.513 "num_blocks": 65536, 00:25:02.513 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:02.513 "assigned_rate_limits": { 00:25:02.513 "rw_ios_per_sec": 0, 00:25:02.513 "rw_mbytes_per_sec": 0, 00:25:02.513 "r_mbytes_per_sec": 0, 00:25:02.513 "w_mbytes_per_sec": 0 00:25:02.513 }, 00:25:02.513 "claimed": true, 00:25:02.513 "claim_type": "exclusive_write", 00:25:02.513 "zoned": false, 00:25:02.513 "supported_io_types": { 00:25:02.513 "read": true, 00:25:02.513 "write": true, 00:25:02.513 "unmap": true, 00:25:02.513 "flush": true, 00:25:02.513 "reset": true, 00:25:02.513 "nvme_admin": false, 00:25:02.513 "nvme_io": false, 00:25:02.513 "nvme_io_md": false, 00:25:02.513 "write_zeroes": true, 00:25:02.513 "zcopy": true, 00:25:02.513 "get_zone_info": false, 00:25:02.513 "zone_management": false, 00:25:02.513 "zone_append": false, 00:25:02.513 "compare": false, 00:25:02.513 "compare_and_write": false, 00:25:02.513 "abort": true, 00:25:02.513 "seek_hole": false, 00:25:02.513 "seek_data": false, 00:25:02.513 "copy": true, 00:25:02.513 "nvme_iov_md": false 00:25:02.513 }, 00:25:02.513 "memory_domains": [ 00:25:02.513 { 00:25:02.513 "dma_device_id": "system", 00:25:02.513 "dma_device_type": 1 00:25:02.513 }, 00:25:02.513 { 00:25:02.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.513 "dma_device_type": 2 00:25:02.513 } 00:25:02.513 ], 00:25:02.513 "driver_specific": { 00:25:02.513 "passthru": { 00:25:02.513 "name": "pt2", 00:25:02.513 "base_bdev_name": "malloc2" 00:25:02.513 } 00:25:02.513 } 00:25:02.513 }' 00:25:02.513 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:02.513 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:02.513 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:25:02.513 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:02.513 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:02.513 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:02.513 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:02.770 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:02.770 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:02.770 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.770 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:02.770 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:02.770 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:02.770 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:02.770 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:03.027 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:03.027 "name": "pt3", 00:25:03.027 "aliases": [ 00:25:03.027 "00000000-0000-0000-0000-000000000003" 00:25:03.027 ], 00:25:03.027 "product_name": "passthru", 00:25:03.027 "block_size": 512, 00:25:03.027 "num_blocks": 65536, 00:25:03.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:03.027 "assigned_rate_limits": { 00:25:03.027 "rw_ios_per_sec": 0, 00:25:03.027 "rw_mbytes_per_sec": 0, 00:25:03.027 "r_mbytes_per_sec": 0, 00:25:03.027 "w_mbytes_per_sec": 0 00:25:03.027 }, 00:25:03.027 "claimed": true, 00:25:03.027 "claim_type": "exclusive_write", 00:25:03.027 "zoned": false, 00:25:03.027 "supported_io_types": { 00:25:03.027 "read": true, 00:25:03.027 "write": true, 00:25:03.027 "unmap": true, 00:25:03.027 "flush": true, 00:25:03.027 "reset": true, 00:25:03.027 "nvme_admin": false, 00:25:03.027 "nvme_io": false, 00:25:03.027 "nvme_io_md": false, 00:25:03.027 "write_zeroes": true, 00:25:03.027 "zcopy": true, 00:25:03.027 "get_zone_info": false, 00:25:03.027 "zone_management": false, 00:25:03.027 "zone_append": false, 00:25:03.027 "compare": false, 00:25:03.027 "compare_and_write": false, 00:25:03.027 "abort": true, 00:25:03.027 "seek_hole": false, 00:25:03.027 "seek_data": false, 00:25:03.027 "copy": true, 00:25:03.027 "nvme_iov_md": false 00:25:03.027 }, 00:25:03.027 "memory_domains": [ 00:25:03.027 { 00:25:03.027 "dma_device_id": "system", 00:25:03.027 "dma_device_type": 1 00:25:03.027 }, 00:25:03.027 { 00:25:03.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.027 "dma_device_type": 2 00:25:03.027 } 00:25:03.027 ], 00:25:03.027 "driver_specific": { 00:25:03.027 "passthru": { 00:25:03.027 "name": "pt3", 00:25:03.027 "base_bdev_name": "malloc3" 00:25:03.027 } 00:25:03.027 } 00:25:03.027 }' 00:25:03.027 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.027 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.027 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:03.027 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.027 11:36:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.027 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:03.027 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.285 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.285 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:03.285 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:03.285 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:03.285 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:03.285 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:03.285 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:03.285 11:36:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:03.543 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:03.543 "name": "pt4", 00:25:03.543 "aliases": [ 00:25:03.543 "00000000-0000-0000-0000-000000000004" 00:25:03.543 ], 00:25:03.543 "product_name": "passthru", 00:25:03.543 "block_size": 512, 00:25:03.543 "num_blocks": 65536, 00:25:03.543 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:03.543 "assigned_rate_limits": { 00:25:03.543 "rw_ios_per_sec": 0, 00:25:03.543 "rw_mbytes_per_sec": 0, 00:25:03.543 "r_mbytes_per_sec": 0, 00:25:03.543 "w_mbytes_per_sec": 0 00:25:03.543 }, 00:25:03.543 "claimed": true, 00:25:03.543 "claim_type": "exclusive_write", 00:25:03.543 "zoned": false, 00:25:03.543 "supported_io_types": { 00:25:03.543 "read": true, 00:25:03.543 "write": true, 00:25:03.543 "unmap": true, 00:25:03.543 "flush": true, 00:25:03.543 "reset": true, 00:25:03.543 "nvme_admin": false, 00:25:03.543 "nvme_io": false, 00:25:03.543 "nvme_io_md": false, 00:25:03.543 "write_zeroes": true, 00:25:03.543 "zcopy": true, 00:25:03.543 "get_zone_info": false, 00:25:03.543 "zone_management": false, 00:25:03.543 "zone_append": false, 00:25:03.543 "compare": false, 00:25:03.543 "compare_and_write": false, 00:25:03.543 "abort": true, 00:25:03.543 "seek_hole": false, 00:25:03.543 "seek_data": false, 00:25:03.543 "copy": true, 00:25:03.543 "nvme_iov_md": false 00:25:03.543 }, 00:25:03.543 "memory_domains": [ 00:25:03.543 { 00:25:03.543 "dma_device_id": "system", 00:25:03.543 "dma_device_type": 1 00:25:03.543 }, 00:25:03.543 { 00:25:03.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.543 "dma_device_type": 2 00:25:03.543 } 00:25:03.543 ], 00:25:03.543 "driver_specific": { 00:25:03.543 "passthru": { 00:25:03.543 "name": "pt4", 00:25:03.543 "base_bdev_name": "malloc4" 00:25:03.543 } 00:25:03.543 } 00:25:03.543 }' 00:25:03.543 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.543 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.543 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:03.543 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.801 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.801 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:25:03.801 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.801 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.801 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:03.801 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:03.801 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:04.057 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:04.057 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:04.057 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:04.317 [2024-07-13 11:36:38.832200] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:04.317 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3be59a42-e11d-47db-82fa-47275e3f20b4 00:25:04.317 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3be59a42-e11d-47db-82fa-47275e3f20b4 ']' 00:25:04.317 11:36:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:04.317 [2024-07-13 11:36:39.019992] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:04.317 [2024-07-13 11:36:39.020018] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:04.317 [2024-07-13 11:36:39.020088] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:04.317 [2024-07-13 11:36:39.020157] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:04.317 [2024-07-13 11:36:39.020169] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:25:04.317 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.317 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:04.613 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:04.613 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:04.613 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:04.613 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:04.896 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:04.896 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:05.154 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:05.154 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:05.412 11:36:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:05.412 11:36:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:05.412 11:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:05.412 11:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:05.669 11:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:05.669 11:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:05.669 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:05.670 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:05.928 [2024-07-13 11:36:40.562535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:05.928 [2024-07-13 11:36:40.564640] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:05.928 [2024-07-13 11:36:40.564729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:05.928 [2024-07-13 11:36:40.564792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:05.928 [2024-07-13 11:36:40.564853] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:05.928 [2024-07-13 11:36:40.565021] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:05.928 [2024-07-13 11:36:40.565066] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc3 00:25:05.928 [2024-07-13 11:36:40.565120] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:05.928 [2024-07-13 11:36:40.565165] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:05.928 [2024-07-13 11:36:40.565177] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:25:05.928 request: 00:25:05.928 { 00:25:05.928 "name": "raid_bdev1", 00:25:05.928 "raid_level": "concat", 00:25:05.928 "base_bdevs": [ 00:25:05.928 "malloc1", 00:25:05.928 "malloc2", 00:25:05.928 "malloc3", 00:25:05.928 "malloc4" 00:25:05.928 ], 00:25:05.928 "strip_size_kb": 64, 00:25:05.928 "superblock": false, 00:25:05.928 "method": "bdev_raid_create", 00:25:05.928 "req_id": 1 00:25:05.928 } 00:25:05.928 Got JSON-RPC error response 00:25:05.928 response: 00:25:05.928 { 00:25:05.928 "code": -17, 00:25:05.928 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:05.928 } 00:25:05.928 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:25:05.928 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:05.928 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:05.928 11:36:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:05.928 11:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:05.928 11:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.186 11:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:06.186 11:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:06.186 11:36:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:06.445 [2024-07-13 11:36:40.994548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:06.445 [2024-07-13 11:36:40.994628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:06.445 [2024-07-13 11:36:40.994660] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:25:06.445 [2024-07-13 11:36:40.994701] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:06.445 [2024-07-13 11:36:40.996778] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:06.445 [2024-07-13 11:36:40.996840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:06.445 [2024-07-13 11:36:40.996931] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:06.445 [2024-07-13 11:36:40.996992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:06.445 pt1 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:06.445 11:36:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.445 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:06.703 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.703 "name": "raid_bdev1", 00:25:06.703 "uuid": "3be59a42-e11d-47db-82fa-47275e3f20b4", 00:25:06.703 "strip_size_kb": 64, 00:25:06.703 "state": "configuring", 00:25:06.703 "raid_level": "concat", 00:25:06.703 "superblock": true, 00:25:06.703 "num_base_bdevs": 4, 00:25:06.703 "num_base_bdevs_discovered": 1, 00:25:06.703 "num_base_bdevs_operational": 4, 00:25:06.703 "base_bdevs_list": [ 00:25:06.703 { 00:25:06.703 "name": "pt1", 00:25:06.703 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:06.703 "is_configured": true, 00:25:06.703 "data_offset": 2048, 00:25:06.703 "data_size": 63488 00:25:06.703 }, 00:25:06.703 { 00:25:06.703 "name": null, 00:25:06.703 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:06.703 "is_configured": false, 00:25:06.703 "data_offset": 2048, 00:25:06.703 "data_size": 63488 00:25:06.703 }, 00:25:06.703 { 00:25:06.703 "name": null, 00:25:06.703 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:06.703 "is_configured": false, 00:25:06.703 "data_offset": 2048, 00:25:06.703 "data_size": 63488 00:25:06.703 }, 00:25:06.703 { 00:25:06.703 "name": null, 00:25:06.703 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:06.703 "is_configured": false, 00:25:06.703 "data_offset": 2048, 00:25:06.703 "data_size": 63488 00:25:06.703 } 00:25:06.703 ] 00:25:06.703 }' 00:25:06.703 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.703 11:36:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.269 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:25:07.269 11:36:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:07.527 [2024-07-13 11:36:42.086751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:07.527 [2024-07-13 11:36:42.086816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:07.527 [2024-07-13 11:36:42.086882] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:07.527 [2024-07-13 11:36:42.086920] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:07.527 [2024-07-13 11:36:42.087378] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:07.527 [2024-07-13 11:36:42.087419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:07.527 [2024-07-13 11:36:42.087515] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:07.527 [2024-07-13 11:36:42.087548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:07.527 pt2 00:25:07.527 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:07.786 [2024-07-13 11:36:42.362836] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.786 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.044 11:36:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:08.044 "name": "raid_bdev1", 00:25:08.044 "uuid": "3be59a42-e11d-47db-82fa-47275e3f20b4", 00:25:08.044 "strip_size_kb": 64, 00:25:08.044 "state": "configuring", 00:25:08.044 "raid_level": "concat", 00:25:08.044 "superblock": true, 00:25:08.044 "num_base_bdevs": 4, 00:25:08.044 "num_base_bdevs_discovered": 1, 00:25:08.044 "num_base_bdevs_operational": 4, 00:25:08.044 "base_bdevs_list": [ 00:25:08.044 { 00:25:08.044 "name": "pt1", 00:25:08.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:08.044 "is_configured": true, 00:25:08.044 "data_offset": 2048, 00:25:08.044 "data_size": 63488 00:25:08.044 }, 00:25:08.044 { 00:25:08.044 "name": null, 00:25:08.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:08.044 "is_configured": false, 00:25:08.044 "data_offset": 2048, 00:25:08.044 "data_size": 63488 00:25:08.044 }, 00:25:08.044 { 00:25:08.044 "name": null, 00:25:08.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:08.044 "is_configured": false, 00:25:08.044 "data_offset": 2048, 00:25:08.044 "data_size": 63488 00:25:08.044 }, 00:25:08.044 { 00:25:08.044 "name": null, 00:25:08.044 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:08.044 "is_configured": false, 00:25:08.045 "data_offset": 2048, 00:25:08.045 "data_size": 63488 00:25:08.045 } 00:25:08.045 ] 00:25:08.045 }' 00:25:08.045 11:36:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:08.045 11:36:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:08.610 11:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:08.610 11:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:08.610 11:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:08.868 [2024-07-13 11:36:43.543119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:08.868 [2024-07-13 11:36:43.543235] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.868 [2024-07-13 11:36:43.543288] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:08.868 [2024-07-13 11:36:43.543346] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.868 [2024-07-13 11:36:43.543865] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.868 [2024-07-13 11:36:43.543958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:08.868 [2024-07-13 11:36:43.544049] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:08.868 [2024-07-13 11:36:43.544091] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:08.868 pt2 00:25:08.868 11:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:08.868 11:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:08.868 11:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:09.126 [2024-07-13 11:36:43.823155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:09.126 [2024-07-13 11:36:43.823234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.126 [2024-07-13 11:36:43.823262] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:09.126 [2024-07-13 11:36:43.823302] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.126 [2024-07-13 11:36:43.823750] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.126 [2024-07-13 11:36:43.823807] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:09.126 [2024-07-13 11:36:43.823898] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:09.126 [2024-07-13 11:36:43.823924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:09.126 pt3 00:25:09.126 11:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:09.126 11:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:09.126 11:36:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:09.384 [2024-07-13 11:36:44.015164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:25:09.384 [2024-07-13 11:36:44.015253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.384 [2024-07-13 11:36:44.015281] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:09.384 [2024-07-13 11:36:44.015326] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.384 [2024-07-13 11:36:44.015756] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.384 [2024-07-13 11:36:44.015804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:09.384 [2024-07-13 11:36:44.015887] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:09.384 [2024-07-13 11:36:44.015919] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:09.384 [2024-07-13 11:36:44.016053] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:25:09.384 [2024-07-13 11:36:44.016066] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:09.384 [2024-07-13 11:36:44.016165] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:25:09.384 [2024-07-13 11:36:44.016541] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:25:09.384 [2024-07-13 11:36:44.016566] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:25:09.384 [2024-07-13 11:36:44.016758] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.384 pt4 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.384 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.640 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.640 "name": "raid_bdev1", 00:25:09.641 "uuid": "3be59a42-e11d-47db-82fa-47275e3f20b4", 00:25:09.641 "strip_size_kb": 64, 00:25:09.641 "state": "online", 00:25:09.641 
"raid_level": "concat", 00:25:09.641 "superblock": true, 00:25:09.641 "num_base_bdevs": 4, 00:25:09.641 "num_base_bdevs_discovered": 4, 00:25:09.641 "num_base_bdevs_operational": 4, 00:25:09.641 "base_bdevs_list": [ 00:25:09.641 { 00:25:09.641 "name": "pt1", 00:25:09.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:09.641 "is_configured": true, 00:25:09.641 "data_offset": 2048, 00:25:09.641 "data_size": 63488 00:25:09.641 }, 00:25:09.641 { 00:25:09.641 "name": "pt2", 00:25:09.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:09.641 "is_configured": true, 00:25:09.641 "data_offset": 2048, 00:25:09.641 "data_size": 63488 00:25:09.641 }, 00:25:09.641 { 00:25:09.641 "name": "pt3", 00:25:09.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:09.641 "is_configured": true, 00:25:09.641 "data_offset": 2048, 00:25:09.641 "data_size": 63488 00:25:09.641 }, 00:25:09.641 { 00:25:09.641 "name": "pt4", 00:25:09.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:09.641 "is_configured": true, 00:25:09.641 "data_offset": 2048, 00:25:09.641 "data_size": 63488 00:25:09.641 } 00:25:09.641 ] 00:25:09.641 }' 00:25:09.641 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.641 11:36:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:10.574 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:10.574 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:10.574 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:10.574 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:10.574 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:10.574 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:10.574 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:10.574 11:36:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:10.574 [2024-07-13 11:36:45.211756] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:10.574 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:10.574 "name": "raid_bdev1", 00:25:10.574 "aliases": [ 00:25:10.574 "3be59a42-e11d-47db-82fa-47275e3f20b4" 00:25:10.574 ], 00:25:10.574 "product_name": "Raid Volume", 00:25:10.574 "block_size": 512, 00:25:10.574 "num_blocks": 253952, 00:25:10.574 "uuid": "3be59a42-e11d-47db-82fa-47275e3f20b4", 00:25:10.574 "assigned_rate_limits": { 00:25:10.574 "rw_ios_per_sec": 0, 00:25:10.574 "rw_mbytes_per_sec": 0, 00:25:10.574 "r_mbytes_per_sec": 0, 00:25:10.574 "w_mbytes_per_sec": 0 00:25:10.574 }, 00:25:10.574 "claimed": false, 00:25:10.574 "zoned": false, 00:25:10.574 "supported_io_types": { 00:25:10.574 "read": true, 00:25:10.574 "write": true, 00:25:10.574 "unmap": true, 00:25:10.574 "flush": true, 00:25:10.574 "reset": true, 00:25:10.574 "nvme_admin": false, 00:25:10.574 "nvme_io": false, 00:25:10.574 "nvme_io_md": false, 00:25:10.574 "write_zeroes": true, 00:25:10.574 "zcopy": false, 00:25:10.574 "get_zone_info": false, 00:25:10.574 "zone_management": false, 00:25:10.574 "zone_append": false, 00:25:10.574 "compare": false, 00:25:10.574 "compare_and_write": false, 
00:25:10.574 "abort": false, 00:25:10.574 "seek_hole": false, 00:25:10.574 "seek_data": false, 00:25:10.574 "copy": false, 00:25:10.574 "nvme_iov_md": false 00:25:10.574 }, 00:25:10.574 "memory_domains": [ 00:25:10.574 { 00:25:10.574 "dma_device_id": "system", 00:25:10.574 "dma_device_type": 1 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.574 "dma_device_type": 2 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "dma_device_id": "system", 00:25:10.574 "dma_device_type": 1 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.574 "dma_device_type": 2 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "dma_device_id": "system", 00:25:10.574 "dma_device_type": 1 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.574 "dma_device_type": 2 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "dma_device_id": "system", 00:25:10.574 "dma_device_type": 1 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.574 "dma_device_type": 2 00:25:10.574 } 00:25:10.574 ], 00:25:10.574 "driver_specific": { 00:25:10.574 "raid": { 00:25:10.574 "uuid": "3be59a42-e11d-47db-82fa-47275e3f20b4", 00:25:10.574 "strip_size_kb": 64, 00:25:10.574 "state": "online", 00:25:10.574 "raid_level": "concat", 00:25:10.574 "superblock": true, 00:25:10.574 "num_base_bdevs": 4, 00:25:10.574 "num_base_bdevs_discovered": 4, 00:25:10.574 "num_base_bdevs_operational": 4, 00:25:10.574 "base_bdevs_list": [ 00:25:10.574 { 00:25:10.574 "name": "pt1", 00:25:10.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:10.574 "is_configured": true, 00:25:10.574 "data_offset": 2048, 00:25:10.574 "data_size": 63488 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "name": "pt2", 00:25:10.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:10.574 "is_configured": true, 00:25:10.574 "data_offset": 2048, 00:25:10.574 "data_size": 63488 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "name": "pt3", 00:25:10.574 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:10.574 "is_configured": true, 00:25:10.574 "data_offset": 2048, 00:25:10.574 "data_size": 63488 00:25:10.574 }, 00:25:10.574 { 00:25:10.574 "name": "pt4", 00:25:10.574 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:10.574 "is_configured": true, 00:25:10.574 "data_offset": 2048, 00:25:10.574 "data_size": 63488 00:25:10.574 } 00:25:10.574 ] 00:25:10.574 } 00:25:10.574 } 00:25:10.574 }' 00:25:10.574 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:10.574 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:10.574 pt2 00:25:10.574 pt3 00:25:10.574 pt4' 00:25:10.574 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:10.574 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:10.574 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:10.833 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:10.833 "name": "pt1", 00:25:10.833 "aliases": [ 00:25:10.833 "00000000-0000-0000-0000-000000000001" 00:25:10.833 ], 00:25:10.833 "product_name": "passthru", 00:25:10.833 "block_size": 512, 00:25:10.833 "num_blocks": 65536, 00:25:10.833 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:10.833 "assigned_rate_limits": { 00:25:10.833 "rw_ios_per_sec": 0, 00:25:10.833 "rw_mbytes_per_sec": 0, 00:25:10.833 "r_mbytes_per_sec": 0, 00:25:10.833 "w_mbytes_per_sec": 0 00:25:10.833 }, 00:25:10.833 "claimed": true, 00:25:10.833 "claim_type": "exclusive_write", 00:25:10.833 "zoned": false, 00:25:10.833 "supported_io_types": { 00:25:10.833 "read": true, 00:25:10.833 "write": true, 00:25:10.833 "unmap": true, 00:25:10.833 "flush": true, 00:25:10.833 "reset": true, 00:25:10.833 "nvme_admin": false, 00:25:10.833 "nvme_io": false, 00:25:10.833 "nvme_io_md": false, 00:25:10.833 "write_zeroes": true, 00:25:10.833 "zcopy": true, 00:25:10.833 "get_zone_info": false, 00:25:10.833 "zone_management": false, 00:25:10.833 "zone_append": false, 00:25:10.833 "compare": false, 00:25:10.833 "compare_and_write": false, 00:25:10.833 "abort": true, 00:25:10.833 "seek_hole": false, 00:25:10.833 "seek_data": false, 00:25:10.833 "copy": true, 00:25:10.833 "nvme_iov_md": false 00:25:10.833 }, 00:25:10.833 "memory_domains": [ 00:25:10.833 { 00:25:10.833 "dma_device_id": "system", 00:25:10.833 "dma_device_type": 1 00:25:10.833 }, 00:25:10.833 { 00:25:10.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.833 "dma_device_type": 2 00:25:10.833 } 00:25:10.833 ], 00:25:10.833 "driver_specific": { 00:25:10.833 "passthru": { 00:25:10.833 "name": "pt1", 00:25:10.833 "base_bdev_name": "malloc1" 00:25:10.833 } 00:25:10.833 } 00:25:10.833 }' 00:25:10.833 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:10.833 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:11.091 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:11.091 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:11.091 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:11.091 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:11.091 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:11.091 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:11.092 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:11.092 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:11.350 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:11.350 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:11.350 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:11.350 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:11.350 11:36:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:11.608 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:11.608 "name": "pt2", 00:25:11.608 "aliases": [ 00:25:11.608 "00000000-0000-0000-0000-000000000002" 00:25:11.608 ], 00:25:11.608 "product_name": "passthru", 00:25:11.608 "block_size": 512, 00:25:11.608 "num_blocks": 65536, 00:25:11.608 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:11.608 "assigned_rate_limits": { 00:25:11.608 "rw_ios_per_sec": 0, 00:25:11.608 "rw_mbytes_per_sec": 0, 
00:25:11.608 "r_mbytes_per_sec": 0, 00:25:11.608 "w_mbytes_per_sec": 0 00:25:11.608 }, 00:25:11.608 "claimed": true, 00:25:11.608 "claim_type": "exclusive_write", 00:25:11.608 "zoned": false, 00:25:11.608 "supported_io_types": { 00:25:11.608 "read": true, 00:25:11.608 "write": true, 00:25:11.608 "unmap": true, 00:25:11.608 "flush": true, 00:25:11.608 "reset": true, 00:25:11.608 "nvme_admin": false, 00:25:11.608 "nvme_io": false, 00:25:11.608 "nvme_io_md": false, 00:25:11.608 "write_zeroes": true, 00:25:11.608 "zcopy": true, 00:25:11.608 "get_zone_info": false, 00:25:11.608 "zone_management": false, 00:25:11.608 "zone_append": false, 00:25:11.608 "compare": false, 00:25:11.608 "compare_and_write": false, 00:25:11.608 "abort": true, 00:25:11.608 "seek_hole": false, 00:25:11.608 "seek_data": false, 00:25:11.608 "copy": true, 00:25:11.608 "nvme_iov_md": false 00:25:11.608 }, 00:25:11.608 "memory_domains": [ 00:25:11.608 { 00:25:11.608 "dma_device_id": "system", 00:25:11.608 "dma_device_type": 1 00:25:11.608 }, 00:25:11.608 { 00:25:11.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.608 "dma_device_type": 2 00:25:11.608 } 00:25:11.608 ], 00:25:11.608 "driver_specific": { 00:25:11.608 "passthru": { 00:25:11.608 "name": "pt2", 00:25:11.608 "base_bdev_name": "malloc2" 00:25:11.608 } 00:25:11.608 } 00:25:11.608 }' 00:25:11.608 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:11.608 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:11.608 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:11.608 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:11.608 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:11.608 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:11.866 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:11.866 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:11.866 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:11.866 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:11.866 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:11.866 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:11.866 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:11.866 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:11.866 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.124 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.124 "name": "pt3", 00:25:12.124 "aliases": [ 00:25:12.124 "00000000-0000-0000-0000-000000000003" 00:25:12.124 ], 00:25:12.124 "product_name": "passthru", 00:25:12.124 "block_size": 512, 00:25:12.124 "num_blocks": 65536, 00:25:12.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:12.124 "assigned_rate_limits": { 00:25:12.124 "rw_ios_per_sec": 0, 00:25:12.124 "rw_mbytes_per_sec": 0, 00:25:12.124 "r_mbytes_per_sec": 0, 00:25:12.124 "w_mbytes_per_sec": 0 00:25:12.124 }, 00:25:12.124 "claimed": true, 00:25:12.124 "claim_type": 
"exclusive_write", 00:25:12.124 "zoned": false, 00:25:12.124 "supported_io_types": { 00:25:12.124 "read": true, 00:25:12.124 "write": true, 00:25:12.124 "unmap": true, 00:25:12.124 "flush": true, 00:25:12.124 "reset": true, 00:25:12.124 "nvme_admin": false, 00:25:12.124 "nvme_io": false, 00:25:12.124 "nvme_io_md": false, 00:25:12.124 "write_zeroes": true, 00:25:12.124 "zcopy": true, 00:25:12.124 "get_zone_info": false, 00:25:12.124 "zone_management": false, 00:25:12.124 "zone_append": false, 00:25:12.124 "compare": false, 00:25:12.124 "compare_and_write": false, 00:25:12.124 "abort": true, 00:25:12.124 "seek_hole": false, 00:25:12.124 "seek_data": false, 00:25:12.124 "copy": true, 00:25:12.124 "nvme_iov_md": false 00:25:12.124 }, 00:25:12.124 "memory_domains": [ 00:25:12.124 { 00:25:12.124 "dma_device_id": "system", 00:25:12.124 "dma_device_type": 1 00:25:12.124 }, 00:25:12.124 { 00:25:12.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.124 "dma_device_type": 2 00:25:12.124 } 00:25:12.124 ], 00:25:12.124 "driver_specific": { 00:25:12.124 "passthru": { 00:25:12.124 "name": "pt3", 00:25:12.124 "base_bdev_name": "malloc3" 00:25:12.124 } 00:25:12.124 } 00:25:12.124 }' 00:25:12.124 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.382 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.382 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.382 11:36:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.382 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.382 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:12.382 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.641 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.641 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:12.641 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.641 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.641 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:12.641 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:12.641 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:12.641 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.900 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.900 "name": "pt4", 00:25:12.900 "aliases": [ 00:25:12.900 "00000000-0000-0000-0000-000000000004" 00:25:12.900 ], 00:25:12.900 "product_name": "passthru", 00:25:12.900 "block_size": 512, 00:25:12.900 "num_blocks": 65536, 00:25:12.900 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:12.900 "assigned_rate_limits": { 00:25:12.900 "rw_ios_per_sec": 0, 00:25:12.900 "rw_mbytes_per_sec": 0, 00:25:12.900 "r_mbytes_per_sec": 0, 00:25:12.900 "w_mbytes_per_sec": 0 00:25:12.900 }, 00:25:12.900 "claimed": true, 00:25:12.900 "claim_type": "exclusive_write", 00:25:12.900 "zoned": false, 00:25:12.900 "supported_io_types": { 00:25:12.900 "read": true, 00:25:12.900 "write": true, 00:25:12.900 
"unmap": true, 00:25:12.900 "flush": true, 00:25:12.900 "reset": true, 00:25:12.900 "nvme_admin": false, 00:25:12.900 "nvme_io": false, 00:25:12.900 "nvme_io_md": false, 00:25:12.900 "write_zeroes": true, 00:25:12.900 "zcopy": true, 00:25:12.900 "get_zone_info": false, 00:25:12.900 "zone_management": false, 00:25:12.900 "zone_append": false, 00:25:12.900 "compare": false, 00:25:12.900 "compare_and_write": false, 00:25:12.900 "abort": true, 00:25:12.900 "seek_hole": false, 00:25:12.900 "seek_data": false, 00:25:12.900 "copy": true, 00:25:12.900 "nvme_iov_md": false 00:25:12.900 }, 00:25:12.900 "memory_domains": [ 00:25:12.900 { 00:25:12.900 "dma_device_id": "system", 00:25:12.900 "dma_device_type": 1 00:25:12.900 }, 00:25:12.900 { 00:25:12.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.900 "dma_device_type": 2 00:25:12.900 } 00:25:12.900 ], 00:25:12.900 "driver_specific": { 00:25:12.900 "passthru": { 00:25:12.900 "name": "pt4", 00:25:12.900 "base_bdev_name": "malloc4" 00:25:12.900 } 00:25:12.900 } 00:25:12.900 }' 00:25:12.900 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.159 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.159 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:13.159 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.159 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.159 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.159 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.417 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.417 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.417 11:36:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.417 11:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.417 11:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.417 11:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:13.417 11:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:13.676 [2024-07-13 11:36:48.336303] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3be59a42-e11d-47db-82fa-47275e3f20b4 '!=' 3be59a42-e11d-47db-82fa-47275e3f20b4 ']' 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 140073 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 140073 ']' 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 140073 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:25:13.676 11:36:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140073 00:25:13.676 killing process with pid 140073 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140073' 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 140073 00:25:13.676 11:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 140073 00:25:13.676 [2024-07-13 11:36:48.364329] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:13.676 [2024-07-13 11:36:48.364402] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:13.676 [2024-07-13 11:36:48.364467] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:13.676 [2024-07-13 11:36:48.364485] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:25:13.934 [2024-07-13 11:36:48.631941] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:15.310 ************************************ 00:25:15.310 END TEST raid_superblock_test 00:25:15.310 ************************************ 00:25:15.310 11:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:25:15.310 00:25:15.310 real 0m17.634s 00:25:15.311 user 0m32.162s 00:25:15.311 sys 0m1.987s 00:25:15.311 11:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:15.311 11:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.311 11:36:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:15.311 11:36:49 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:25:15.311 11:36:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:15.311 11:36:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:15.311 11:36:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:15.311 ************************************ 00:25:15.311 START TEST raid_read_error_test 00:25:15.311 ************************************ 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.vS1OBZk7uu 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=140654 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 140654 /var/tmp/spdk-raid.sock 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 140654 ']' 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:15.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
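Before the passthru and error bdevs start appearing in the trace below, the RPC sequence the test drives against the bdevperf app listening on /var/tmp/spdk-raid.sock is easier to follow in condensed form. The sketch below uses only commands that appear verbatim in this trace; the RPC= shorthand is introduced here for brevity and is not a variable from the script, and the first three calls are repeated for BaseBdev2 through BaseBdev4 before the final bdev_raid_create.

# shorthand for the rpc.py invocation used throughout this trace (illustrative only)
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# malloc bdev (size 32, 512-byte blocks), wrapped in an error bdev, then a passthru
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
$RPC bdev_error_create BaseBdev1_malloc                        # exposed as EE_BaseBdev1_malloc
$RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# ... same three calls for BaseBdev2..BaseBdev4 ...
# concat raid with 64 KB strip size and an on-disk superblock (-s)
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s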
00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:15.311 11:36:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.311 [2024-07-13 11:36:49.793480] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:15.311 [2024-07-13 11:36:49.793689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140654 ] 00:25:15.311 [2024-07-13 11:36:49.965370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.570 [2024-07-13 11:36:50.175079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.828 [2024-07-13 11:36:50.364661] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:16.087 11:36:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.087 11:36:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:25:16.087 11:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:16.087 11:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:16.345 BaseBdev1_malloc 00:25:16.345 11:36:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:16.603 true 00:25:16.603 11:36:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:16.603 [2024-07-13 11:36:51.334491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:16.603 [2024-07-13 11:36:51.334583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:16.603 [2024-07-13 11:36:51.334616] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:16.603 [2024-07-13 11:36:51.334636] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:16.603 [2024-07-13 11:36:51.336462] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:16.603 [2024-07-13 11:36:51.336507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:16.603 BaseBdev1 00:25:16.603 11:36:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:16.603 11:36:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:17.170 BaseBdev2_malloc 00:25:17.170 11:36:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:17.170 true 00:25:17.170 11:36:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:17.429 [2024-07-13 11:36:52.095948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:25:17.429 [2024-07-13 11:36:52.096037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:17.429 [2024-07-13 11:36:52.096079] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:17.429 [2024-07-13 11:36:52.096100] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:17.429 [2024-07-13 11:36:52.098273] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:17.429 [2024-07-13 11:36:52.098320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:17.429 BaseBdev2 00:25:17.429 11:36:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:17.429 11:36:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:17.687 BaseBdev3_malloc 00:25:17.687 11:36:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:17.944 true 00:25:17.944 11:36:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:18.203 [2024-07-13 11:36:52.756402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:18.203 [2024-07-13 11:36:52.756502] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.203 [2024-07-13 11:36:52.756538] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:18.203 [2024-07-13 11:36:52.756564] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.203 [2024-07-13 11:36:52.758514] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.203 [2024-07-13 11:36:52.758564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:18.203 BaseBdev3 00:25:18.203 11:36:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:18.203 11:36:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:18.462 BaseBdev4_malloc 00:25:18.462 11:36:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:18.462 true 00:25:18.462 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:18.720 [2024-07-13 11:36:53.401509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:18.720 [2024-07-13 11:36:53.401593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.720 [2024-07-13 11:36:53.401626] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:18.720 [2024-07-13 11:36:53.401649] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.720 [2024-07-13 11:36:53.403822] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:18.720 [2024-07-13 11:36:53.403873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:18.720 BaseBdev4 00:25:18.720 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:18.978 [2024-07-13 11:36:53.589586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:18.978 [2024-07-13 11:36:53.591486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:18.978 [2024-07-13 11:36:53.591581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:18.978 [2024-07-13 11:36:53.591650] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:18.978 [2024-07-13 11:36:53.591882] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:25:18.978 [2024-07-13 11:36:53.591904] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:18.978 [2024-07-13 11:36:53.592013] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:18.978 [2024-07-13 11:36:53.592389] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:25:18.978 [2024-07-13 11:36:53.592418] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:25:18.978 [2024-07-13 11:36:53.592560] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:18.978 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.236 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.236 "name": "raid_bdev1", 00:25:19.236 "uuid": "12068486-700d-4dd4-aa9e-fb8726302844", 00:25:19.236 "strip_size_kb": 64, 00:25:19.236 "state": "online", 00:25:19.236 "raid_level": "concat", 00:25:19.236 "superblock": true, 00:25:19.236 "num_base_bdevs": 4, 00:25:19.236 "num_base_bdevs_discovered": 4, 00:25:19.236 
"num_base_bdevs_operational": 4, 00:25:19.236 "base_bdevs_list": [ 00:25:19.236 { 00:25:19.236 "name": "BaseBdev1", 00:25:19.236 "uuid": "3d416e28-b7c9-5565-80ca-af71937ab204", 00:25:19.236 "is_configured": true, 00:25:19.236 "data_offset": 2048, 00:25:19.236 "data_size": 63488 00:25:19.236 }, 00:25:19.236 { 00:25:19.236 "name": "BaseBdev2", 00:25:19.236 "uuid": "a3a94045-ed33-5b17-8da5-f2764837af67", 00:25:19.236 "is_configured": true, 00:25:19.236 "data_offset": 2048, 00:25:19.236 "data_size": 63488 00:25:19.236 }, 00:25:19.236 { 00:25:19.236 "name": "BaseBdev3", 00:25:19.236 "uuid": "6bcec120-6c48-57b1-8e66-01f6990f6b9b", 00:25:19.236 "is_configured": true, 00:25:19.236 "data_offset": 2048, 00:25:19.236 "data_size": 63488 00:25:19.236 }, 00:25:19.236 { 00:25:19.236 "name": "BaseBdev4", 00:25:19.236 "uuid": "70c96c59-4698-5941-9e34-d1299be7bea5", 00:25:19.236 "is_configured": true, 00:25:19.236 "data_offset": 2048, 00:25:19.236 "data_size": 63488 00:25:19.236 } 00:25:19.236 ] 00:25:19.236 }' 00:25:19.236 11:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.236 11:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:19.802 11:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:25:19.802 11:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:20.061 [2024-07-13 11:36:54.560726] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.997 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:25:21.256 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:21.256 "name": "raid_bdev1", 00:25:21.256 "uuid": "12068486-700d-4dd4-aa9e-fb8726302844", 00:25:21.256 "strip_size_kb": 64, 00:25:21.256 "state": "online", 00:25:21.256 "raid_level": "concat", 00:25:21.256 "superblock": true, 00:25:21.256 "num_base_bdevs": 4, 00:25:21.256 "num_base_bdevs_discovered": 4, 00:25:21.256 "num_base_bdevs_operational": 4, 00:25:21.256 "base_bdevs_list": [ 00:25:21.256 { 00:25:21.256 "name": "BaseBdev1", 00:25:21.256 "uuid": "3d416e28-b7c9-5565-80ca-af71937ab204", 00:25:21.256 "is_configured": true, 00:25:21.256 "data_offset": 2048, 00:25:21.256 "data_size": 63488 00:25:21.256 }, 00:25:21.256 { 00:25:21.256 "name": "BaseBdev2", 00:25:21.256 "uuid": "a3a94045-ed33-5b17-8da5-f2764837af67", 00:25:21.256 "is_configured": true, 00:25:21.256 "data_offset": 2048, 00:25:21.256 "data_size": 63488 00:25:21.256 }, 00:25:21.256 { 00:25:21.256 "name": "BaseBdev3", 00:25:21.256 "uuid": "6bcec120-6c48-57b1-8e66-01f6990f6b9b", 00:25:21.256 "is_configured": true, 00:25:21.256 "data_offset": 2048, 00:25:21.256 "data_size": 63488 00:25:21.256 }, 00:25:21.256 { 00:25:21.256 "name": "BaseBdev4", 00:25:21.256 "uuid": "70c96c59-4698-5941-9e34-d1299be7bea5", 00:25:21.256 "is_configured": true, 00:25:21.257 "data_offset": 2048, 00:25:21.257 "data_size": 63488 00:25:21.257 } 00:25:21.257 ] 00:25:21.257 }' 00:25:21.257 11:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:21.257 11:36:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:22.194 [2024-07-13 11:36:56.841146] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:22.194 [2024-07-13 11:36:56.841201] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:22.194 [2024-07-13 11:36:56.843657] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.194 [2024-07-13 11:36:56.843711] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:22.194 [2024-07-13 11:36:56.843759] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:22.194 [2024-07-13 11:36:56.843778] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:25:22.194 0 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 140654 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 140654 ']' 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 140654 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140654 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:22.194 killing process with pid 140654 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140654' 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 140654 00:25:22.194 11:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 140654 00:25:22.194 [2024-07-13 11:36:56.879359] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:22.453 [2024-07-13 11:36:57.103019] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.vS1OBZk7uu 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:25:23.831 00:25:23.831 real 0m8.463s 00:25:23.831 user 0m13.152s 00:25:23.831 sys 0m0.944s 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:23.831 11:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.831 ************************************ 00:25:23.831 END TEST raid_read_error_test 00:25:23.831 ************************************ 00:25:23.831 11:36:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:23.831 11:36:58 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:25:23.831 11:36:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:23.831 11:36:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:23.831 11:36:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:23.831 ************************************ 00:25:23.831 START TEST raid_write_error_test 00:25:23.831 ************************************ 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:23.831 11:36:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.iIvtPu7hjh 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=140879 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 140879 /var/tmp/spdk-raid.sock 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 140879 ']' 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:23.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:23.831 11:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:23.831 [2024-07-13 11:36:58.315035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
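As in the read-error pass above, the state checks that follow pull the whole raid_bdev1 object with bdev_raid_get_bdevs and then compare individual fields. The one-liner below is a compressed, illustrative variant of that check, not the script's own code: the script keeps the full JSON in raid_bdev_info and tests each field separately, while the combined jq output format here is an assumption made for brevity.

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.raid_level) \(.strip_size_kb) \(.num_base_bdevs_discovered)"'
# expected for this configuration: online concat 64 4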
00:25:23.832 [2024-07-13 11:36:58.315234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140879 ] 00:25:23.832 [2024-07-13 11:36:58.483189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.090 [2024-07-13 11:36:58.666928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.349 [2024-07-13 11:36:58.854715] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:24.608 11:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:24.608 11:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:25:24.608 11:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:24.608 11:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:24.866 BaseBdev1_malloc 00:25:24.866 11:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:25.125 true 00:25:25.125 11:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:25.383 [2024-07-13 11:36:59.908529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:25.383 [2024-07-13 11:36:59.908622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.383 [2024-07-13 11:36:59.908658] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:25.383 [2024-07-13 11:36:59.908678] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.383 [2024-07-13 11:36:59.910937] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.383 [2024-07-13 11:36:59.910982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:25.383 BaseBdev1 00:25:25.383 11:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:25.383 11:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:25.642 BaseBdev2_malloc 00:25:25.642 11:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:25.642 true 00:25:25.642 11:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:25.900 [2024-07-13 11:37:00.550715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:25.900 [2024-07-13 11:37:00.550803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:25.900 [2024-07-13 11:37:00.550862] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:25.900 [2024-07-13 
11:37:00.550888] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:25.900 [2024-07-13 11:37:00.553136] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:25.900 [2024-07-13 11:37:00.553182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:25.900 BaseBdev2 00:25:25.900 11:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:25.900 11:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:26.158 BaseBdev3_malloc 00:25:26.158 11:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:26.418 true 00:25:26.418 11:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:26.677 [2024-07-13 11:37:01.215291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:26.677 [2024-07-13 11:37:01.215373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.677 [2024-07-13 11:37:01.215414] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:26.677 [2024-07-13 11:37:01.215437] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.677 [2024-07-13 11:37:01.217647] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.677 [2024-07-13 11:37:01.217697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:26.677 BaseBdev3 00:25:26.677 11:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:26.677 11:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:26.935 BaseBdev4_malloc 00:25:26.935 11:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:26.935 true 00:25:26.935 11:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:27.192 [2024-07-13 11:37:01.806336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:27.192 [2024-07-13 11:37:01.806421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.192 [2024-07-13 11:37:01.806455] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:27.193 [2024-07-13 11:37:01.806480] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.193 [2024-07-13 11:37:01.808451] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.193 [2024-07-13 11:37:01.808501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:27.193 BaseBdev4 00:25:27.193 11:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:27.450 [2024-07-13 11:37:02.038431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:27.450 [2024-07-13 11:37:02.040371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:27.450 [2024-07-13 11:37:02.040464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:27.450 [2024-07-13 11:37:02.040534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:27.450 [2024-07-13 11:37:02.040769] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:25:27.450 [2024-07-13 11:37:02.040791] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:27.450 [2024-07-13 11:37:02.040961] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:27.450 [2024-07-13 11:37:02.041329] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:25:27.450 [2024-07-13 11:37:02.041352] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:25:27.450 [2024-07-13 11:37:02.041493] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.450 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:27.706 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:27.706 "name": "raid_bdev1", 00:25:27.706 "uuid": "5dccd19c-26c1-411a-8d6a-ba755aa83354", 00:25:27.706 "strip_size_kb": 64, 00:25:27.706 "state": "online", 00:25:27.706 "raid_level": "concat", 00:25:27.706 "superblock": true, 00:25:27.706 "num_base_bdevs": 4, 00:25:27.706 "num_base_bdevs_discovered": 4, 00:25:27.706 "num_base_bdevs_operational": 4, 00:25:27.706 "base_bdevs_list": [ 00:25:27.706 { 00:25:27.706 "name": "BaseBdev1", 00:25:27.706 "uuid": "5403d725-953a-5d0d-9afa-46b12c237032", 00:25:27.706 "is_configured": true, 00:25:27.706 "data_offset": 2048, 00:25:27.706 "data_size": 63488 00:25:27.706 }, 00:25:27.706 { 
00:25:27.706 "name": "BaseBdev2", 00:25:27.707 "uuid": "8601cdcb-55df-5a37-a211-512bd02b4678", 00:25:27.707 "is_configured": true, 00:25:27.707 "data_offset": 2048, 00:25:27.707 "data_size": 63488 00:25:27.707 }, 00:25:27.707 { 00:25:27.707 "name": "BaseBdev3", 00:25:27.707 "uuid": "78961d68-0195-5073-a022-9009af0e5965", 00:25:27.707 "is_configured": true, 00:25:27.707 "data_offset": 2048, 00:25:27.707 "data_size": 63488 00:25:27.707 }, 00:25:27.707 { 00:25:27.707 "name": "BaseBdev4", 00:25:27.707 "uuid": "fafc7778-95be-54bd-adbb-38c72ed07e43", 00:25:27.707 "is_configured": true, 00:25:27.707 "data_offset": 2048, 00:25:27.707 "data_size": 63488 00:25:27.707 } 00:25:27.707 ] 00:25:27.707 }' 00:25:27.707 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:27.707 11:37:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.271 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:25:28.271 11:37:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:28.271 [2024-07-13 11:37:02.979650] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:25:29.204 11:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.463 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:29.722 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.722 "name": "raid_bdev1", 00:25:29.722 "uuid": "5dccd19c-26c1-411a-8d6a-ba755aa83354", 00:25:29.722 "strip_size_kb": 64, 00:25:29.722 "state": "online", 00:25:29.722 
"raid_level": "concat", 00:25:29.722 "superblock": true, 00:25:29.722 "num_base_bdevs": 4, 00:25:29.722 "num_base_bdevs_discovered": 4, 00:25:29.722 "num_base_bdevs_operational": 4, 00:25:29.722 "base_bdevs_list": [ 00:25:29.722 { 00:25:29.722 "name": "BaseBdev1", 00:25:29.722 "uuid": "5403d725-953a-5d0d-9afa-46b12c237032", 00:25:29.722 "is_configured": true, 00:25:29.722 "data_offset": 2048, 00:25:29.722 "data_size": 63488 00:25:29.722 }, 00:25:29.722 { 00:25:29.722 "name": "BaseBdev2", 00:25:29.722 "uuid": "8601cdcb-55df-5a37-a211-512bd02b4678", 00:25:29.722 "is_configured": true, 00:25:29.722 "data_offset": 2048, 00:25:29.722 "data_size": 63488 00:25:29.722 }, 00:25:29.722 { 00:25:29.722 "name": "BaseBdev3", 00:25:29.722 "uuid": "78961d68-0195-5073-a022-9009af0e5965", 00:25:29.722 "is_configured": true, 00:25:29.722 "data_offset": 2048, 00:25:29.722 "data_size": 63488 00:25:29.722 }, 00:25:29.722 { 00:25:29.722 "name": "BaseBdev4", 00:25:29.722 "uuid": "fafc7778-95be-54bd-adbb-38c72ed07e43", 00:25:29.722 "is_configured": true, 00:25:29.722 "data_offset": 2048, 00:25:29.722 "data_size": 63488 00:25:29.722 } 00:25:29.722 ] 00:25:29.722 }' 00:25:29.722 11:37:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.722 11:37:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:30.656 [2024-07-13 11:37:05.286116] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:30.656 [2024-07-13 11:37:05.286177] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:30.656 [2024-07-13 11:37:05.288749] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:30.656 [2024-07-13 11:37:05.288803] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.656 [2024-07-13 11:37:05.288849] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:30.656 [2024-07-13 11:37:05.288859] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:25:30.656 0 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 140879 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 140879 ']' 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 140879 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140879 00:25:30.656 killing process with pid 140879 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140879' 00:25:30.656 11:37:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 140879 00:25:30.656 11:37:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 140879 00:25:30.656 [2024-07-13 11:37:05.314994] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:30.915 [2024-07-13 11:37:05.541707] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.iIvtPu7hjh 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:32.291 ************************************ 00:25:32.291 END TEST raid_write_error_test 00:25:32.291 ************************************ 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:25:32.291 00:25:32.291 real 0m8.380s 00:25:32.291 user 0m12.930s 00:25:32.291 sys 0m0.984s 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:32.291 11:37:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.291 11:37:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:32.291 11:37:06 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:25:32.291 11:37:06 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:25:32.291 11:37:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:32.291 11:37:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:32.291 11:37:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:32.291 ************************************ 00:25:32.291 START TEST raid_state_function_test 00:25:32.291 ************************************ 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:32.291 11:37:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=141103 00:25:32.291 Process raid pid: 141103 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 141103' 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 141103 /var/tmp/spdk-raid.sock 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 141103 ']' 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:32.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:32.291 11:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:32.291 [2024-07-13 11:37:06.757051] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:25:32.291 [2024-07-13 11:37:06.757847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:32.291 [2024-07-13 11:37:06.929511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.550 [2024-07-13 11:37:07.124864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.809 [2024-07-13 11:37:07.313045] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:33.067 11:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.067 11:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:25:33.067 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:33.325 [2024-07-13 11:37:07.874326] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:33.325 [2024-07-13 11:37:07.874525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:33.325 [2024-07-13 11:37:07.874622] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:33.325 [2024-07-13 11:37:07.874744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:33.325 [2024-07-13 11:37:07.874829] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:33.325 [2024-07-13 11:37:07.874970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:33.325 [2024-07-13 11:37:07.875053] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:33.325 [2024-07-13 11:37:07.875169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.325 11:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:25:33.598 11:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.598 "name": "Existed_Raid", 00:25:33.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.598 "strip_size_kb": 0, 00:25:33.598 "state": "configuring", 00:25:33.598 "raid_level": "raid1", 00:25:33.598 "superblock": false, 00:25:33.598 "num_base_bdevs": 4, 00:25:33.598 "num_base_bdevs_discovered": 0, 00:25:33.598 "num_base_bdevs_operational": 4, 00:25:33.598 "base_bdevs_list": [ 00:25:33.598 { 00:25:33.598 "name": "BaseBdev1", 00:25:33.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.598 "is_configured": false, 00:25:33.598 "data_offset": 0, 00:25:33.598 "data_size": 0 00:25:33.598 }, 00:25:33.598 { 00:25:33.598 "name": "BaseBdev2", 00:25:33.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.598 "is_configured": false, 00:25:33.598 "data_offset": 0, 00:25:33.598 "data_size": 0 00:25:33.598 }, 00:25:33.598 { 00:25:33.598 "name": "BaseBdev3", 00:25:33.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.598 "is_configured": false, 00:25:33.598 "data_offset": 0, 00:25:33.598 "data_size": 0 00:25:33.598 }, 00:25:33.598 { 00:25:33.598 "name": "BaseBdev4", 00:25:33.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.599 "is_configured": false, 00:25:33.599 "data_offset": 0, 00:25:33.599 "data_size": 0 00:25:33.599 } 00:25:33.599 ] 00:25:33.599 }' 00:25:33.599 11:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.599 11:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.203 11:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:34.463 [2024-07-13 11:37:09.067208] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:34.463 [2024-07-13 11:37:09.067242] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:34.463 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:34.722 [2024-07-13 11:37:09.311264] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:34.722 [2024-07-13 11:37:09.311314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:34.722 [2024-07-13 11:37:09.311341] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:34.722 [2024-07-13 11:37:09.311377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:34.722 [2024-07-13 11:37:09.311386] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:34.722 [2024-07-13 11:37:09.311413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:34.722 [2024-07-13 11:37:09.311421] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:34.722 [2024-07-13 11:37:09.311440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:34.722 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:34.981 [2024-07-13 11:37:09.536851] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:34.981 BaseBdev1 00:25:34.981 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:34.981 11:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:34.981 11:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:34.981 11:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:34.981 11:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:34.981 11:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:34.981 11:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:35.239 [ 00:25:35.239 { 00:25:35.239 "name": "BaseBdev1", 00:25:35.239 "aliases": [ 00:25:35.239 "71bda55e-e61a-4be6-a132-b605b7ca107e" 00:25:35.239 ], 00:25:35.239 "product_name": "Malloc disk", 00:25:35.239 "block_size": 512, 00:25:35.239 "num_blocks": 65536, 00:25:35.239 "uuid": "71bda55e-e61a-4be6-a132-b605b7ca107e", 00:25:35.239 "assigned_rate_limits": { 00:25:35.239 "rw_ios_per_sec": 0, 00:25:35.239 "rw_mbytes_per_sec": 0, 00:25:35.239 "r_mbytes_per_sec": 0, 00:25:35.239 "w_mbytes_per_sec": 0 00:25:35.239 }, 00:25:35.239 "claimed": true, 00:25:35.239 "claim_type": "exclusive_write", 00:25:35.239 "zoned": false, 00:25:35.239 "supported_io_types": { 00:25:35.239 "read": true, 00:25:35.239 "write": true, 00:25:35.239 "unmap": true, 00:25:35.239 "flush": true, 00:25:35.239 "reset": true, 00:25:35.239 "nvme_admin": false, 00:25:35.239 "nvme_io": false, 00:25:35.239 "nvme_io_md": false, 00:25:35.239 "write_zeroes": true, 00:25:35.239 "zcopy": true, 00:25:35.239 "get_zone_info": false, 00:25:35.239 "zone_management": false, 00:25:35.239 "zone_append": false, 00:25:35.239 "compare": false, 00:25:35.239 "compare_and_write": false, 00:25:35.239 "abort": true, 00:25:35.239 "seek_hole": false, 00:25:35.239 "seek_data": false, 00:25:35.239 "copy": true, 00:25:35.239 "nvme_iov_md": false 00:25:35.239 }, 00:25:35.239 "memory_domains": [ 00:25:35.239 { 00:25:35.239 "dma_device_id": "system", 00:25:35.239 "dma_device_type": 1 00:25:35.239 }, 00:25:35.239 { 00:25:35.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.239 "dma_device_type": 2 00:25:35.239 } 00:25:35.239 ], 00:25:35.239 "driver_specific": {} 00:25:35.239 } 00:25:35.239 ] 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:35.239 11:37:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.239 11:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.498 11:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.498 "name": "Existed_Raid", 00:25:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.498 "strip_size_kb": 0, 00:25:35.498 "state": "configuring", 00:25:35.498 "raid_level": "raid1", 00:25:35.498 "superblock": false, 00:25:35.498 "num_base_bdevs": 4, 00:25:35.498 "num_base_bdevs_discovered": 1, 00:25:35.498 "num_base_bdevs_operational": 4, 00:25:35.498 "base_bdevs_list": [ 00:25:35.498 { 00:25:35.498 "name": "BaseBdev1", 00:25:35.498 "uuid": "71bda55e-e61a-4be6-a132-b605b7ca107e", 00:25:35.498 "is_configured": true, 00:25:35.498 "data_offset": 0, 00:25:35.498 "data_size": 65536 00:25:35.498 }, 00:25:35.498 { 00:25:35.498 "name": "BaseBdev2", 00:25:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.498 "is_configured": false, 00:25:35.498 "data_offset": 0, 00:25:35.498 "data_size": 0 00:25:35.498 }, 00:25:35.498 { 00:25:35.498 "name": "BaseBdev3", 00:25:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.498 "is_configured": false, 00:25:35.498 "data_offset": 0, 00:25:35.498 "data_size": 0 00:25:35.498 }, 00:25:35.498 { 00:25:35.498 "name": "BaseBdev4", 00:25:35.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.498 "is_configured": false, 00:25:35.498 "data_offset": 0, 00:25:35.498 "data_size": 0 00:25:35.498 } 00:25:35.498 ] 00:25:35.498 }' 00:25:35.498 11:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.498 11:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.065 11:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:36.323 [2024-07-13 11:37:10.965161] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:36.323 [2024-07-13 11:37:10.965210] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:25:36.323 11:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:36.582 [2024-07-13 11:37:11.221226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:36.582 [2024-07-13 11:37:11.223045] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:36.582 
[2024-07-13 11:37:11.223100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:36.582 [2024-07-13 11:37:11.223111] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:36.582 [2024-07-13 11:37:11.223134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:36.582 [2024-07-13 11:37:11.223142] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:36.582 [2024-07-13 11:37:11.223166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.582 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:36.841 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:36.841 "name": "Existed_Raid", 00:25:36.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.841 "strip_size_kb": 0, 00:25:36.841 "state": "configuring", 00:25:36.841 "raid_level": "raid1", 00:25:36.841 "superblock": false, 00:25:36.841 "num_base_bdevs": 4, 00:25:36.841 "num_base_bdevs_discovered": 1, 00:25:36.841 "num_base_bdevs_operational": 4, 00:25:36.841 "base_bdevs_list": [ 00:25:36.841 { 00:25:36.841 "name": "BaseBdev1", 00:25:36.841 "uuid": "71bda55e-e61a-4be6-a132-b605b7ca107e", 00:25:36.841 "is_configured": true, 00:25:36.841 "data_offset": 0, 00:25:36.841 "data_size": 65536 00:25:36.841 }, 00:25:36.841 { 00:25:36.841 "name": "BaseBdev2", 00:25:36.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.841 "is_configured": false, 00:25:36.841 "data_offset": 0, 00:25:36.841 "data_size": 0 00:25:36.841 }, 00:25:36.841 { 00:25:36.841 "name": "BaseBdev3", 00:25:36.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.841 "is_configured": false, 00:25:36.841 "data_offset": 0, 00:25:36.841 "data_size": 0 00:25:36.841 }, 00:25:36.841 { 00:25:36.841 "name": "BaseBdev4", 
00:25:36.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:36.841 "is_configured": false, 00:25:36.841 "data_offset": 0, 00:25:36.841 "data_size": 0 00:25:36.841 } 00:25:36.841 ] 00:25:36.841 }' 00:25:36.841 11:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:36.841 11:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:37.408 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:37.666 [2024-07-13 11:37:12.347269] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:37.666 BaseBdev2 00:25:37.666 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:37.666 11:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:37.666 11:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:37.666 11:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:37.666 11:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:37.666 11:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:37.666 11:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:37.925 11:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:38.184 [ 00:25:38.184 { 00:25:38.184 "name": "BaseBdev2", 00:25:38.184 "aliases": [ 00:25:38.184 "7389a74f-0dc1-41d5-85a8-f333654280dc" 00:25:38.184 ], 00:25:38.184 "product_name": "Malloc disk", 00:25:38.184 "block_size": 512, 00:25:38.184 "num_blocks": 65536, 00:25:38.184 "uuid": "7389a74f-0dc1-41d5-85a8-f333654280dc", 00:25:38.184 "assigned_rate_limits": { 00:25:38.184 "rw_ios_per_sec": 0, 00:25:38.184 "rw_mbytes_per_sec": 0, 00:25:38.184 "r_mbytes_per_sec": 0, 00:25:38.184 "w_mbytes_per_sec": 0 00:25:38.184 }, 00:25:38.184 "claimed": true, 00:25:38.184 "claim_type": "exclusive_write", 00:25:38.184 "zoned": false, 00:25:38.184 "supported_io_types": { 00:25:38.184 "read": true, 00:25:38.184 "write": true, 00:25:38.184 "unmap": true, 00:25:38.184 "flush": true, 00:25:38.184 "reset": true, 00:25:38.184 "nvme_admin": false, 00:25:38.184 "nvme_io": false, 00:25:38.184 "nvme_io_md": false, 00:25:38.184 "write_zeroes": true, 00:25:38.184 "zcopy": true, 00:25:38.184 "get_zone_info": false, 00:25:38.184 "zone_management": false, 00:25:38.184 "zone_append": false, 00:25:38.184 "compare": false, 00:25:38.184 "compare_and_write": false, 00:25:38.184 "abort": true, 00:25:38.184 "seek_hole": false, 00:25:38.184 "seek_data": false, 00:25:38.184 "copy": true, 00:25:38.184 "nvme_iov_md": false 00:25:38.184 }, 00:25:38.184 "memory_domains": [ 00:25:38.184 { 00:25:38.184 "dma_device_id": "system", 00:25:38.184 "dma_device_type": 1 00:25:38.184 }, 00:25:38.184 { 00:25:38.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.184 "dma_device_type": 2 00:25:38.184 } 00:25:38.184 ], 00:25:38.184 "driver_specific": {} 00:25:38.184 } 00:25:38.184 ] 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.184 11:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.442 11:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:38.442 "name": "Existed_Raid", 00:25:38.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.442 "strip_size_kb": 0, 00:25:38.442 "state": "configuring", 00:25:38.442 "raid_level": "raid1", 00:25:38.442 "superblock": false, 00:25:38.442 "num_base_bdevs": 4, 00:25:38.442 "num_base_bdevs_discovered": 2, 00:25:38.442 "num_base_bdevs_operational": 4, 00:25:38.442 "base_bdevs_list": [ 00:25:38.442 { 00:25:38.442 "name": "BaseBdev1", 00:25:38.442 "uuid": "71bda55e-e61a-4be6-a132-b605b7ca107e", 00:25:38.442 "is_configured": true, 00:25:38.442 "data_offset": 0, 00:25:38.442 "data_size": 65536 00:25:38.442 }, 00:25:38.442 { 00:25:38.442 "name": "BaseBdev2", 00:25:38.442 "uuid": "7389a74f-0dc1-41d5-85a8-f333654280dc", 00:25:38.442 "is_configured": true, 00:25:38.442 "data_offset": 0, 00:25:38.442 "data_size": 65536 00:25:38.442 }, 00:25:38.442 { 00:25:38.442 "name": "BaseBdev3", 00:25:38.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.442 "is_configured": false, 00:25:38.442 "data_offset": 0, 00:25:38.442 "data_size": 0 00:25:38.442 }, 00:25:38.442 { 00:25:38.442 "name": "BaseBdev4", 00:25:38.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.442 "is_configured": false, 00:25:38.442 "data_offset": 0, 00:25:38.442 "data_size": 0 00:25:38.442 } 00:25:38.442 ] 00:25:38.442 }' 00:25:38.442 11:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:38.442 11:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.376 11:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
00:25:39.376 [2024-07-13 11:37:13.994712] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:39.376 BaseBdev3 00:25:39.376 11:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:39.376 11:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:39.376 11:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:39.376 11:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:39.376 11:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:39.376 11:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:39.376 11:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:39.635 11:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:39.894 [ 00:25:39.894 { 00:25:39.894 "name": "BaseBdev3", 00:25:39.894 "aliases": [ 00:25:39.894 "cfeca2b0-7f4c-49ce-8e84-9460c5a93f5b" 00:25:39.894 ], 00:25:39.894 "product_name": "Malloc disk", 00:25:39.894 "block_size": 512, 00:25:39.894 "num_blocks": 65536, 00:25:39.894 "uuid": "cfeca2b0-7f4c-49ce-8e84-9460c5a93f5b", 00:25:39.894 "assigned_rate_limits": { 00:25:39.894 "rw_ios_per_sec": 0, 00:25:39.894 "rw_mbytes_per_sec": 0, 00:25:39.894 "r_mbytes_per_sec": 0, 00:25:39.894 "w_mbytes_per_sec": 0 00:25:39.894 }, 00:25:39.894 "claimed": true, 00:25:39.894 "claim_type": "exclusive_write", 00:25:39.894 "zoned": false, 00:25:39.894 "supported_io_types": { 00:25:39.894 "read": true, 00:25:39.894 "write": true, 00:25:39.894 "unmap": true, 00:25:39.894 "flush": true, 00:25:39.894 "reset": true, 00:25:39.894 "nvme_admin": false, 00:25:39.894 "nvme_io": false, 00:25:39.894 "nvme_io_md": false, 00:25:39.894 "write_zeroes": true, 00:25:39.894 "zcopy": true, 00:25:39.894 "get_zone_info": false, 00:25:39.894 "zone_management": false, 00:25:39.894 "zone_append": false, 00:25:39.894 "compare": false, 00:25:39.894 "compare_and_write": false, 00:25:39.894 "abort": true, 00:25:39.894 "seek_hole": false, 00:25:39.894 "seek_data": false, 00:25:39.894 "copy": true, 00:25:39.894 "nvme_iov_md": false 00:25:39.894 }, 00:25:39.894 "memory_domains": [ 00:25:39.894 { 00:25:39.894 "dma_device_id": "system", 00:25:39.894 "dma_device_type": 1 00:25:39.894 }, 00:25:39.894 { 00:25:39.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.894 "dma_device_type": 2 00:25:39.894 } 00:25:39.894 ], 00:25:39.894 "driver_specific": {} 00:25:39.894 } 00:25:39.894 ] 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.894 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:40.151 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:40.151 "name": "Existed_Raid", 00:25:40.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.151 "strip_size_kb": 0, 00:25:40.151 "state": "configuring", 00:25:40.151 "raid_level": "raid1", 00:25:40.151 "superblock": false, 00:25:40.151 "num_base_bdevs": 4, 00:25:40.151 "num_base_bdevs_discovered": 3, 00:25:40.151 "num_base_bdevs_operational": 4, 00:25:40.151 "base_bdevs_list": [ 00:25:40.151 { 00:25:40.151 "name": "BaseBdev1", 00:25:40.151 "uuid": "71bda55e-e61a-4be6-a132-b605b7ca107e", 00:25:40.151 "is_configured": true, 00:25:40.151 "data_offset": 0, 00:25:40.151 "data_size": 65536 00:25:40.151 }, 00:25:40.151 { 00:25:40.151 "name": "BaseBdev2", 00:25:40.151 "uuid": "7389a74f-0dc1-41d5-85a8-f333654280dc", 00:25:40.151 "is_configured": true, 00:25:40.151 "data_offset": 0, 00:25:40.151 "data_size": 65536 00:25:40.151 }, 00:25:40.151 { 00:25:40.151 "name": "BaseBdev3", 00:25:40.151 "uuid": "cfeca2b0-7f4c-49ce-8e84-9460c5a93f5b", 00:25:40.151 "is_configured": true, 00:25:40.151 "data_offset": 0, 00:25:40.151 "data_size": 65536 00:25:40.151 }, 00:25:40.151 { 00:25:40.151 "name": "BaseBdev4", 00:25:40.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:40.151 "is_configured": false, 00:25:40.151 "data_offset": 0, 00:25:40.151 "data_size": 0 00:25:40.151 } 00:25:40.151 ] 00:25:40.151 }' 00:25:40.151 11:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:40.151 11:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:40.716 11:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:40.973 [2024-07-13 11:37:15.681380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:40.973 [2024-07-13 11:37:15.681465] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:25:40.974 [2024-07-13 11:37:15.681502] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:40.974 [2024-07-13 11:37:15.681683] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:25:40.974 [2024-07-13 11:37:15.682054] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000007580 00:25:40.974 [2024-07-13 11:37:15.682081] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:25:40.974 BaseBdev4 00:25:40.974 [2024-07-13 11:37:15.682343] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:40.974 11:37:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:40.974 11:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:40.974 11:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:40.974 11:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:40.974 11:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:40.974 11:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:40.974 11:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:41.232 11:37:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:41.522 [ 00:25:41.522 { 00:25:41.522 "name": "BaseBdev4", 00:25:41.522 "aliases": [ 00:25:41.522 "d4ae8181-37bd-4300-9242-c5cb1f2a0758" 00:25:41.522 ], 00:25:41.523 "product_name": "Malloc disk", 00:25:41.523 "block_size": 512, 00:25:41.523 "num_blocks": 65536, 00:25:41.523 "uuid": "d4ae8181-37bd-4300-9242-c5cb1f2a0758", 00:25:41.523 "assigned_rate_limits": { 00:25:41.523 "rw_ios_per_sec": 0, 00:25:41.523 "rw_mbytes_per_sec": 0, 00:25:41.523 "r_mbytes_per_sec": 0, 00:25:41.523 "w_mbytes_per_sec": 0 00:25:41.523 }, 00:25:41.523 "claimed": true, 00:25:41.523 "claim_type": "exclusive_write", 00:25:41.523 "zoned": false, 00:25:41.523 "supported_io_types": { 00:25:41.523 "read": true, 00:25:41.523 "write": true, 00:25:41.523 "unmap": true, 00:25:41.523 "flush": true, 00:25:41.523 "reset": true, 00:25:41.523 "nvme_admin": false, 00:25:41.523 "nvme_io": false, 00:25:41.523 "nvme_io_md": false, 00:25:41.523 "write_zeroes": true, 00:25:41.523 "zcopy": true, 00:25:41.523 "get_zone_info": false, 00:25:41.523 "zone_management": false, 00:25:41.523 "zone_append": false, 00:25:41.523 "compare": false, 00:25:41.523 "compare_and_write": false, 00:25:41.523 "abort": true, 00:25:41.523 "seek_hole": false, 00:25:41.523 "seek_data": false, 00:25:41.523 "copy": true, 00:25:41.523 "nvme_iov_md": false 00:25:41.523 }, 00:25:41.523 "memory_domains": [ 00:25:41.523 { 00:25:41.523 "dma_device_id": "system", 00:25:41.523 "dma_device_type": 1 00:25:41.523 }, 00:25:41.523 { 00:25:41.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.523 "dma_device_type": 2 00:25:41.523 } 00:25:41.523 ], 00:25:41.523 "driver_specific": {} 00:25:41.523 } 00:25:41.523 ] 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.523 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.780 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:41.780 "name": "Existed_Raid", 00:25:41.780 "uuid": "1e444afe-ba07-444b-b627-654e14f54b13", 00:25:41.780 "strip_size_kb": 0, 00:25:41.780 "state": "online", 00:25:41.780 "raid_level": "raid1", 00:25:41.780 "superblock": false, 00:25:41.780 "num_base_bdevs": 4, 00:25:41.780 "num_base_bdevs_discovered": 4, 00:25:41.781 "num_base_bdevs_operational": 4, 00:25:41.781 "base_bdevs_list": [ 00:25:41.781 { 00:25:41.781 "name": "BaseBdev1", 00:25:41.781 "uuid": "71bda55e-e61a-4be6-a132-b605b7ca107e", 00:25:41.781 "is_configured": true, 00:25:41.781 "data_offset": 0, 00:25:41.781 "data_size": 65536 00:25:41.781 }, 00:25:41.781 { 00:25:41.781 "name": "BaseBdev2", 00:25:41.781 "uuid": "7389a74f-0dc1-41d5-85a8-f333654280dc", 00:25:41.781 "is_configured": true, 00:25:41.781 "data_offset": 0, 00:25:41.781 "data_size": 65536 00:25:41.781 }, 00:25:41.781 { 00:25:41.781 "name": "BaseBdev3", 00:25:41.781 "uuid": "cfeca2b0-7f4c-49ce-8e84-9460c5a93f5b", 00:25:41.781 "is_configured": true, 00:25:41.781 "data_offset": 0, 00:25:41.781 "data_size": 65536 00:25:41.781 }, 00:25:41.781 { 00:25:41.781 "name": "BaseBdev4", 00:25:41.781 "uuid": "d4ae8181-37bd-4300-9242-c5cb1f2a0758", 00:25:41.781 "is_configured": true, 00:25:41.781 "data_offset": 0, 00:25:41.781 "data_size": 65536 00:25:41.781 } 00:25:41.781 ] 00:25:41.781 }' 00:25:41.781 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:41.781 11:37:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.347 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:42.347 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:42.347 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:42.347 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:42.347 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:42.347 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # 
local name 00:25:42.347 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:42.347 11:37:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:42.604 [2024-07-13 11:37:17.226013] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:42.604 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:42.604 "name": "Existed_Raid", 00:25:42.604 "aliases": [ 00:25:42.604 "1e444afe-ba07-444b-b627-654e14f54b13" 00:25:42.604 ], 00:25:42.604 "product_name": "Raid Volume", 00:25:42.604 "block_size": 512, 00:25:42.604 "num_blocks": 65536, 00:25:42.604 "uuid": "1e444afe-ba07-444b-b627-654e14f54b13", 00:25:42.604 "assigned_rate_limits": { 00:25:42.604 "rw_ios_per_sec": 0, 00:25:42.604 "rw_mbytes_per_sec": 0, 00:25:42.604 "r_mbytes_per_sec": 0, 00:25:42.604 "w_mbytes_per_sec": 0 00:25:42.604 }, 00:25:42.604 "claimed": false, 00:25:42.604 "zoned": false, 00:25:42.604 "supported_io_types": { 00:25:42.604 "read": true, 00:25:42.604 "write": true, 00:25:42.604 "unmap": false, 00:25:42.604 "flush": false, 00:25:42.604 "reset": true, 00:25:42.604 "nvme_admin": false, 00:25:42.604 "nvme_io": false, 00:25:42.604 "nvme_io_md": false, 00:25:42.604 "write_zeroes": true, 00:25:42.604 "zcopy": false, 00:25:42.604 "get_zone_info": false, 00:25:42.604 "zone_management": false, 00:25:42.604 "zone_append": false, 00:25:42.604 "compare": false, 00:25:42.604 "compare_and_write": false, 00:25:42.604 "abort": false, 00:25:42.604 "seek_hole": false, 00:25:42.604 "seek_data": false, 00:25:42.604 "copy": false, 00:25:42.604 "nvme_iov_md": false 00:25:42.604 }, 00:25:42.604 "memory_domains": [ 00:25:42.604 { 00:25:42.604 "dma_device_id": "system", 00:25:42.604 "dma_device_type": 1 00:25:42.604 }, 00:25:42.604 { 00:25:42.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.604 "dma_device_type": 2 00:25:42.604 }, 00:25:42.604 { 00:25:42.604 "dma_device_id": "system", 00:25:42.604 "dma_device_type": 1 00:25:42.604 }, 00:25:42.604 { 00:25:42.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.604 "dma_device_type": 2 00:25:42.604 }, 00:25:42.604 { 00:25:42.604 "dma_device_id": "system", 00:25:42.604 "dma_device_type": 1 00:25:42.604 }, 00:25:42.604 { 00:25:42.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.604 "dma_device_type": 2 00:25:42.604 }, 00:25:42.604 { 00:25:42.604 "dma_device_id": "system", 00:25:42.604 "dma_device_type": 1 00:25:42.604 }, 00:25:42.604 { 00:25:42.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.604 "dma_device_type": 2 00:25:42.605 } 00:25:42.605 ], 00:25:42.605 "driver_specific": { 00:25:42.605 "raid": { 00:25:42.605 "uuid": "1e444afe-ba07-444b-b627-654e14f54b13", 00:25:42.605 "strip_size_kb": 0, 00:25:42.605 "state": "online", 00:25:42.605 "raid_level": "raid1", 00:25:42.605 "superblock": false, 00:25:42.605 "num_base_bdevs": 4, 00:25:42.605 "num_base_bdevs_discovered": 4, 00:25:42.605 "num_base_bdevs_operational": 4, 00:25:42.605 "base_bdevs_list": [ 00:25:42.605 { 00:25:42.605 "name": "BaseBdev1", 00:25:42.605 "uuid": "71bda55e-e61a-4be6-a132-b605b7ca107e", 00:25:42.605 "is_configured": true, 00:25:42.605 "data_offset": 0, 00:25:42.605 "data_size": 65536 00:25:42.605 }, 00:25:42.605 { 00:25:42.605 "name": "BaseBdev2", 00:25:42.605 "uuid": "7389a74f-0dc1-41d5-85a8-f333654280dc", 00:25:42.605 "is_configured": true, 00:25:42.605 "data_offset": 0, 00:25:42.605 
"data_size": 65536 00:25:42.605 }, 00:25:42.605 { 00:25:42.605 "name": "BaseBdev3", 00:25:42.605 "uuid": "cfeca2b0-7f4c-49ce-8e84-9460c5a93f5b", 00:25:42.605 "is_configured": true, 00:25:42.605 "data_offset": 0, 00:25:42.605 "data_size": 65536 00:25:42.605 }, 00:25:42.605 { 00:25:42.605 "name": "BaseBdev4", 00:25:42.605 "uuid": "d4ae8181-37bd-4300-9242-c5cb1f2a0758", 00:25:42.605 "is_configured": true, 00:25:42.605 "data_offset": 0, 00:25:42.605 "data_size": 65536 00:25:42.605 } 00:25:42.605 ] 00:25:42.605 } 00:25:42.605 } 00:25:42.605 }' 00:25:42.605 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:42.605 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:42.605 BaseBdev2 00:25:42.605 BaseBdev3 00:25:42.605 BaseBdev4' 00:25:42.605 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:42.605 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:42.605 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:42.861 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:42.861 "name": "BaseBdev1", 00:25:42.861 "aliases": [ 00:25:42.861 "71bda55e-e61a-4be6-a132-b605b7ca107e" 00:25:42.861 ], 00:25:42.861 "product_name": "Malloc disk", 00:25:42.861 "block_size": 512, 00:25:42.861 "num_blocks": 65536, 00:25:42.861 "uuid": "71bda55e-e61a-4be6-a132-b605b7ca107e", 00:25:42.861 "assigned_rate_limits": { 00:25:42.861 "rw_ios_per_sec": 0, 00:25:42.861 "rw_mbytes_per_sec": 0, 00:25:42.861 "r_mbytes_per_sec": 0, 00:25:42.861 "w_mbytes_per_sec": 0 00:25:42.861 }, 00:25:42.861 "claimed": true, 00:25:42.861 "claim_type": "exclusive_write", 00:25:42.861 "zoned": false, 00:25:42.861 "supported_io_types": { 00:25:42.861 "read": true, 00:25:42.861 "write": true, 00:25:42.861 "unmap": true, 00:25:42.861 "flush": true, 00:25:42.861 "reset": true, 00:25:42.861 "nvme_admin": false, 00:25:42.861 "nvme_io": false, 00:25:42.861 "nvme_io_md": false, 00:25:42.861 "write_zeroes": true, 00:25:42.861 "zcopy": true, 00:25:42.861 "get_zone_info": false, 00:25:42.861 "zone_management": false, 00:25:42.861 "zone_append": false, 00:25:42.861 "compare": false, 00:25:42.861 "compare_and_write": false, 00:25:42.861 "abort": true, 00:25:42.861 "seek_hole": false, 00:25:42.861 "seek_data": false, 00:25:42.861 "copy": true, 00:25:42.861 "nvme_iov_md": false 00:25:42.861 }, 00:25:42.861 "memory_domains": [ 00:25:42.861 { 00:25:42.861 "dma_device_id": "system", 00:25:42.861 "dma_device_type": 1 00:25:42.861 }, 00:25:42.861 { 00:25:42.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:42.861 "dma_device_type": 2 00:25:42.861 } 00:25:42.861 ], 00:25:42.861 "driver_specific": {} 00:25:42.861 }' 00:25:42.861 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:42.861 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:42.861 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:42.861 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.118 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.118 11:37:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:43.118 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.118 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.118 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:43.118 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.376 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.376 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:43.376 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:43.376 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:43.376 11:37:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:43.634 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:43.634 "name": "BaseBdev2", 00:25:43.634 "aliases": [ 00:25:43.634 "7389a74f-0dc1-41d5-85a8-f333654280dc" 00:25:43.634 ], 00:25:43.634 "product_name": "Malloc disk", 00:25:43.634 "block_size": 512, 00:25:43.634 "num_blocks": 65536, 00:25:43.634 "uuid": "7389a74f-0dc1-41d5-85a8-f333654280dc", 00:25:43.634 "assigned_rate_limits": { 00:25:43.634 "rw_ios_per_sec": 0, 00:25:43.634 "rw_mbytes_per_sec": 0, 00:25:43.634 "r_mbytes_per_sec": 0, 00:25:43.634 "w_mbytes_per_sec": 0 00:25:43.634 }, 00:25:43.634 "claimed": true, 00:25:43.634 "claim_type": "exclusive_write", 00:25:43.634 "zoned": false, 00:25:43.634 "supported_io_types": { 00:25:43.634 "read": true, 00:25:43.634 "write": true, 00:25:43.634 "unmap": true, 00:25:43.634 "flush": true, 00:25:43.634 "reset": true, 00:25:43.634 "nvme_admin": false, 00:25:43.634 "nvme_io": false, 00:25:43.634 "nvme_io_md": false, 00:25:43.634 "write_zeroes": true, 00:25:43.634 "zcopy": true, 00:25:43.634 "get_zone_info": false, 00:25:43.634 "zone_management": false, 00:25:43.634 "zone_append": false, 00:25:43.634 "compare": false, 00:25:43.634 "compare_and_write": false, 00:25:43.634 "abort": true, 00:25:43.634 "seek_hole": false, 00:25:43.634 "seek_data": false, 00:25:43.634 "copy": true, 00:25:43.634 "nvme_iov_md": false 00:25:43.634 }, 00:25:43.634 "memory_domains": [ 00:25:43.634 { 00:25:43.634 "dma_device_id": "system", 00:25:43.634 "dma_device_type": 1 00:25:43.634 }, 00:25:43.634 { 00:25:43.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:43.634 "dma_device_type": 2 00:25:43.634 } 00:25:43.634 ], 00:25:43.634 "driver_specific": {} 00:25:43.634 }' 00:25:43.634 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.634 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:43.634 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:43.634 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.634 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:43.892 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:43.892 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:25:43.892 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:43.892 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:43.892 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.892 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:43.892 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:43.893 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:43.893 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:43.893 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:44.459 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.459 "name": "BaseBdev3", 00:25:44.459 "aliases": [ 00:25:44.459 "cfeca2b0-7f4c-49ce-8e84-9460c5a93f5b" 00:25:44.459 ], 00:25:44.459 "product_name": "Malloc disk", 00:25:44.459 "block_size": 512, 00:25:44.459 "num_blocks": 65536, 00:25:44.459 "uuid": "cfeca2b0-7f4c-49ce-8e84-9460c5a93f5b", 00:25:44.459 "assigned_rate_limits": { 00:25:44.459 "rw_ios_per_sec": 0, 00:25:44.459 "rw_mbytes_per_sec": 0, 00:25:44.459 "r_mbytes_per_sec": 0, 00:25:44.459 "w_mbytes_per_sec": 0 00:25:44.459 }, 00:25:44.459 "claimed": true, 00:25:44.459 "claim_type": "exclusive_write", 00:25:44.459 "zoned": false, 00:25:44.459 "supported_io_types": { 00:25:44.459 "read": true, 00:25:44.459 "write": true, 00:25:44.459 "unmap": true, 00:25:44.459 "flush": true, 00:25:44.459 "reset": true, 00:25:44.459 "nvme_admin": false, 00:25:44.459 "nvme_io": false, 00:25:44.459 "nvme_io_md": false, 00:25:44.459 "write_zeroes": true, 00:25:44.459 "zcopy": true, 00:25:44.459 "get_zone_info": false, 00:25:44.459 "zone_management": false, 00:25:44.459 "zone_append": false, 00:25:44.459 "compare": false, 00:25:44.459 "compare_and_write": false, 00:25:44.459 "abort": true, 00:25:44.459 "seek_hole": false, 00:25:44.459 "seek_data": false, 00:25:44.459 "copy": true, 00:25:44.459 "nvme_iov_md": false 00:25:44.459 }, 00:25:44.459 "memory_domains": [ 00:25:44.459 { 00:25:44.459 "dma_device_id": "system", 00:25:44.459 "dma_device_type": 1 00:25:44.459 }, 00:25:44.459 { 00:25:44.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.459 "dma_device_type": 2 00:25:44.459 } 00:25:44.459 ], 00:25:44.459 "driver_specific": {} 00:25:44.459 }' 00:25:44.459 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.459 11:37:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.459 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:44.459 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.459 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:44.459 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:44.459 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.459 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:44.717 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:25:44.717 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.717 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:44.717 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:44.717 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:44.718 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:44.718 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:44.976 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:44.976 "name": "BaseBdev4", 00:25:44.976 "aliases": [ 00:25:44.976 "d4ae8181-37bd-4300-9242-c5cb1f2a0758" 00:25:44.976 ], 00:25:44.976 "product_name": "Malloc disk", 00:25:44.976 "block_size": 512, 00:25:44.976 "num_blocks": 65536, 00:25:44.976 "uuid": "d4ae8181-37bd-4300-9242-c5cb1f2a0758", 00:25:44.976 "assigned_rate_limits": { 00:25:44.976 "rw_ios_per_sec": 0, 00:25:44.976 "rw_mbytes_per_sec": 0, 00:25:44.976 "r_mbytes_per_sec": 0, 00:25:44.976 "w_mbytes_per_sec": 0 00:25:44.976 }, 00:25:44.976 "claimed": true, 00:25:44.976 "claim_type": "exclusive_write", 00:25:44.976 "zoned": false, 00:25:44.976 "supported_io_types": { 00:25:44.976 "read": true, 00:25:44.976 "write": true, 00:25:44.976 "unmap": true, 00:25:44.976 "flush": true, 00:25:44.976 "reset": true, 00:25:44.976 "nvme_admin": false, 00:25:44.976 "nvme_io": false, 00:25:44.976 "nvme_io_md": false, 00:25:44.976 "write_zeroes": true, 00:25:44.976 "zcopy": true, 00:25:44.976 "get_zone_info": false, 00:25:44.976 "zone_management": false, 00:25:44.976 "zone_append": false, 00:25:44.976 "compare": false, 00:25:44.976 "compare_and_write": false, 00:25:44.976 "abort": true, 00:25:44.976 "seek_hole": false, 00:25:44.976 "seek_data": false, 00:25:44.976 "copy": true, 00:25:44.976 "nvme_iov_md": false 00:25:44.976 }, 00:25:44.976 "memory_domains": [ 00:25:44.976 { 00:25:44.976 "dma_device_id": "system", 00:25:44.976 "dma_device_type": 1 00:25:44.976 }, 00:25:44.976 { 00:25:44.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:44.976 "dma_device_type": 2 00:25:44.976 } 00:25:44.976 ], 00:25:44.976 "driver_specific": {} 00:25:44.976 }' 00:25:44.976 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:44.976 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:45.235 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:45.235 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.235 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:45.235 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:45.235 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.235 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:45.235 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:45.235 11:37:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:45.494 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:25:45.494 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:45.494 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:45.753 [2024-07-13 11:37:20.310589] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.753 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.012 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.012 "name": "Existed_Raid", 00:25:46.012 "uuid": "1e444afe-ba07-444b-b627-654e14f54b13", 00:25:46.012 "strip_size_kb": 0, 00:25:46.012 "state": "online", 00:25:46.012 "raid_level": "raid1", 00:25:46.012 "superblock": false, 00:25:46.012 "num_base_bdevs": 4, 00:25:46.012 "num_base_bdevs_discovered": 3, 00:25:46.012 "num_base_bdevs_operational": 3, 00:25:46.012 "base_bdevs_list": [ 00:25:46.012 { 00:25:46.012 "name": null, 00:25:46.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.012 "is_configured": false, 00:25:46.012 "data_offset": 0, 00:25:46.012 "data_size": 65536 00:25:46.012 }, 00:25:46.012 { 00:25:46.012 "name": "BaseBdev2", 00:25:46.012 "uuid": "7389a74f-0dc1-41d5-85a8-f333654280dc", 00:25:46.012 "is_configured": true, 00:25:46.012 "data_offset": 0, 00:25:46.012 "data_size": 65536 00:25:46.012 }, 00:25:46.012 { 00:25:46.012 "name": "BaseBdev3", 00:25:46.012 "uuid": "cfeca2b0-7f4c-49ce-8e84-9460c5a93f5b", 00:25:46.012 "is_configured": true, 00:25:46.012 "data_offset": 0, 00:25:46.012 "data_size": 65536 00:25:46.012 
}, 00:25:46.012 { 00:25:46.012 "name": "BaseBdev4", 00:25:46.012 "uuid": "d4ae8181-37bd-4300-9242-c5cb1f2a0758", 00:25:46.012 "is_configured": true, 00:25:46.012 "data_offset": 0, 00:25:46.012 "data_size": 65536 00:25:46.012 } 00:25:46.012 ] 00:25:46.012 }' 00:25:46.012 11:37:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.012 11:37:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.579 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:46.579 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:46.579 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.579 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:46.838 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:46.838 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:46.838 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:47.097 [2024-07-13 11:37:21.783642] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:47.356 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:47.356 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:47.356 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.356 11:37:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:47.614 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:47.614 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:47.614 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:47.614 [2024-07-13 11:37:22.303579] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:47.873 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:47.873 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:47.873 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.873 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:47.873 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:47.873 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:47.873 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:48.131 [2024-07-13 11:37:22.823758] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 
00:25:48.131 [2024-07-13 11:37:22.823879] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:48.388 [2024-07-13 11:37:22.895376] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:48.389 [2024-07-13 11:37:22.895443] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:48.389 [2024-07-13 11:37:22.895454] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:25:48.389 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:48.389 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:48.389 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.389 11:37:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:48.389 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:48.389 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:48.389 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:48.389 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:48.389 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:48.389 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:48.646 BaseBdev2 00:25:48.646 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:48.646 11:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:48.646 11:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:48.646 11:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:48.646 11:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:48.646 11:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:48.646 11:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:48.903 11:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:49.163 [ 00:25:49.163 { 00:25:49.163 "name": "BaseBdev2", 00:25:49.163 "aliases": [ 00:25:49.163 "50682e54-f839-4b7e-89b0-3f1ab0bcc27f" 00:25:49.163 ], 00:25:49.163 "product_name": "Malloc disk", 00:25:49.163 "block_size": 512, 00:25:49.163 "num_blocks": 65536, 00:25:49.163 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:25:49.163 "assigned_rate_limits": { 00:25:49.163 "rw_ios_per_sec": 0, 00:25:49.163 "rw_mbytes_per_sec": 0, 00:25:49.163 "r_mbytes_per_sec": 0, 00:25:49.163 "w_mbytes_per_sec": 0 00:25:49.163 }, 00:25:49.163 "claimed": false, 00:25:49.163 "zoned": false, 00:25:49.163 "supported_io_types": { 00:25:49.163 "read": true, 00:25:49.163 "write": true, 00:25:49.163 
"unmap": true, 00:25:49.163 "flush": true, 00:25:49.163 "reset": true, 00:25:49.163 "nvme_admin": false, 00:25:49.163 "nvme_io": false, 00:25:49.163 "nvme_io_md": false, 00:25:49.163 "write_zeroes": true, 00:25:49.163 "zcopy": true, 00:25:49.163 "get_zone_info": false, 00:25:49.163 "zone_management": false, 00:25:49.163 "zone_append": false, 00:25:49.163 "compare": false, 00:25:49.163 "compare_and_write": false, 00:25:49.163 "abort": true, 00:25:49.163 "seek_hole": false, 00:25:49.163 "seek_data": false, 00:25:49.163 "copy": true, 00:25:49.163 "nvme_iov_md": false 00:25:49.163 }, 00:25:49.163 "memory_domains": [ 00:25:49.163 { 00:25:49.163 "dma_device_id": "system", 00:25:49.163 "dma_device_type": 1 00:25:49.163 }, 00:25:49.163 { 00:25:49.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.163 "dma_device_type": 2 00:25:49.163 } 00:25:49.163 ], 00:25:49.163 "driver_specific": {} 00:25:49.163 } 00:25:49.163 ] 00:25:49.163 11:37:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:49.163 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:49.163 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:49.163 11:37:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:49.421 BaseBdev3 00:25:49.421 11:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:49.421 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:49.421 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:49.421 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:49.421 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:49.421 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:49.421 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:49.679 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:49.938 [ 00:25:49.938 { 00:25:49.938 "name": "BaseBdev3", 00:25:49.938 "aliases": [ 00:25:49.938 "4d40e842-7d60-46bb-a31b-629a957a0a1b" 00:25:49.938 ], 00:25:49.938 "product_name": "Malloc disk", 00:25:49.938 "block_size": 512, 00:25:49.938 "num_blocks": 65536, 00:25:49.938 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:25:49.938 "assigned_rate_limits": { 00:25:49.938 "rw_ios_per_sec": 0, 00:25:49.938 "rw_mbytes_per_sec": 0, 00:25:49.938 "r_mbytes_per_sec": 0, 00:25:49.938 "w_mbytes_per_sec": 0 00:25:49.938 }, 00:25:49.938 "claimed": false, 00:25:49.938 "zoned": false, 00:25:49.938 "supported_io_types": { 00:25:49.938 "read": true, 00:25:49.938 "write": true, 00:25:49.938 "unmap": true, 00:25:49.938 "flush": true, 00:25:49.938 "reset": true, 00:25:49.938 "nvme_admin": false, 00:25:49.938 "nvme_io": false, 00:25:49.938 "nvme_io_md": false, 00:25:49.938 "write_zeroes": true, 00:25:49.938 "zcopy": true, 00:25:49.938 "get_zone_info": false, 00:25:49.938 "zone_management": false, 00:25:49.938 "zone_append": false, 
00:25:49.938 "compare": false, 00:25:49.938 "compare_and_write": false, 00:25:49.938 "abort": true, 00:25:49.938 "seek_hole": false, 00:25:49.938 "seek_data": false, 00:25:49.938 "copy": true, 00:25:49.938 "nvme_iov_md": false 00:25:49.938 }, 00:25:49.938 "memory_domains": [ 00:25:49.938 { 00:25:49.938 "dma_device_id": "system", 00:25:49.938 "dma_device_type": 1 00:25:49.938 }, 00:25:49.938 { 00:25:49.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.938 "dma_device_type": 2 00:25:49.938 } 00:25:49.938 ], 00:25:49.938 "driver_specific": {} 00:25:49.938 } 00:25:49.938 ] 00:25:49.938 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:49.938 11:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:49.938 11:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:49.938 11:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:50.197 BaseBdev4 00:25:50.197 11:37:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:50.197 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:50.197 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:50.197 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:50.197 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:50.197 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:50.197 11:37:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:50.456 11:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:50.715 [ 00:25:50.715 { 00:25:50.715 "name": "BaseBdev4", 00:25:50.715 "aliases": [ 00:25:50.715 "5da9f634-dc98-4c8d-a191-b73b727507e0" 00:25:50.715 ], 00:25:50.715 "product_name": "Malloc disk", 00:25:50.715 "block_size": 512, 00:25:50.715 "num_blocks": 65536, 00:25:50.715 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:25:50.715 "assigned_rate_limits": { 00:25:50.715 "rw_ios_per_sec": 0, 00:25:50.715 "rw_mbytes_per_sec": 0, 00:25:50.715 "r_mbytes_per_sec": 0, 00:25:50.715 "w_mbytes_per_sec": 0 00:25:50.715 }, 00:25:50.715 "claimed": false, 00:25:50.715 "zoned": false, 00:25:50.715 "supported_io_types": { 00:25:50.715 "read": true, 00:25:50.715 "write": true, 00:25:50.715 "unmap": true, 00:25:50.715 "flush": true, 00:25:50.715 "reset": true, 00:25:50.715 "nvme_admin": false, 00:25:50.715 "nvme_io": false, 00:25:50.715 "nvme_io_md": false, 00:25:50.715 "write_zeroes": true, 00:25:50.715 "zcopy": true, 00:25:50.715 "get_zone_info": false, 00:25:50.715 "zone_management": false, 00:25:50.715 "zone_append": false, 00:25:50.715 "compare": false, 00:25:50.715 "compare_and_write": false, 00:25:50.715 "abort": true, 00:25:50.715 "seek_hole": false, 00:25:50.715 "seek_data": false, 00:25:50.715 "copy": true, 00:25:50.715 "nvme_iov_md": false 00:25:50.715 }, 00:25:50.715 "memory_domains": [ 00:25:50.715 { 00:25:50.715 "dma_device_id": "system", 00:25:50.715 
"dma_device_type": 1 00:25:50.715 }, 00:25:50.715 { 00:25:50.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.715 "dma_device_type": 2 00:25:50.715 } 00:25:50.715 ], 00:25:50.715 "driver_specific": {} 00:25:50.715 } 00:25:50.715 ] 00:25:50.715 11:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:50.715 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:50.715 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:50.715 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:50.973 [2024-07-13 11:37:25.518935] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:50.973 [2024-07-13 11:37:25.519010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:50.973 [2024-07-13 11:37:25.519040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:50.973 [2024-07-13 11:37:25.520534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:50.973 [2024-07-13 11:37:25.520594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.973 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.974 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:51.232 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:51.232 "name": "Existed_Raid", 00:25:51.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.232 "strip_size_kb": 0, 00:25:51.232 "state": "configuring", 00:25:51.232 "raid_level": "raid1", 00:25:51.232 "superblock": false, 00:25:51.232 "num_base_bdevs": 4, 00:25:51.232 "num_base_bdevs_discovered": 3, 00:25:51.232 "num_base_bdevs_operational": 4, 00:25:51.232 "base_bdevs_list": [ 00:25:51.232 { 00:25:51.232 "name": "BaseBdev1", 00:25:51.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.232 "is_configured": false, 
00:25:51.232 "data_offset": 0, 00:25:51.232 "data_size": 0 00:25:51.232 }, 00:25:51.232 { 00:25:51.232 "name": "BaseBdev2", 00:25:51.232 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:25:51.232 "is_configured": true, 00:25:51.232 "data_offset": 0, 00:25:51.232 "data_size": 65536 00:25:51.232 }, 00:25:51.232 { 00:25:51.232 "name": "BaseBdev3", 00:25:51.232 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:25:51.232 "is_configured": true, 00:25:51.232 "data_offset": 0, 00:25:51.232 "data_size": 65536 00:25:51.232 }, 00:25:51.232 { 00:25:51.232 "name": "BaseBdev4", 00:25:51.232 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:25:51.232 "is_configured": true, 00:25:51.232 "data_offset": 0, 00:25:51.232 "data_size": 65536 00:25:51.232 } 00:25:51.232 ] 00:25:51.232 }' 00:25:51.232 11:37:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:51.232 11:37:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.798 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:52.057 [2024-07-13 11:37:26.567146] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.057 "name": "Existed_Raid", 00:25:52.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.057 "strip_size_kb": 0, 00:25:52.057 "state": "configuring", 00:25:52.057 "raid_level": "raid1", 00:25:52.057 "superblock": false, 00:25:52.057 "num_base_bdevs": 4, 00:25:52.057 "num_base_bdevs_discovered": 2, 00:25:52.057 "num_base_bdevs_operational": 4, 00:25:52.057 "base_bdevs_list": [ 00:25:52.057 { 00:25:52.057 "name": "BaseBdev1", 00:25:52.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.057 "is_configured": false, 00:25:52.057 "data_offset": 0, 00:25:52.057 "data_size": 0 00:25:52.057 }, 00:25:52.057 { 00:25:52.057 "name": null, 00:25:52.057 "uuid": 
"50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:25:52.057 "is_configured": false, 00:25:52.057 "data_offset": 0, 00:25:52.057 "data_size": 65536 00:25:52.057 }, 00:25:52.057 { 00:25:52.057 "name": "BaseBdev3", 00:25:52.057 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:25:52.057 "is_configured": true, 00:25:52.057 "data_offset": 0, 00:25:52.057 "data_size": 65536 00:25:52.057 }, 00:25:52.057 { 00:25:52.057 "name": "BaseBdev4", 00:25:52.057 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:25:52.057 "is_configured": true, 00:25:52.057 "data_offset": 0, 00:25:52.057 "data_size": 65536 00:25:52.057 } 00:25:52.057 ] 00:25:52.057 }' 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.057 11:37:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.992 11:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.992 11:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:52.992 11:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:52.992 11:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:53.250 [2024-07-13 11:37:27.836946] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:53.250 BaseBdev1 00:25:53.250 11:37:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:53.250 11:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:53.250 11:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:53.250 11:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:53.250 11:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:53.250 11:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:53.250 11:37:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:53.508 11:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:53.508 [ 00:25:53.508 { 00:25:53.508 "name": "BaseBdev1", 00:25:53.508 "aliases": [ 00:25:53.508 "59b5cf46-b798-47da-8ae2-8ea175a6c816" 00:25:53.508 ], 00:25:53.508 "product_name": "Malloc disk", 00:25:53.508 "block_size": 512, 00:25:53.508 "num_blocks": 65536, 00:25:53.508 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:25:53.508 "assigned_rate_limits": { 00:25:53.508 "rw_ios_per_sec": 0, 00:25:53.508 "rw_mbytes_per_sec": 0, 00:25:53.508 "r_mbytes_per_sec": 0, 00:25:53.508 "w_mbytes_per_sec": 0 00:25:53.508 }, 00:25:53.508 "claimed": true, 00:25:53.508 "claim_type": "exclusive_write", 00:25:53.508 "zoned": false, 00:25:53.508 "supported_io_types": { 00:25:53.509 "read": true, 00:25:53.509 "write": true, 00:25:53.509 "unmap": true, 00:25:53.509 "flush": true, 00:25:53.509 "reset": true, 00:25:53.509 "nvme_admin": false, 00:25:53.509 "nvme_io": false, 00:25:53.509 
"nvme_io_md": false, 00:25:53.509 "write_zeroes": true, 00:25:53.509 "zcopy": true, 00:25:53.509 "get_zone_info": false, 00:25:53.509 "zone_management": false, 00:25:53.509 "zone_append": false, 00:25:53.509 "compare": false, 00:25:53.509 "compare_and_write": false, 00:25:53.509 "abort": true, 00:25:53.509 "seek_hole": false, 00:25:53.509 "seek_data": false, 00:25:53.509 "copy": true, 00:25:53.509 "nvme_iov_md": false 00:25:53.509 }, 00:25:53.509 "memory_domains": [ 00:25:53.509 { 00:25:53.509 "dma_device_id": "system", 00:25:53.509 "dma_device_type": 1 00:25:53.509 }, 00:25:53.509 { 00:25:53.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:53.509 "dma_device_type": 2 00:25:53.509 } 00:25:53.509 ], 00:25:53.509 "driver_specific": {} 00:25:53.509 } 00:25:53.509 ] 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.509 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.767 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.767 "name": "Existed_Raid", 00:25:53.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.767 "strip_size_kb": 0, 00:25:53.767 "state": "configuring", 00:25:53.767 "raid_level": "raid1", 00:25:53.767 "superblock": false, 00:25:53.767 "num_base_bdevs": 4, 00:25:53.767 "num_base_bdevs_discovered": 3, 00:25:53.767 "num_base_bdevs_operational": 4, 00:25:53.767 "base_bdevs_list": [ 00:25:53.767 { 00:25:53.767 "name": "BaseBdev1", 00:25:53.767 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:25:53.767 "is_configured": true, 00:25:53.767 "data_offset": 0, 00:25:53.767 "data_size": 65536 00:25:53.767 }, 00:25:53.767 { 00:25:53.767 "name": null, 00:25:53.767 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:25:53.767 "is_configured": false, 00:25:53.767 "data_offset": 0, 00:25:53.767 "data_size": 65536 00:25:53.767 }, 00:25:53.767 { 00:25:53.767 "name": "BaseBdev3", 00:25:53.767 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:25:53.767 "is_configured": true, 00:25:53.767 "data_offset": 0, 00:25:53.767 "data_size": 65536 00:25:53.767 }, 00:25:53.767 { 00:25:53.767 
"name": "BaseBdev4", 00:25:53.767 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:25:53.767 "is_configured": true, 00:25:53.767 "data_offset": 0, 00:25:53.767 "data_size": 65536 00:25:53.767 } 00:25:53.767 ] 00:25:53.767 }' 00:25:53.767 11:37:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.767 11:37:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.702 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.702 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:54.702 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:54.702 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:54.961 [2024-07-13 11:37:29.533314] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.961 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.219 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.219 "name": "Existed_Raid", 00:25:55.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.219 "strip_size_kb": 0, 00:25:55.219 "state": "configuring", 00:25:55.219 "raid_level": "raid1", 00:25:55.219 "superblock": false, 00:25:55.219 "num_base_bdevs": 4, 00:25:55.219 "num_base_bdevs_discovered": 2, 00:25:55.219 "num_base_bdevs_operational": 4, 00:25:55.219 "base_bdevs_list": [ 00:25:55.219 { 00:25:55.219 "name": "BaseBdev1", 00:25:55.219 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:25:55.219 "is_configured": true, 00:25:55.219 "data_offset": 0, 00:25:55.219 "data_size": 65536 00:25:55.219 }, 00:25:55.219 { 00:25:55.219 "name": null, 00:25:55.219 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:25:55.219 "is_configured": false, 00:25:55.219 "data_offset": 0, 00:25:55.219 "data_size": 65536 
00:25:55.219 }, 00:25:55.219 { 00:25:55.219 "name": null, 00:25:55.219 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:25:55.219 "is_configured": false, 00:25:55.219 "data_offset": 0, 00:25:55.219 "data_size": 65536 00:25:55.219 }, 00:25:55.219 { 00:25:55.219 "name": "BaseBdev4", 00:25:55.219 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:25:55.219 "is_configured": true, 00:25:55.219 "data_offset": 0, 00:25:55.219 "data_size": 65536 00:25:55.219 } 00:25:55.219 ] 00:25:55.219 }' 00:25:55.219 11:37:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.219 11:37:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:55.785 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.785 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:56.043 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:56.043 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:56.302 [2024-07-13 11:37:30.897610] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.302 11:37:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:56.561 11:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.561 "name": "Existed_Raid", 00:25:56.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.561 "strip_size_kb": 0, 00:25:56.561 "state": "configuring", 00:25:56.561 "raid_level": "raid1", 00:25:56.561 "superblock": false, 00:25:56.561 "num_base_bdevs": 4, 00:25:56.561 "num_base_bdevs_discovered": 3, 00:25:56.561 "num_base_bdevs_operational": 4, 00:25:56.561 "base_bdevs_list": [ 00:25:56.561 { 00:25:56.561 "name": "BaseBdev1", 00:25:56.561 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:25:56.561 
"is_configured": true, 00:25:56.561 "data_offset": 0, 00:25:56.561 "data_size": 65536 00:25:56.561 }, 00:25:56.561 { 00:25:56.561 "name": null, 00:25:56.561 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:25:56.561 "is_configured": false, 00:25:56.561 "data_offset": 0, 00:25:56.561 "data_size": 65536 00:25:56.561 }, 00:25:56.561 { 00:25:56.561 "name": "BaseBdev3", 00:25:56.561 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:25:56.561 "is_configured": true, 00:25:56.561 "data_offset": 0, 00:25:56.561 "data_size": 65536 00:25:56.561 }, 00:25:56.561 { 00:25:56.561 "name": "BaseBdev4", 00:25:56.561 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:25:56.561 "is_configured": true, 00:25:56.561 "data_offset": 0, 00:25:56.561 "data_size": 65536 00:25:56.561 } 00:25:56.561 ] 00:25:56.561 }' 00:25:56.561 11:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.561 11:37:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.129 11:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.129 11:37:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:57.397 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:57.397 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:57.660 [2024-07-13 11:37:32.249901] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.660 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.919 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.919 "name": "Existed_Raid", 00:25:57.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.919 "strip_size_kb": 0, 00:25:57.919 "state": "configuring", 00:25:57.919 "raid_level": "raid1", 00:25:57.919 "superblock": false, 00:25:57.919 
"num_base_bdevs": 4, 00:25:57.919 "num_base_bdevs_discovered": 2, 00:25:57.919 "num_base_bdevs_operational": 4, 00:25:57.919 "base_bdevs_list": [ 00:25:57.919 { 00:25:57.919 "name": null, 00:25:57.919 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:25:57.919 "is_configured": false, 00:25:57.919 "data_offset": 0, 00:25:57.919 "data_size": 65536 00:25:57.919 }, 00:25:57.919 { 00:25:57.919 "name": null, 00:25:57.919 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:25:57.919 "is_configured": false, 00:25:57.919 "data_offset": 0, 00:25:57.919 "data_size": 65536 00:25:57.919 }, 00:25:57.919 { 00:25:57.919 "name": "BaseBdev3", 00:25:57.919 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:25:57.919 "is_configured": true, 00:25:57.919 "data_offset": 0, 00:25:57.919 "data_size": 65536 00:25:57.919 }, 00:25:57.919 { 00:25:57.919 "name": "BaseBdev4", 00:25:57.919 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:25:57.919 "is_configured": true, 00:25:57.919 "data_offset": 0, 00:25:57.919 "data_size": 65536 00:25:57.919 } 00:25:57.919 ] 00:25:57.919 }' 00:25:57.919 11:37:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.919 11:37:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.483 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.483 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:58.740 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:58.740 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:58.998 [2024-07-13 11:37:33.734084] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.256 "name": "Existed_Raid", 00:25:59.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.256 "strip_size_kb": 0, 00:25:59.256 "state": "configuring", 00:25:59.256 "raid_level": "raid1", 00:25:59.256 "superblock": false, 00:25:59.256 "num_base_bdevs": 4, 00:25:59.256 "num_base_bdevs_discovered": 3, 00:25:59.256 "num_base_bdevs_operational": 4, 00:25:59.256 "base_bdevs_list": [ 00:25:59.256 { 00:25:59.256 "name": null, 00:25:59.256 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:25:59.256 "is_configured": false, 00:25:59.256 "data_offset": 0, 00:25:59.256 "data_size": 65536 00:25:59.256 }, 00:25:59.256 { 00:25:59.256 "name": "BaseBdev2", 00:25:59.256 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:25:59.256 "is_configured": true, 00:25:59.256 "data_offset": 0, 00:25:59.256 "data_size": 65536 00:25:59.256 }, 00:25:59.256 { 00:25:59.256 "name": "BaseBdev3", 00:25:59.256 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:25:59.256 "is_configured": true, 00:25:59.256 "data_offset": 0, 00:25:59.256 "data_size": 65536 00:25:59.256 }, 00:25:59.256 { 00:25:59.256 "name": "BaseBdev4", 00:25:59.256 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:25:59.256 "is_configured": true, 00:25:59.256 "data_offset": 0, 00:25:59.256 "data_size": 65536 00:25:59.256 } 00:25:59.256 ] 00:25:59.256 }' 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.256 11:37:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.822 11:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.822 11:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:00.079 11:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:00.079 11:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.079 11:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:00.337 11:37:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 59b5cf46-b798-47da-8ae2-8ea175a6c816 00:26:00.595 [2024-07-13 11:37:35.259361] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:00.595 [2024-07-13 11:37:35.259647] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:26:00.595 [2024-07-13 11:37:35.259686] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:00.595 [2024-07-13 11:37:35.259900] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:00.595 [2024-07-13 11:37:35.260348] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:26:00.595 [2024-07-13 11:37:35.260502] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:26:00.595 [2024-07-13 11:37:35.260854] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:00.595 NewBaseBdev 00:26:00.595 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:26:00.595 11:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:00.595 11:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:00.595 11:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:26:00.595 11:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:00.595 11:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:00.596 11:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:00.853 11:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:01.112 [ 00:26:01.112 { 00:26:01.112 "name": "NewBaseBdev", 00:26:01.112 "aliases": [ 00:26:01.112 "59b5cf46-b798-47da-8ae2-8ea175a6c816" 00:26:01.112 ], 00:26:01.112 "product_name": "Malloc disk", 00:26:01.112 "block_size": 512, 00:26:01.112 "num_blocks": 65536, 00:26:01.112 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:26:01.112 "assigned_rate_limits": { 00:26:01.112 "rw_ios_per_sec": 0, 00:26:01.112 "rw_mbytes_per_sec": 0, 00:26:01.112 "r_mbytes_per_sec": 0, 00:26:01.112 "w_mbytes_per_sec": 0 00:26:01.112 }, 00:26:01.112 "claimed": true, 00:26:01.112 "claim_type": "exclusive_write", 00:26:01.112 "zoned": false, 00:26:01.112 "supported_io_types": { 00:26:01.112 "read": true, 00:26:01.112 "write": true, 00:26:01.112 "unmap": true, 00:26:01.112 "flush": true, 00:26:01.112 "reset": true, 00:26:01.112 "nvme_admin": false, 00:26:01.113 "nvme_io": false, 00:26:01.113 "nvme_io_md": false, 00:26:01.113 "write_zeroes": true, 00:26:01.113 "zcopy": true, 00:26:01.113 "get_zone_info": false, 00:26:01.113 "zone_management": false, 00:26:01.113 "zone_append": false, 00:26:01.113 "compare": false, 00:26:01.113 "compare_and_write": false, 00:26:01.113 "abort": true, 00:26:01.113 "seek_hole": false, 00:26:01.113 "seek_data": false, 00:26:01.113 "copy": true, 00:26:01.113 "nvme_iov_md": false 00:26:01.113 }, 00:26:01.113 "memory_domains": [ 00:26:01.113 { 00:26:01.113 "dma_device_id": "system", 00:26:01.113 "dma_device_type": 1 00:26:01.113 }, 00:26:01.113 { 00:26:01.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.113 "dma_device_type": 2 00:26:01.113 } 00:26:01.113 ], 00:26:01.113 "driver_specific": {} 00:26:01.113 } 00:26:01.113 ] 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.113 11:37:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.372 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:01.372 "name": "Existed_Raid", 00:26:01.372 "uuid": "8755737b-10e8-426f-bfe8-f7c1f755993d", 00:26:01.372 "strip_size_kb": 0, 00:26:01.372 "state": "online", 00:26:01.372 "raid_level": "raid1", 00:26:01.372 "superblock": false, 00:26:01.372 "num_base_bdevs": 4, 00:26:01.372 "num_base_bdevs_discovered": 4, 00:26:01.372 "num_base_bdevs_operational": 4, 00:26:01.372 "base_bdevs_list": [ 00:26:01.372 { 00:26:01.372 "name": "NewBaseBdev", 00:26:01.372 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:26:01.372 "is_configured": true, 00:26:01.372 "data_offset": 0, 00:26:01.372 "data_size": 65536 00:26:01.372 }, 00:26:01.372 { 00:26:01.372 "name": "BaseBdev2", 00:26:01.372 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:26:01.372 "is_configured": true, 00:26:01.372 "data_offset": 0, 00:26:01.372 "data_size": 65536 00:26:01.372 }, 00:26:01.372 { 00:26:01.372 "name": "BaseBdev3", 00:26:01.372 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:26:01.372 "is_configured": true, 00:26:01.372 "data_offset": 0, 00:26:01.372 "data_size": 65536 00:26:01.372 }, 00:26:01.372 { 00:26:01.372 "name": "BaseBdev4", 00:26:01.372 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:26:01.372 "is_configured": true, 00:26:01.372 "data_offset": 0, 00:26:01.372 "data_size": 65536 00:26:01.372 } 00:26:01.372 ] 00:26:01.372 }' 00:26:01.372 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:01.372 11:37:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.991 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:01.991 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:01.991 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:01.991 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:01.991 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:01.991 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:01.991 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:01.991 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:02.266 [2024-07-13 11:37:36.840057] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:02.266 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:02.266 "name": "Existed_Raid", 00:26:02.266 "aliases": [ 00:26:02.266 
"8755737b-10e8-426f-bfe8-f7c1f755993d" 00:26:02.266 ], 00:26:02.266 "product_name": "Raid Volume", 00:26:02.266 "block_size": 512, 00:26:02.266 "num_blocks": 65536, 00:26:02.266 "uuid": "8755737b-10e8-426f-bfe8-f7c1f755993d", 00:26:02.266 "assigned_rate_limits": { 00:26:02.266 "rw_ios_per_sec": 0, 00:26:02.266 "rw_mbytes_per_sec": 0, 00:26:02.266 "r_mbytes_per_sec": 0, 00:26:02.266 "w_mbytes_per_sec": 0 00:26:02.266 }, 00:26:02.266 "claimed": false, 00:26:02.266 "zoned": false, 00:26:02.266 "supported_io_types": { 00:26:02.266 "read": true, 00:26:02.266 "write": true, 00:26:02.266 "unmap": false, 00:26:02.266 "flush": false, 00:26:02.266 "reset": true, 00:26:02.266 "nvme_admin": false, 00:26:02.266 "nvme_io": false, 00:26:02.266 "nvme_io_md": false, 00:26:02.266 "write_zeroes": true, 00:26:02.266 "zcopy": false, 00:26:02.266 "get_zone_info": false, 00:26:02.266 "zone_management": false, 00:26:02.266 "zone_append": false, 00:26:02.266 "compare": false, 00:26:02.266 "compare_and_write": false, 00:26:02.266 "abort": false, 00:26:02.266 "seek_hole": false, 00:26:02.266 "seek_data": false, 00:26:02.266 "copy": false, 00:26:02.266 "nvme_iov_md": false 00:26:02.266 }, 00:26:02.266 "memory_domains": [ 00:26:02.266 { 00:26:02.266 "dma_device_id": "system", 00:26:02.266 "dma_device_type": 1 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.266 "dma_device_type": 2 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "dma_device_id": "system", 00:26:02.266 "dma_device_type": 1 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.266 "dma_device_type": 2 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "dma_device_id": "system", 00:26:02.266 "dma_device_type": 1 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.266 "dma_device_type": 2 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "dma_device_id": "system", 00:26:02.266 "dma_device_type": 1 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.266 "dma_device_type": 2 00:26:02.266 } 00:26:02.266 ], 00:26:02.266 "driver_specific": { 00:26:02.266 "raid": { 00:26:02.266 "uuid": "8755737b-10e8-426f-bfe8-f7c1f755993d", 00:26:02.266 "strip_size_kb": 0, 00:26:02.266 "state": "online", 00:26:02.266 "raid_level": "raid1", 00:26:02.266 "superblock": false, 00:26:02.266 "num_base_bdevs": 4, 00:26:02.266 "num_base_bdevs_discovered": 4, 00:26:02.266 "num_base_bdevs_operational": 4, 00:26:02.266 "base_bdevs_list": [ 00:26:02.266 { 00:26:02.266 "name": "NewBaseBdev", 00:26:02.266 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:26:02.266 "is_configured": true, 00:26:02.266 "data_offset": 0, 00:26:02.266 "data_size": 65536 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "name": "BaseBdev2", 00:26:02.266 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:26:02.266 "is_configured": true, 00:26:02.266 "data_offset": 0, 00:26:02.266 "data_size": 65536 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "name": "BaseBdev3", 00:26:02.266 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:26:02.266 "is_configured": true, 00:26:02.266 "data_offset": 0, 00:26:02.266 "data_size": 65536 00:26:02.266 }, 00:26:02.266 { 00:26:02.266 "name": "BaseBdev4", 00:26:02.266 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:26:02.266 "is_configured": true, 00:26:02.266 "data_offset": 0, 00:26:02.266 "data_size": 65536 00:26:02.266 } 00:26:02.266 ] 00:26:02.266 } 00:26:02.266 } 00:26:02.266 }' 00:26:02.266 11:37:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:02.266 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:02.266 BaseBdev2 00:26:02.266 BaseBdev3 00:26:02.266 BaseBdev4' 00:26:02.266 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:02.266 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:02.266 11:37:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:02.532 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:02.532 "name": "NewBaseBdev", 00:26:02.532 "aliases": [ 00:26:02.532 "59b5cf46-b798-47da-8ae2-8ea175a6c816" 00:26:02.532 ], 00:26:02.532 "product_name": "Malloc disk", 00:26:02.532 "block_size": 512, 00:26:02.532 "num_blocks": 65536, 00:26:02.532 "uuid": "59b5cf46-b798-47da-8ae2-8ea175a6c816", 00:26:02.532 "assigned_rate_limits": { 00:26:02.532 "rw_ios_per_sec": 0, 00:26:02.532 "rw_mbytes_per_sec": 0, 00:26:02.532 "r_mbytes_per_sec": 0, 00:26:02.532 "w_mbytes_per_sec": 0 00:26:02.532 }, 00:26:02.532 "claimed": true, 00:26:02.532 "claim_type": "exclusive_write", 00:26:02.532 "zoned": false, 00:26:02.532 "supported_io_types": { 00:26:02.532 "read": true, 00:26:02.532 "write": true, 00:26:02.532 "unmap": true, 00:26:02.532 "flush": true, 00:26:02.532 "reset": true, 00:26:02.532 "nvme_admin": false, 00:26:02.532 "nvme_io": false, 00:26:02.532 "nvme_io_md": false, 00:26:02.532 "write_zeroes": true, 00:26:02.532 "zcopy": true, 00:26:02.532 "get_zone_info": false, 00:26:02.532 "zone_management": false, 00:26:02.532 "zone_append": false, 00:26:02.532 "compare": false, 00:26:02.532 "compare_and_write": false, 00:26:02.532 "abort": true, 00:26:02.532 "seek_hole": false, 00:26:02.532 "seek_data": false, 00:26:02.532 "copy": true, 00:26:02.532 "nvme_iov_md": false 00:26:02.532 }, 00:26:02.532 "memory_domains": [ 00:26:02.532 { 00:26:02.532 "dma_device_id": "system", 00:26:02.532 "dma_device_type": 1 00:26:02.532 }, 00:26:02.532 { 00:26:02.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.532 "dma_device_type": 2 00:26:02.532 } 00:26:02.532 ], 00:26:02.532 "driver_specific": {} 00:26:02.532 }' 00:26:02.532 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.532 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.532 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:02.532 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.791 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.791 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:02.791 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.791 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.791 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:02.791 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:03.049 11:37:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:03.049 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:03.049 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:03.049 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:03.049 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:03.049 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:03.049 "name": "BaseBdev2", 00:26:03.049 "aliases": [ 00:26:03.049 "50682e54-f839-4b7e-89b0-3f1ab0bcc27f" 00:26:03.049 ], 00:26:03.049 "product_name": "Malloc disk", 00:26:03.049 "block_size": 512, 00:26:03.049 "num_blocks": 65536, 00:26:03.049 "uuid": "50682e54-f839-4b7e-89b0-3f1ab0bcc27f", 00:26:03.049 "assigned_rate_limits": { 00:26:03.049 "rw_ios_per_sec": 0, 00:26:03.049 "rw_mbytes_per_sec": 0, 00:26:03.049 "r_mbytes_per_sec": 0, 00:26:03.049 "w_mbytes_per_sec": 0 00:26:03.049 }, 00:26:03.049 "claimed": true, 00:26:03.049 "claim_type": "exclusive_write", 00:26:03.049 "zoned": false, 00:26:03.049 "supported_io_types": { 00:26:03.049 "read": true, 00:26:03.049 "write": true, 00:26:03.049 "unmap": true, 00:26:03.049 "flush": true, 00:26:03.049 "reset": true, 00:26:03.049 "nvme_admin": false, 00:26:03.049 "nvme_io": false, 00:26:03.049 "nvme_io_md": false, 00:26:03.049 "write_zeroes": true, 00:26:03.049 "zcopy": true, 00:26:03.049 "get_zone_info": false, 00:26:03.049 "zone_management": false, 00:26:03.049 "zone_append": false, 00:26:03.049 "compare": false, 00:26:03.049 "compare_and_write": false, 00:26:03.049 "abort": true, 00:26:03.049 "seek_hole": false, 00:26:03.049 "seek_data": false, 00:26:03.049 "copy": true, 00:26:03.049 "nvme_iov_md": false 00:26:03.049 }, 00:26:03.049 "memory_domains": [ 00:26:03.049 { 00:26:03.049 "dma_device_id": "system", 00:26:03.049 "dma_device_type": 1 00:26:03.049 }, 00:26:03.049 { 00:26:03.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.049 "dma_device_type": 2 00:26:03.049 } 00:26:03.049 ], 00:26:03.049 "driver_specific": {} 00:26:03.049 }' 00:26:03.049 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:03.308 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:03.308 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:03.308 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:03.308 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:03.308 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:03.308 11:37:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:03.308 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:03.566 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:03.566 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:03.566 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:03.566 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:03.566 11:37:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:03.566 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:03.566 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:03.826 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:03.826 "name": "BaseBdev3", 00:26:03.826 "aliases": [ 00:26:03.826 "4d40e842-7d60-46bb-a31b-629a957a0a1b" 00:26:03.826 ], 00:26:03.826 "product_name": "Malloc disk", 00:26:03.826 "block_size": 512, 00:26:03.826 "num_blocks": 65536, 00:26:03.826 "uuid": "4d40e842-7d60-46bb-a31b-629a957a0a1b", 00:26:03.826 "assigned_rate_limits": { 00:26:03.826 "rw_ios_per_sec": 0, 00:26:03.826 "rw_mbytes_per_sec": 0, 00:26:03.826 "r_mbytes_per_sec": 0, 00:26:03.826 "w_mbytes_per_sec": 0 00:26:03.826 }, 00:26:03.826 "claimed": true, 00:26:03.826 "claim_type": "exclusive_write", 00:26:03.826 "zoned": false, 00:26:03.826 "supported_io_types": { 00:26:03.826 "read": true, 00:26:03.826 "write": true, 00:26:03.826 "unmap": true, 00:26:03.826 "flush": true, 00:26:03.826 "reset": true, 00:26:03.826 "nvme_admin": false, 00:26:03.826 "nvme_io": false, 00:26:03.826 "nvme_io_md": false, 00:26:03.826 "write_zeroes": true, 00:26:03.826 "zcopy": true, 00:26:03.826 "get_zone_info": false, 00:26:03.826 "zone_management": false, 00:26:03.826 "zone_append": false, 00:26:03.826 "compare": false, 00:26:03.826 "compare_and_write": false, 00:26:03.826 "abort": true, 00:26:03.826 "seek_hole": false, 00:26:03.826 "seek_data": false, 00:26:03.826 "copy": true, 00:26:03.826 "nvme_iov_md": false 00:26:03.826 }, 00:26:03.826 "memory_domains": [ 00:26:03.826 { 00:26:03.826 "dma_device_id": "system", 00:26:03.826 "dma_device_type": 1 00:26:03.826 }, 00:26:03.826 { 00:26:03.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:03.826 "dma_device_type": 2 00:26:03.826 } 00:26:03.826 ], 00:26:03.826 "driver_specific": {} 00:26:03.826 }' 00:26:03.826 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:03.826 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:03.826 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:03.826 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:03.826 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:03.826 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:04.084 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:04.084 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:04.084 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:04.084 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:04.084 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:04.084 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:04.084 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:04.085 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:04.085 11:37:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:04.343 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:04.343 "name": "BaseBdev4", 00:26:04.343 "aliases": [ 00:26:04.343 "5da9f634-dc98-4c8d-a191-b73b727507e0" 00:26:04.343 ], 00:26:04.343 "product_name": "Malloc disk", 00:26:04.343 "block_size": 512, 00:26:04.343 "num_blocks": 65536, 00:26:04.343 "uuid": "5da9f634-dc98-4c8d-a191-b73b727507e0", 00:26:04.343 "assigned_rate_limits": { 00:26:04.343 "rw_ios_per_sec": 0, 00:26:04.343 "rw_mbytes_per_sec": 0, 00:26:04.343 "r_mbytes_per_sec": 0, 00:26:04.343 "w_mbytes_per_sec": 0 00:26:04.343 }, 00:26:04.343 "claimed": true, 00:26:04.343 "claim_type": "exclusive_write", 00:26:04.343 "zoned": false, 00:26:04.343 "supported_io_types": { 00:26:04.343 "read": true, 00:26:04.343 "write": true, 00:26:04.343 "unmap": true, 00:26:04.343 "flush": true, 00:26:04.343 "reset": true, 00:26:04.343 "nvme_admin": false, 00:26:04.343 "nvme_io": false, 00:26:04.343 "nvme_io_md": false, 00:26:04.344 "write_zeroes": true, 00:26:04.344 "zcopy": true, 00:26:04.344 "get_zone_info": false, 00:26:04.344 "zone_management": false, 00:26:04.344 "zone_append": false, 00:26:04.344 "compare": false, 00:26:04.344 "compare_and_write": false, 00:26:04.344 "abort": true, 00:26:04.344 "seek_hole": false, 00:26:04.344 "seek_data": false, 00:26:04.344 "copy": true, 00:26:04.344 "nvme_iov_md": false 00:26:04.344 }, 00:26:04.344 "memory_domains": [ 00:26:04.344 { 00:26:04.344 "dma_device_id": "system", 00:26:04.344 "dma_device_type": 1 00:26:04.344 }, 00:26:04.344 { 00:26:04.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.344 "dma_device_type": 2 00:26:04.344 } 00:26:04.344 ], 00:26:04.344 "driver_specific": {} 00:26:04.344 }' 00:26:04.344 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:04.602 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:04.602 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:04.602 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:04.602 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:04.602 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:04.602 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:04.861 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:04.861 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:04.861 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:04.861 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:04.861 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:04.861 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:05.120 [2024-07-13 11:37:39.728339] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:05.120 [2024-07-13 11:37:39.728491] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:26:05.120 [2024-07-13 11:37:39.728705] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:05.120 [2024-07-13 11:37:39.729128] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:05.120 [2024-07-13 11:37:39.729246] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 141103 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 141103 ']' 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 141103 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 141103 00:26:05.120 killing process with pid 141103 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 141103' 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 141103 00:26:05.120 11:37:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 141103 00:26:05.120 [2024-07-13 11:37:39.762981] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:05.379 [2024-07-13 11:37:40.035170] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:06.316 ************************************ 00:26:06.316 END TEST raid_state_function_test 00:26:06.316 ************************************ 00:26:06.316 11:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:26:06.316 00:26:06.316 real 0m34.375s 00:26:06.316 user 1m4.710s 00:26:06.316 sys 0m3.750s 00:26:06.316 11:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:06.316 11:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.575 11:37:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:06.575 11:37:41 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:26:06.575 11:37:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:06.575 11:37:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.575 11:37:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:06.575 ************************************ 00:26:06.575 START TEST raid_state_function_test_sb 00:26:06.575 ************************************ 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:06.575 11:37:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=142244 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:06.575 Process raid pid: 142244 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 142244' 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 142244 /var/tmp/spdk-raid.sock 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@829 -- # '[' -z 142244 ']' 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:06.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:06.575 11:37:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:06.575 [2024-07-13 11:37:41.202160] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:06.575 [2024-07-13 11:37:41.202530] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.834 [2024-07-13 11:37:41.375539] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.093 [2024-07-13 11:37:41.631107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.093 [2024-07-13 11:37:41.819595] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:07.660 [2024-07-13 11:37:42.366487] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:07.660 [2024-07-13 11:37:42.366809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:07.660 [2024-07-13 11:37:42.366943] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:07.660 [2024-07-13 11:37:42.367004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:07.660 [2024-07-13 11:37:42.367089] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:07.660 [2024-07-13 11:37:42.367204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:07.660 [2024-07-13 11:37:42.367297] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:07.660 [2024-07-13 11:37:42.367359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.660 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:07.919 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.919 "name": "Existed_Raid", 00:26:07.919 "uuid": "a8281fc2-ef6f-4b66-b209-25e563d5b244", 00:26:07.919 "strip_size_kb": 0, 00:26:07.919 "state": "configuring", 00:26:07.919 "raid_level": "raid1", 00:26:07.919 "superblock": true, 00:26:07.919 "num_base_bdevs": 4, 00:26:07.919 "num_base_bdevs_discovered": 0, 00:26:07.919 "num_base_bdevs_operational": 4, 00:26:07.919 "base_bdevs_list": [ 00:26:07.919 { 00:26:07.919 "name": "BaseBdev1", 00:26:07.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.919 "is_configured": false, 00:26:07.919 "data_offset": 0, 00:26:07.919 "data_size": 0 00:26:07.919 }, 00:26:07.919 { 00:26:07.919 "name": "BaseBdev2", 00:26:07.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.919 "is_configured": false, 00:26:07.919 "data_offset": 0, 00:26:07.919 "data_size": 0 00:26:07.919 }, 00:26:07.919 { 00:26:07.919 "name": "BaseBdev3", 00:26:07.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.919 "is_configured": false, 00:26:07.919 "data_offset": 0, 00:26:07.919 "data_size": 0 00:26:07.919 }, 00:26:07.919 { 00:26:07.919 "name": "BaseBdev4", 00:26:07.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:07.919 "is_configured": false, 00:26:07.919 "data_offset": 0, 00:26:07.919 "data_size": 0 00:26:07.919 } 00:26:07.919 ] 00:26:07.919 }' 00:26:07.919 11:37:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.919 11:37:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:08.485 11:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:08.744 [2024-07-13 11:37:43.338539] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:08.744 [2024-07-13 11:37:43.338687] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:26:08.744 11:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:09.001 [2024-07-13 11:37:43.598601] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:09.001 [2024-07-13 
11:37:43.598762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:09.001 [2024-07-13 11:37:43.598868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:09.001 [2024-07-13 11:37:43.598947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:09.001 [2024-07-13 11:37:43.599167] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:09.001 [2024-07-13 11:37:43.599235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:09.001 [2024-07-13 11:37:43.599262] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:09.001 [2024-07-13 11:37:43.599393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:09.001 11:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:09.259 [2024-07-13 11:37:43.817053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:09.259 BaseBdev1 00:26:09.259 11:37:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:09.259 11:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:09.259 11:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:09.259 11:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:09.259 11:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:09.259 11:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:09.259 11:37:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:09.517 [ 00:26:09.517 { 00:26:09.517 "name": "BaseBdev1", 00:26:09.517 "aliases": [ 00:26:09.517 "1aebd095-5dc5-447a-8794-82be282fc587" 00:26:09.517 ], 00:26:09.517 "product_name": "Malloc disk", 00:26:09.517 "block_size": 512, 00:26:09.517 "num_blocks": 65536, 00:26:09.517 "uuid": "1aebd095-5dc5-447a-8794-82be282fc587", 00:26:09.517 "assigned_rate_limits": { 00:26:09.517 "rw_ios_per_sec": 0, 00:26:09.517 "rw_mbytes_per_sec": 0, 00:26:09.517 "r_mbytes_per_sec": 0, 00:26:09.517 "w_mbytes_per_sec": 0 00:26:09.517 }, 00:26:09.517 "claimed": true, 00:26:09.517 "claim_type": "exclusive_write", 00:26:09.517 "zoned": false, 00:26:09.517 "supported_io_types": { 00:26:09.517 "read": true, 00:26:09.517 "write": true, 00:26:09.517 "unmap": true, 00:26:09.517 "flush": true, 00:26:09.517 "reset": true, 00:26:09.517 "nvme_admin": false, 00:26:09.517 "nvme_io": false, 00:26:09.517 "nvme_io_md": false, 00:26:09.517 "write_zeroes": true, 00:26:09.517 "zcopy": true, 00:26:09.517 "get_zone_info": false, 00:26:09.517 "zone_management": false, 00:26:09.517 "zone_append": false, 00:26:09.517 "compare": false, 00:26:09.517 "compare_and_write": false, 00:26:09.517 "abort": true, 00:26:09.517 "seek_hole": false, 00:26:09.517 
"seek_data": false, 00:26:09.517 "copy": true, 00:26:09.517 "nvme_iov_md": false 00:26:09.517 }, 00:26:09.517 "memory_domains": [ 00:26:09.517 { 00:26:09.517 "dma_device_id": "system", 00:26:09.517 "dma_device_type": 1 00:26:09.517 }, 00:26:09.517 { 00:26:09.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:09.517 "dma_device_type": 2 00:26:09.517 } 00:26:09.517 ], 00:26:09.517 "driver_specific": {} 00:26:09.517 } 00:26:09.517 ] 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.517 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:09.775 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:09.775 "name": "Existed_Raid", 00:26:09.775 "uuid": "09e78c70-0ba4-4b76-8025-4d398681b864", 00:26:09.775 "strip_size_kb": 0, 00:26:09.775 "state": "configuring", 00:26:09.775 "raid_level": "raid1", 00:26:09.775 "superblock": true, 00:26:09.775 "num_base_bdevs": 4, 00:26:09.775 "num_base_bdevs_discovered": 1, 00:26:09.775 "num_base_bdevs_operational": 4, 00:26:09.775 "base_bdevs_list": [ 00:26:09.775 { 00:26:09.775 "name": "BaseBdev1", 00:26:09.775 "uuid": "1aebd095-5dc5-447a-8794-82be282fc587", 00:26:09.775 "is_configured": true, 00:26:09.775 "data_offset": 2048, 00:26:09.775 "data_size": 63488 00:26:09.775 }, 00:26:09.775 { 00:26:09.775 "name": "BaseBdev2", 00:26:09.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.775 "is_configured": false, 00:26:09.775 "data_offset": 0, 00:26:09.775 "data_size": 0 00:26:09.775 }, 00:26:09.775 { 00:26:09.775 "name": "BaseBdev3", 00:26:09.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.775 "is_configured": false, 00:26:09.775 "data_offset": 0, 00:26:09.775 "data_size": 0 00:26:09.775 }, 00:26:09.775 { 00:26:09.775 "name": "BaseBdev4", 00:26:09.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:09.775 "is_configured": false, 00:26:09.775 "data_offset": 0, 00:26:09.775 "data_size": 0 00:26:09.775 } 00:26:09.775 ] 00:26:09.775 }' 00:26:09.775 11:37:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:09.775 11:37:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.341 11:37:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:10.598 [2024-07-13 11:37:45.261418] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:10.598 [2024-07-13 11:37:45.261654] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:26:10.598 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:10.856 [2024-07-13 11:37:45.449416] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:10.856 [2024-07-13 11:37:45.451420] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:10.856 [2024-07-13 11:37:45.451581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:10.856 [2024-07-13 11:37:45.451676] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:10.856 [2024-07-13 11:37:45.451797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:10.856 [2024-07-13 11:37:45.451884] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:10.856 [2024-07-13 11:37:45.452012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:10.856 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.114 11:37:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.114 "name": "Existed_Raid", 00:26:11.114 "uuid": "8ad0f10e-e32a-4f96-a9e5-b13b18cce51c", 00:26:11.114 "strip_size_kb": 0, 00:26:11.114 "state": "configuring", 00:26:11.114 "raid_level": "raid1", 00:26:11.114 "superblock": true, 00:26:11.114 "num_base_bdevs": 4, 00:26:11.114 "num_base_bdevs_discovered": 1, 00:26:11.114 "num_base_bdevs_operational": 4, 00:26:11.114 "base_bdevs_list": [ 00:26:11.114 { 00:26:11.114 "name": "BaseBdev1", 00:26:11.114 "uuid": "1aebd095-5dc5-447a-8794-82be282fc587", 00:26:11.114 "is_configured": true, 00:26:11.114 "data_offset": 2048, 00:26:11.114 "data_size": 63488 00:26:11.114 }, 00:26:11.114 { 00:26:11.114 "name": "BaseBdev2", 00:26:11.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.114 "is_configured": false, 00:26:11.114 "data_offset": 0, 00:26:11.114 "data_size": 0 00:26:11.114 }, 00:26:11.114 { 00:26:11.114 "name": "BaseBdev3", 00:26:11.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.114 "is_configured": false, 00:26:11.114 "data_offset": 0, 00:26:11.114 "data_size": 0 00:26:11.114 }, 00:26:11.114 { 00:26:11.114 "name": "BaseBdev4", 00:26:11.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.114 "is_configured": false, 00:26:11.114 "data_offset": 0, 00:26:11.114 "data_size": 0 00:26:11.114 } 00:26:11.114 ] 00:26:11.114 }' 00:26:11.114 11:37:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.114 11:37:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:11.679 11:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:11.936 [2024-07-13 11:37:46.557783] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:11.936 BaseBdev2 00:26:11.936 11:37:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:11.936 11:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:11.936 11:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:11.936 11:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:11.936 11:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:11.936 11:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:11.937 11:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:12.194 11:37:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:12.452 [ 00:26:12.452 { 00:26:12.452 "name": "BaseBdev2", 00:26:12.452 "aliases": [ 00:26:12.452 "d1635b81-e787-4b35-8383-da761731e036" 00:26:12.452 ], 00:26:12.452 "product_name": "Malloc disk", 00:26:12.452 "block_size": 512, 00:26:12.452 "num_blocks": 65536, 00:26:12.452 "uuid": "d1635b81-e787-4b35-8383-da761731e036", 00:26:12.452 "assigned_rate_limits": { 00:26:12.452 "rw_ios_per_sec": 0, 00:26:12.452 "rw_mbytes_per_sec": 0, 00:26:12.452 "r_mbytes_per_sec": 0, 00:26:12.452 "w_mbytes_per_sec": 0 
00:26:12.452 }, 00:26:12.452 "claimed": true, 00:26:12.452 "claim_type": "exclusive_write", 00:26:12.452 "zoned": false, 00:26:12.452 "supported_io_types": { 00:26:12.452 "read": true, 00:26:12.452 "write": true, 00:26:12.452 "unmap": true, 00:26:12.452 "flush": true, 00:26:12.452 "reset": true, 00:26:12.452 "nvme_admin": false, 00:26:12.452 "nvme_io": false, 00:26:12.452 "nvme_io_md": false, 00:26:12.452 "write_zeroes": true, 00:26:12.452 "zcopy": true, 00:26:12.452 "get_zone_info": false, 00:26:12.452 "zone_management": false, 00:26:12.452 "zone_append": false, 00:26:12.452 "compare": false, 00:26:12.452 "compare_and_write": false, 00:26:12.452 "abort": true, 00:26:12.452 "seek_hole": false, 00:26:12.452 "seek_data": false, 00:26:12.452 "copy": true, 00:26:12.452 "nvme_iov_md": false 00:26:12.452 }, 00:26:12.452 "memory_domains": [ 00:26:12.452 { 00:26:12.452 "dma_device_id": "system", 00:26:12.452 "dma_device_type": 1 00:26:12.452 }, 00:26:12.452 { 00:26:12.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.452 "dma_device_type": 2 00:26:12.452 } 00:26:12.452 ], 00:26:12.452 "driver_specific": {} 00:26:12.452 } 00:26:12.452 ] 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.452 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.711 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.711 "name": "Existed_Raid", 00:26:12.711 "uuid": "8ad0f10e-e32a-4f96-a9e5-b13b18cce51c", 00:26:12.711 "strip_size_kb": 0, 00:26:12.711 "state": "configuring", 00:26:12.711 "raid_level": "raid1", 00:26:12.711 "superblock": true, 00:26:12.711 "num_base_bdevs": 4, 00:26:12.711 "num_base_bdevs_discovered": 2, 00:26:12.711 "num_base_bdevs_operational": 4, 00:26:12.711 "base_bdevs_list": [ 00:26:12.711 { 00:26:12.711 "name": "BaseBdev1", 00:26:12.711 "uuid": 
"1aebd095-5dc5-447a-8794-82be282fc587", 00:26:12.711 "is_configured": true, 00:26:12.711 "data_offset": 2048, 00:26:12.711 "data_size": 63488 00:26:12.711 }, 00:26:12.711 { 00:26:12.711 "name": "BaseBdev2", 00:26:12.711 "uuid": "d1635b81-e787-4b35-8383-da761731e036", 00:26:12.711 "is_configured": true, 00:26:12.711 "data_offset": 2048, 00:26:12.711 "data_size": 63488 00:26:12.711 }, 00:26:12.711 { 00:26:12.711 "name": "BaseBdev3", 00:26:12.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.711 "is_configured": false, 00:26:12.711 "data_offset": 0, 00:26:12.711 "data_size": 0 00:26:12.711 }, 00:26:12.711 { 00:26:12.711 "name": "BaseBdev4", 00:26:12.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.711 "is_configured": false, 00:26:12.711 "data_offset": 0, 00:26:12.711 "data_size": 0 00:26:12.711 } 00:26:12.711 ] 00:26:12.711 }' 00:26:12.711 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.711 11:37:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:13.277 11:37:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:13.535 [2024-07-13 11:37:48.185450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:13.535 BaseBdev3 00:26:13.535 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:13.535 11:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:13.535 11:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:13.535 11:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:13.535 11:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:13.535 11:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:13.535 11:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:13.793 11:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:14.052 [ 00:26:14.052 { 00:26:14.052 "name": "BaseBdev3", 00:26:14.052 "aliases": [ 00:26:14.052 "a93a5c80-7343-469a-8071-a8cda7927752" 00:26:14.052 ], 00:26:14.052 "product_name": "Malloc disk", 00:26:14.052 "block_size": 512, 00:26:14.052 "num_blocks": 65536, 00:26:14.052 "uuid": "a93a5c80-7343-469a-8071-a8cda7927752", 00:26:14.052 "assigned_rate_limits": { 00:26:14.052 "rw_ios_per_sec": 0, 00:26:14.052 "rw_mbytes_per_sec": 0, 00:26:14.052 "r_mbytes_per_sec": 0, 00:26:14.052 "w_mbytes_per_sec": 0 00:26:14.052 }, 00:26:14.052 "claimed": true, 00:26:14.052 "claim_type": "exclusive_write", 00:26:14.052 "zoned": false, 00:26:14.052 "supported_io_types": { 00:26:14.052 "read": true, 00:26:14.052 "write": true, 00:26:14.052 "unmap": true, 00:26:14.052 "flush": true, 00:26:14.052 "reset": true, 00:26:14.052 "nvme_admin": false, 00:26:14.052 "nvme_io": false, 00:26:14.052 "nvme_io_md": false, 00:26:14.052 "write_zeroes": true, 00:26:14.052 "zcopy": true, 00:26:14.052 "get_zone_info": false, 00:26:14.052 "zone_management": false, 
00:26:14.052 "zone_append": false, 00:26:14.052 "compare": false, 00:26:14.052 "compare_and_write": false, 00:26:14.052 "abort": true, 00:26:14.052 "seek_hole": false, 00:26:14.052 "seek_data": false, 00:26:14.052 "copy": true, 00:26:14.052 "nvme_iov_md": false 00:26:14.052 }, 00:26:14.052 "memory_domains": [ 00:26:14.052 { 00:26:14.052 "dma_device_id": "system", 00:26:14.052 "dma_device_type": 1 00:26:14.052 }, 00:26:14.052 { 00:26:14.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.052 "dma_device_type": 2 00:26:14.052 } 00:26:14.052 ], 00:26:14.052 "driver_specific": {} 00:26:14.052 } 00:26:14.052 ] 00:26:14.052 11:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:14.052 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:14.052 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:14.052 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:14.052 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.053 "name": "Existed_Raid", 00:26:14.053 "uuid": "8ad0f10e-e32a-4f96-a9e5-b13b18cce51c", 00:26:14.053 "strip_size_kb": 0, 00:26:14.053 "state": "configuring", 00:26:14.053 "raid_level": "raid1", 00:26:14.053 "superblock": true, 00:26:14.053 "num_base_bdevs": 4, 00:26:14.053 "num_base_bdevs_discovered": 3, 00:26:14.053 "num_base_bdevs_operational": 4, 00:26:14.053 "base_bdevs_list": [ 00:26:14.053 { 00:26:14.053 "name": "BaseBdev1", 00:26:14.053 "uuid": "1aebd095-5dc5-447a-8794-82be282fc587", 00:26:14.053 "is_configured": true, 00:26:14.053 "data_offset": 2048, 00:26:14.053 "data_size": 63488 00:26:14.053 }, 00:26:14.053 { 00:26:14.053 "name": "BaseBdev2", 00:26:14.053 "uuid": "d1635b81-e787-4b35-8383-da761731e036", 00:26:14.053 "is_configured": true, 00:26:14.053 "data_offset": 2048, 00:26:14.053 "data_size": 63488 00:26:14.053 }, 00:26:14.053 { 00:26:14.053 "name": "BaseBdev3", 00:26:14.053 "uuid": "a93a5c80-7343-469a-8071-a8cda7927752", 00:26:14.053 "is_configured": true, 
00:26:14.053 "data_offset": 2048, 00:26:14.053 "data_size": 63488 00:26:14.053 }, 00:26:14.053 { 00:26:14.053 "name": "BaseBdev4", 00:26:14.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:14.053 "is_configured": false, 00:26:14.053 "data_offset": 0, 00:26:14.053 "data_size": 0 00:26:14.053 } 00:26:14.053 ] 00:26:14.053 }' 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.053 11:37:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 11:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:14.988 [2024-07-13 11:37:49.687312] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:14.988 [2024-07-13 11:37:49.687863] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:26:14.988 [2024-07-13 11:37:49.688016] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:14.988 BaseBdev4 00:26:14.988 [2024-07-13 11:37:49.688195] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:14.988 [2024-07-13 11:37:49.688644] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:26:14.988 [2024-07-13 11:37:49.688806] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:26:14.988 [2024-07-13 11:37:49.689042] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.988 11:37:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:14.988 11:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:14.988 11:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:14.988 11:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:14.988 11:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:14.988 11:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:14.988 11:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:15.246 11:37:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:15.504 [ 00:26:15.504 { 00:26:15.504 "name": "BaseBdev4", 00:26:15.504 "aliases": [ 00:26:15.504 "5e679229-b3ba-423d-ac4b-b80c13cfcc56" 00:26:15.504 ], 00:26:15.504 "product_name": "Malloc disk", 00:26:15.504 "block_size": 512, 00:26:15.504 "num_blocks": 65536, 00:26:15.504 "uuid": "5e679229-b3ba-423d-ac4b-b80c13cfcc56", 00:26:15.504 "assigned_rate_limits": { 00:26:15.505 "rw_ios_per_sec": 0, 00:26:15.505 "rw_mbytes_per_sec": 0, 00:26:15.505 "r_mbytes_per_sec": 0, 00:26:15.505 "w_mbytes_per_sec": 0 00:26:15.505 }, 00:26:15.505 "claimed": true, 00:26:15.505 "claim_type": "exclusive_write", 00:26:15.505 "zoned": false, 00:26:15.505 "supported_io_types": { 00:26:15.505 "read": true, 00:26:15.505 "write": true, 00:26:15.505 "unmap": true, 00:26:15.505 "flush": true, 00:26:15.505 "reset": 
true, 00:26:15.505 "nvme_admin": false, 00:26:15.505 "nvme_io": false, 00:26:15.505 "nvme_io_md": false, 00:26:15.505 "write_zeroes": true, 00:26:15.505 "zcopy": true, 00:26:15.505 "get_zone_info": false, 00:26:15.505 "zone_management": false, 00:26:15.505 "zone_append": false, 00:26:15.505 "compare": false, 00:26:15.505 "compare_and_write": false, 00:26:15.505 "abort": true, 00:26:15.505 "seek_hole": false, 00:26:15.505 "seek_data": false, 00:26:15.505 "copy": true, 00:26:15.505 "nvme_iov_md": false 00:26:15.505 }, 00:26:15.505 "memory_domains": [ 00:26:15.505 { 00:26:15.505 "dma_device_id": "system", 00:26:15.505 "dma_device_type": 1 00:26:15.505 }, 00:26:15.505 { 00:26:15.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.505 "dma_device_type": 2 00:26:15.505 } 00:26:15.505 ], 00:26:15.505 "driver_specific": {} 00:26:15.505 } 00:26:15.505 ] 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.505 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.763 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.763 "name": "Existed_Raid", 00:26:15.763 "uuid": "8ad0f10e-e32a-4f96-a9e5-b13b18cce51c", 00:26:15.763 "strip_size_kb": 0, 00:26:15.763 "state": "online", 00:26:15.763 "raid_level": "raid1", 00:26:15.763 "superblock": true, 00:26:15.763 "num_base_bdevs": 4, 00:26:15.763 "num_base_bdevs_discovered": 4, 00:26:15.763 "num_base_bdevs_operational": 4, 00:26:15.763 "base_bdevs_list": [ 00:26:15.763 { 00:26:15.763 "name": "BaseBdev1", 00:26:15.763 "uuid": "1aebd095-5dc5-447a-8794-82be282fc587", 00:26:15.763 "is_configured": true, 00:26:15.763 "data_offset": 2048, 00:26:15.763 "data_size": 63488 00:26:15.763 }, 00:26:15.763 { 00:26:15.763 "name": "BaseBdev2", 00:26:15.763 "uuid": "d1635b81-e787-4b35-8383-da761731e036", 00:26:15.763 "is_configured": true, 
00:26:15.763 "data_offset": 2048, 00:26:15.763 "data_size": 63488 00:26:15.763 }, 00:26:15.763 { 00:26:15.763 "name": "BaseBdev3", 00:26:15.763 "uuid": "a93a5c80-7343-469a-8071-a8cda7927752", 00:26:15.763 "is_configured": true, 00:26:15.763 "data_offset": 2048, 00:26:15.763 "data_size": 63488 00:26:15.763 }, 00:26:15.763 { 00:26:15.763 "name": "BaseBdev4", 00:26:15.763 "uuid": "5e679229-b3ba-423d-ac4b-b80c13cfcc56", 00:26:15.763 "is_configured": true, 00:26:15.763 "data_offset": 2048, 00:26:15.763 "data_size": 63488 00:26:15.763 } 00:26:15.763 ] 00:26:15.763 }' 00:26:15.763 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.763 11:37:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.329 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:16.329 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:16.329 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:16.329 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:16.329 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:16.329 11:37:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:16.329 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:16.329 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:16.587 [2024-07-13 11:37:51.191909] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:16.587 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:16.587 "name": "Existed_Raid", 00:26:16.587 "aliases": [ 00:26:16.587 "8ad0f10e-e32a-4f96-a9e5-b13b18cce51c" 00:26:16.587 ], 00:26:16.587 "product_name": "Raid Volume", 00:26:16.587 "block_size": 512, 00:26:16.587 "num_blocks": 63488, 00:26:16.587 "uuid": "8ad0f10e-e32a-4f96-a9e5-b13b18cce51c", 00:26:16.587 "assigned_rate_limits": { 00:26:16.587 "rw_ios_per_sec": 0, 00:26:16.587 "rw_mbytes_per_sec": 0, 00:26:16.587 "r_mbytes_per_sec": 0, 00:26:16.587 "w_mbytes_per_sec": 0 00:26:16.587 }, 00:26:16.587 "claimed": false, 00:26:16.587 "zoned": false, 00:26:16.587 "supported_io_types": { 00:26:16.587 "read": true, 00:26:16.587 "write": true, 00:26:16.587 "unmap": false, 00:26:16.587 "flush": false, 00:26:16.587 "reset": true, 00:26:16.587 "nvme_admin": false, 00:26:16.587 "nvme_io": false, 00:26:16.587 "nvme_io_md": false, 00:26:16.587 "write_zeroes": true, 00:26:16.587 "zcopy": false, 00:26:16.587 "get_zone_info": false, 00:26:16.587 "zone_management": false, 00:26:16.587 "zone_append": false, 00:26:16.587 "compare": false, 00:26:16.587 "compare_and_write": false, 00:26:16.587 "abort": false, 00:26:16.587 "seek_hole": false, 00:26:16.587 "seek_data": false, 00:26:16.587 "copy": false, 00:26:16.587 "nvme_iov_md": false 00:26:16.587 }, 00:26:16.587 "memory_domains": [ 00:26:16.587 { 00:26:16.587 "dma_device_id": "system", 00:26:16.587 "dma_device_type": 1 00:26:16.587 }, 00:26:16.587 { 00:26:16.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.587 "dma_device_type": 2 00:26:16.587 }, 00:26:16.587 { 00:26:16.587 "dma_device_id": 
"system", 00:26:16.587 "dma_device_type": 1 00:26:16.587 }, 00:26:16.587 { 00:26:16.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.587 "dma_device_type": 2 00:26:16.587 }, 00:26:16.587 { 00:26:16.587 "dma_device_id": "system", 00:26:16.587 "dma_device_type": 1 00:26:16.587 }, 00:26:16.587 { 00:26:16.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.587 "dma_device_type": 2 00:26:16.587 }, 00:26:16.587 { 00:26:16.587 "dma_device_id": "system", 00:26:16.587 "dma_device_type": 1 00:26:16.587 }, 00:26:16.587 { 00:26:16.587 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.587 "dma_device_type": 2 00:26:16.587 } 00:26:16.587 ], 00:26:16.587 "driver_specific": { 00:26:16.587 "raid": { 00:26:16.587 "uuid": "8ad0f10e-e32a-4f96-a9e5-b13b18cce51c", 00:26:16.587 "strip_size_kb": 0, 00:26:16.587 "state": "online", 00:26:16.587 "raid_level": "raid1", 00:26:16.587 "superblock": true, 00:26:16.587 "num_base_bdevs": 4, 00:26:16.587 "num_base_bdevs_discovered": 4, 00:26:16.587 "num_base_bdevs_operational": 4, 00:26:16.588 "base_bdevs_list": [ 00:26:16.588 { 00:26:16.588 "name": "BaseBdev1", 00:26:16.588 "uuid": "1aebd095-5dc5-447a-8794-82be282fc587", 00:26:16.588 "is_configured": true, 00:26:16.588 "data_offset": 2048, 00:26:16.588 "data_size": 63488 00:26:16.588 }, 00:26:16.588 { 00:26:16.588 "name": "BaseBdev2", 00:26:16.588 "uuid": "d1635b81-e787-4b35-8383-da761731e036", 00:26:16.588 "is_configured": true, 00:26:16.588 "data_offset": 2048, 00:26:16.588 "data_size": 63488 00:26:16.588 }, 00:26:16.588 { 00:26:16.588 "name": "BaseBdev3", 00:26:16.588 "uuid": "a93a5c80-7343-469a-8071-a8cda7927752", 00:26:16.588 "is_configured": true, 00:26:16.588 "data_offset": 2048, 00:26:16.588 "data_size": 63488 00:26:16.588 }, 00:26:16.588 { 00:26:16.588 "name": "BaseBdev4", 00:26:16.588 "uuid": "5e679229-b3ba-423d-ac4b-b80c13cfcc56", 00:26:16.588 "is_configured": true, 00:26:16.588 "data_offset": 2048, 00:26:16.588 "data_size": 63488 00:26:16.588 } 00:26:16.588 ] 00:26:16.588 } 00:26:16.588 } 00:26:16.588 }' 00:26:16.588 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:16.588 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:16.588 BaseBdev2 00:26:16.588 BaseBdev3 00:26:16.588 BaseBdev4' 00:26:16.588 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:16.588 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:16.588 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:16.846 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:16.846 "name": "BaseBdev1", 00:26:16.846 "aliases": [ 00:26:16.846 "1aebd095-5dc5-447a-8794-82be282fc587" 00:26:16.846 ], 00:26:16.846 "product_name": "Malloc disk", 00:26:16.846 "block_size": 512, 00:26:16.846 "num_blocks": 65536, 00:26:16.846 "uuid": "1aebd095-5dc5-447a-8794-82be282fc587", 00:26:16.846 "assigned_rate_limits": { 00:26:16.846 "rw_ios_per_sec": 0, 00:26:16.846 "rw_mbytes_per_sec": 0, 00:26:16.846 "r_mbytes_per_sec": 0, 00:26:16.846 "w_mbytes_per_sec": 0 00:26:16.846 }, 00:26:16.846 "claimed": true, 00:26:16.846 "claim_type": "exclusive_write", 00:26:16.846 "zoned": false, 00:26:16.846 "supported_io_types": { 
00:26:16.846 "read": true, 00:26:16.846 "write": true, 00:26:16.846 "unmap": true, 00:26:16.846 "flush": true, 00:26:16.846 "reset": true, 00:26:16.846 "nvme_admin": false, 00:26:16.846 "nvme_io": false, 00:26:16.846 "nvme_io_md": false, 00:26:16.846 "write_zeroes": true, 00:26:16.846 "zcopy": true, 00:26:16.846 "get_zone_info": false, 00:26:16.846 "zone_management": false, 00:26:16.846 "zone_append": false, 00:26:16.846 "compare": false, 00:26:16.846 "compare_and_write": false, 00:26:16.846 "abort": true, 00:26:16.846 "seek_hole": false, 00:26:16.846 "seek_data": false, 00:26:16.846 "copy": true, 00:26:16.846 "nvme_iov_md": false 00:26:16.846 }, 00:26:16.846 "memory_domains": [ 00:26:16.846 { 00:26:16.846 "dma_device_id": "system", 00:26:16.846 "dma_device_type": 1 00:26:16.846 }, 00:26:16.846 { 00:26:16.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.846 "dma_device_type": 2 00:26:16.846 } 00:26:16.846 ], 00:26:16.846 "driver_specific": {} 00:26:16.846 }' 00:26:16.846 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.846 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.103 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:17.103 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.103 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.103 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.103 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.103 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.104 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.104 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.360 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.360 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.360 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.360 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:17.360 11:37:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.628 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.628 "name": "BaseBdev2", 00:26:17.628 "aliases": [ 00:26:17.628 "d1635b81-e787-4b35-8383-da761731e036" 00:26:17.628 ], 00:26:17.628 "product_name": "Malloc disk", 00:26:17.628 "block_size": 512, 00:26:17.628 "num_blocks": 65536, 00:26:17.628 "uuid": "d1635b81-e787-4b35-8383-da761731e036", 00:26:17.628 "assigned_rate_limits": { 00:26:17.628 "rw_ios_per_sec": 0, 00:26:17.628 "rw_mbytes_per_sec": 0, 00:26:17.628 "r_mbytes_per_sec": 0, 00:26:17.628 "w_mbytes_per_sec": 0 00:26:17.628 }, 00:26:17.628 "claimed": true, 00:26:17.628 "claim_type": "exclusive_write", 00:26:17.628 "zoned": false, 00:26:17.628 "supported_io_types": { 00:26:17.628 "read": true, 00:26:17.628 "write": true, 00:26:17.628 "unmap": true, 00:26:17.628 "flush": true, 00:26:17.628 "reset": true, 00:26:17.628 
"nvme_admin": false, 00:26:17.628 "nvme_io": false, 00:26:17.628 "nvme_io_md": false, 00:26:17.628 "write_zeroes": true, 00:26:17.628 "zcopy": true, 00:26:17.628 "get_zone_info": false, 00:26:17.628 "zone_management": false, 00:26:17.628 "zone_append": false, 00:26:17.628 "compare": false, 00:26:17.628 "compare_and_write": false, 00:26:17.628 "abort": true, 00:26:17.628 "seek_hole": false, 00:26:17.628 "seek_data": false, 00:26:17.628 "copy": true, 00:26:17.628 "nvme_iov_md": false 00:26:17.628 }, 00:26:17.628 "memory_domains": [ 00:26:17.628 { 00:26:17.628 "dma_device_id": "system", 00:26:17.628 "dma_device_type": 1 00:26:17.628 }, 00:26:17.628 { 00:26:17.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.628 "dma_device_type": 2 00:26:17.628 } 00:26:17.628 ], 00:26:17.628 "driver_specific": {} 00:26:17.628 }' 00:26:17.628 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.628 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.628 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:17.628 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.628 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.628 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.628 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.891 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.891 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.891 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.891 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.891 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.891 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.891 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:17.891 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.148 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.148 "name": "BaseBdev3", 00:26:18.148 "aliases": [ 00:26:18.148 "a93a5c80-7343-469a-8071-a8cda7927752" 00:26:18.148 ], 00:26:18.148 "product_name": "Malloc disk", 00:26:18.148 "block_size": 512, 00:26:18.148 "num_blocks": 65536, 00:26:18.148 "uuid": "a93a5c80-7343-469a-8071-a8cda7927752", 00:26:18.148 "assigned_rate_limits": { 00:26:18.148 "rw_ios_per_sec": 0, 00:26:18.148 "rw_mbytes_per_sec": 0, 00:26:18.148 "r_mbytes_per_sec": 0, 00:26:18.148 "w_mbytes_per_sec": 0 00:26:18.148 }, 00:26:18.148 "claimed": true, 00:26:18.148 "claim_type": "exclusive_write", 00:26:18.148 "zoned": false, 00:26:18.148 "supported_io_types": { 00:26:18.148 "read": true, 00:26:18.148 "write": true, 00:26:18.148 "unmap": true, 00:26:18.148 "flush": true, 00:26:18.148 "reset": true, 00:26:18.149 "nvme_admin": false, 00:26:18.149 "nvme_io": false, 00:26:18.149 "nvme_io_md": false, 00:26:18.149 "write_zeroes": true, 00:26:18.149 "zcopy": true, 
00:26:18.149 "get_zone_info": false, 00:26:18.149 "zone_management": false, 00:26:18.149 "zone_append": false, 00:26:18.149 "compare": false, 00:26:18.149 "compare_and_write": false, 00:26:18.149 "abort": true, 00:26:18.149 "seek_hole": false, 00:26:18.149 "seek_data": false, 00:26:18.149 "copy": true, 00:26:18.149 "nvme_iov_md": false 00:26:18.149 }, 00:26:18.149 "memory_domains": [ 00:26:18.149 { 00:26:18.149 "dma_device_id": "system", 00:26:18.149 "dma_device_type": 1 00:26:18.149 }, 00:26:18.149 { 00:26:18.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.149 "dma_device_type": 2 00:26:18.149 } 00:26:18.149 ], 00:26:18.149 "driver_specific": {} 00:26:18.149 }' 00:26:18.149 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.406 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.406 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.406 11:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.406 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.406 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.406 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.406 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.662 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.662 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.662 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.662 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.662 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.662 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:18.662 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.920 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.920 "name": "BaseBdev4", 00:26:18.920 "aliases": [ 00:26:18.920 "5e679229-b3ba-423d-ac4b-b80c13cfcc56" 00:26:18.920 ], 00:26:18.920 "product_name": "Malloc disk", 00:26:18.920 "block_size": 512, 00:26:18.920 "num_blocks": 65536, 00:26:18.920 "uuid": "5e679229-b3ba-423d-ac4b-b80c13cfcc56", 00:26:18.920 "assigned_rate_limits": { 00:26:18.920 "rw_ios_per_sec": 0, 00:26:18.920 "rw_mbytes_per_sec": 0, 00:26:18.920 "r_mbytes_per_sec": 0, 00:26:18.920 "w_mbytes_per_sec": 0 00:26:18.920 }, 00:26:18.920 "claimed": true, 00:26:18.920 "claim_type": "exclusive_write", 00:26:18.920 "zoned": false, 00:26:18.920 "supported_io_types": { 00:26:18.920 "read": true, 00:26:18.920 "write": true, 00:26:18.920 "unmap": true, 00:26:18.920 "flush": true, 00:26:18.920 "reset": true, 00:26:18.920 "nvme_admin": false, 00:26:18.920 "nvme_io": false, 00:26:18.920 "nvme_io_md": false, 00:26:18.920 "write_zeroes": true, 00:26:18.920 "zcopy": true, 00:26:18.920 "get_zone_info": false, 00:26:18.920 "zone_management": false, 00:26:18.920 "zone_append": false, 00:26:18.920 "compare": false, 
00:26:18.920 "compare_and_write": false, 00:26:18.920 "abort": true, 00:26:18.920 "seek_hole": false, 00:26:18.920 "seek_data": false, 00:26:18.920 "copy": true, 00:26:18.920 "nvme_iov_md": false 00:26:18.920 }, 00:26:18.920 "memory_domains": [ 00:26:18.920 { 00:26:18.920 "dma_device_id": "system", 00:26:18.920 "dma_device_type": 1 00:26:18.920 }, 00:26:18.920 { 00:26:18.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.920 "dma_device_type": 2 00:26:18.920 } 00:26:18.920 ], 00:26:18.920 "driver_specific": {} 00:26:18.920 }' 00:26:18.920 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.920 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:19.177 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:19.177 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.177 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.177 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:19.177 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.177 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.177 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:19.177 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.435 11:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.435 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:19.435 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:19.693 [2024-07-13 11:37:54.276432] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.693 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:19.694 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.694 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.694 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.952 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:19.952 "name": "Existed_Raid", 00:26:19.952 "uuid": "8ad0f10e-e32a-4f96-a9e5-b13b18cce51c", 00:26:19.952 "strip_size_kb": 0, 00:26:19.952 "state": "online", 00:26:19.952 "raid_level": "raid1", 00:26:19.952 "superblock": true, 00:26:19.952 "num_base_bdevs": 4, 00:26:19.952 "num_base_bdevs_discovered": 3, 00:26:19.952 "num_base_bdevs_operational": 3, 00:26:19.952 "base_bdevs_list": [ 00:26:19.952 { 00:26:19.952 "name": null, 00:26:19.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.952 "is_configured": false, 00:26:19.952 "data_offset": 2048, 00:26:19.952 "data_size": 63488 00:26:19.952 }, 00:26:19.952 { 00:26:19.952 "name": "BaseBdev2", 00:26:19.952 "uuid": "d1635b81-e787-4b35-8383-da761731e036", 00:26:19.952 "is_configured": true, 00:26:19.952 "data_offset": 2048, 00:26:19.952 "data_size": 63488 00:26:19.952 }, 00:26:19.952 { 00:26:19.952 "name": "BaseBdev3", 00:26:19.952 "uuid": "a93a5c80-7343-469a-8071-a8cda7927752", 00:26:19.952 "is_configured": true, 00:26:19.952 "data_offset": 2048, 00:26:19.952 "data_size": 63488 00:26:19.952 }, 00:26:19.952 { 00:26:19.952 "name": "BaseBdev4", 00:26:19.952 "uuid": "5e679229-b3ba-423d-ac4b-b80c13cfcc56", 00:26:19.952 "is_configured": true, 00:26:19.952 "data_offset": 2048, 00:26:19.952 "data_size": 63488 00:26:19.952 } 00:26:19.952 ] 00:26:19.952 }' 00:26:19.952 11:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:19.952 11:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.540 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:20.540 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:20.540 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.798 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:21.055 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:21.055 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:21.055 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:21.055 [2024-07-13 11:37:55.736781] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:21.313 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:21.313 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:21.313 11:37:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.313 11:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:21.571 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:21.571 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:21.571 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:21.829 [2024-07-13 11:37:56.352051] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:21.829 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:21.829 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:21.829 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.829 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:22.087 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:22.087 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:22.087 11:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:22.345 [2024-07-13 11:37:56.949791] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:22.345 [2024-07-13 11:37:56.950042] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.345 [2024-07-13 11:37:57.014305] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:22.345 [2024-07-13 11:37:57.014563] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:22.345 [2024-07-13 11:37:57.014679] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:26:22.345 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:22.345 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:22.345 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:22.345 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.603 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:22.603 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:22.603 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:22.603 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:22.603 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:22.603 11:37:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:22.859 BaseBdev2 00:26:22.859 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:22.859 11:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:22.859 11:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:22.859 11:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:22.859 11:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:22.859 11:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:22.859 11:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:23.117 11:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:23.374 [ 00:26:23.374 { 00:26:23.374 "name": "BaseBdev2", 00:26:23.374 "aliases": [ 00:26:23.374 "a7a3f386-3c18-4eba-a86e-1c0b2fd36945" 00:26:23.374 ], 00:26:23.374 "product_name": "Malloc disk", 00:26:23.374 "block_size": 512, 00:26:23.374 "num_blocks": 65536, 00:26:23.374 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:23.374 "assigned_rate_limits": { 00:26:23.374 "rw_ios_per_sec": 0, 00:26:23.374 "rw_mbytes_per_sec": 0, 00:26:23.374 "r_mbytes_per_sec": 0, 00:26:23.374 "w_mbytes_per_sec": 0 00:26:23.374 }, 00:26:23.374 "claimed": false, 00:26:23.374 "zoned": false, 00:26:23.374 "supported_io_types": { 00:26:23.374 "read": true, 00:26:23.374 "write": true, 00:26:23.374 "unmap": true, 00:26:23.374 "flush": true, 00:26:23.374 "reset": true, 00:26:23.374 "nvme_admin": false, 00:26:23.374 "nvme_io": false, 00:26:23.374 "nvme_io_md": false, 00:26:23.374 "write_zeroes": true, 00:26:23.374 "zcopy": true, 00:26:23.374 "get_zone_info": false, 00:26:23.374 "zone_management": false, 00:26:23.374 "zone_append": false, 00:26:23.374 "compare": false, 00:26:23.374 "compare_and_write": false, 00:26:23.374 "abort": true, 00:26:23.374 "seek_hole": false, 00:26:23.374 "seek_data": false, 00:26:23.374 "copy": true, 00:26:23.374 "nvme_iov_md": false 00:26:23.374 }, 00:26:23.374 "memory_domains": [ 00:26:23.374 { 00:26:23.374 "dma_device_id": "system", 00:26:23.374 "dma_device_type": 1 00:26:23.374 }, 00:26:23.374 { 00:26:23.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.374 "dma_device_type": 2 00:26:23.374 } 00:26:23.374 ], 00:26:23.374 "driver_specific": {} 00:26:23.374 } 00:26:23.374 ] 00:26:23.374 11:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:23.374 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:23.374 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:23.374 11:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:23.632 BaseBdev3 00:26:23.632 11:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:23.632 11:37:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:23.632 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:23.632 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:23.632 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:23.632 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:23.632 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:23.890 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:24.148 [ 00:26:24.148 { 00:26:24.148 "name": "BaseBdev3", 00:26:24.148 "aliases": [ 00:26:24.148 "db6a8b0b-f306-41ce-89d2-076c4badf597" 00:26:24.148 ], 00:26:24.148 "product_name": "Malloc disk", 00:26:24.148 "block_size": 512, 00:26:24.148 "num_blocks": 65536, 00:26:24.148 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:24.148 "assigned_rate_limits": { 00:26:24.148 "rw_ios_per_sec": 0, 00:26:24.148 "rw_mbytes_per_sec": 0, 00:26:24.148 "r_mbytes_per_sec": 0, 00:26:24.148 "w_mbytes_per_sec": 0 00:26:24.148 }, 00:26:24.148 "claimed": false, 00:26:24.148 "zoned": false, 00:26:24.148 "supported_io_types": { 00:26:24.148 "read": true, 00:26:24.148 "write": true, 00:26:24.148 "unmap": true, 00:26:24.148 "flush": true, 00:26:24.148 "reset": true, 00:26:24.148 "nvme_admin": false, 00:26:24.148 "nvme_io": false, 00:26:24.148 "nvme_io_md": false, 00:26:24.148 "write_zeroes": true, 00:26:24.148 "zcopy": true, 00:26:24.148 "get_zone_info": false, 00:26:24.148 "zone_management": false, 00:26:24.148 "zone_append": false, 00:26:24.148 "compare": false, 00:26:24.148 "compare_and_write": false, 00:26:24.148 "abort": true, 00:26:24.148 "seek_hole": false, 00:26:24.148 "seek_data": false, 00:26:24.148 "copy": true, 00:26:24.148 "nvme_iov_md": false 00:26:24.148 }, 00:26:24.148 "memory_domains": [ 00:26:24.148 { 00:26:24.149 "dma_device_id": "system", 00:26:24.149 "dma_device_type": 1 00:26:24.149 }, 00:26:24.149 { 00:26:24.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.149 "dma_device_type": 2 00:26:24.149 } 00:26:24.149 ], 00:26:24.149 "driver_specific": {} 00:26:24.149 } 00:26:24.149 ] 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:24.149 BaseBdev4 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:24.149 11:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:24.407 11:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:24.666 [ 00:26:24.666 { 00:26:24.666 "name": "BaseBdev4", 00:26:24.666 "aliases": [ 00:26:24.666 "cabaa99f-9bed-459c-8177-e984bff7d912" 00:26:24.666 ], 00:26:24.666 "product_name": "Malloc disk", 00:26:24.666 "block_size": 512, 00:26:24.666 "num_blocks": 65536, 00:26:24.666 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:24.666 "assigned_rate_limits": { 00:26:24.666 "rw_ios_per_sec": 0, 00:26:24.666 "rw_mbytes_per_sec": 0, 00:26:24.666 "r_mbytes_per_sec": 0, 00:26:24.666 "w_mbytes_per_sec": 0 00:26:24.666 }, 00:26:24.666 "claimed": false, 00:26:24.666 "zoned": false, 00:26:24.666 "supported_io_types": { 00:26:24.666 "read": true, 00:26:24.666 "write": true, 00:26:24.666 "unmap": true, 00:26:24.666 "flush": true, 00:26:24.666 "reset": true, 00:26:24.666 "nvme_admin": false, 00:26:24.666 "nvme_io": false, 00:26:24.666 "nvme_io_md": false, 00:26:24.666 "write_zeroes": true, 00:26:24.666 "zcopy": true, 00:26:24.666 "get_zone_info": false, 00:26:24.666 "zone_management": false, 00:26:24.666 "zone_append": false, 00:26:24.666 "compare": false, 00:26:24.666 "compare_and_write": false, 00:26:24.666 "abort": true, 00:26:24.666 "seek_hole": false, 00:26:24.666 "seek_data": false, 00:26:24.666 "copy": true, 00:26:24.666 "nvme_iov_md": false 00:26:24.666 }, 00:26:24.666 "memory_domains": [ 00:26:24.666 { 00:26:24.666 "dma_device_id": "system", 00:26:24.666 "dma_device_type": 1 00:26:24.666 }, 00:26:24.666 { 00:26:24.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.666 "dma_device_type": 2 00:26:24.666 } 00:26:24.666 ], 00:26:24.666 "driver_specific": {} 00:26:24.666 } 00:26:24.666 ] 00:26:24.666 11:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:24.666 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:24.666 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:24.666 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:24.924 [2024-07-13 11:37:59.486486] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:24.924 [2024-07-13 11:37:59.486691] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:24.924 [2024-07-13 11:37:59.486809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:24.924 [2024-07-13 11:37:59.488733] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:24.924 [2024-07-13 11:37:59.488966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:24.924 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.183 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.183 "name": "Existed_Raid", 00:26:25.183 "uuid": "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:25.183 "strip_size_kb": 0, 00:26:25.183 "state": "configuring", 00:26:25.183 "raid_level": "raid1", 00:26:25.183 "superblock": true, 00:26:25.183 "num_base_bdevs": 4, 00:26:25.183 "num_base_bdevs_discovered": 3, 00:26:25.183 "num_base_bdevs_operational": 4, 00:26:25.183 "base_bdevs_list": [ 00:26:25.183 { 00:26:25.183 "name": "BaseBdev1", 00:26:25.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.183 "is_configured": false, 00:26:25.183 "data_offset": 0, 00:26:25.183 "data_size": 0 00:26:25.183 }, 00:26:25.183 { 00:26:25.183 "name": "BaseBdev2", 00:26:25.183 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:25.183 "is_configured": true, 00:26:25.183 "data_offset": 2048, 00:26:25.183 "data_size": 63488 00:26:25.183 }, 00:26:25.183 { 00:26:25.183 "name": "BaseBdev3", 00:26:25.183 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:25.183 "is_configured": true, 00:26:25.183 "data_offset": 2048, 00:26:25.183 "data_size": 63488 00:26:25.183 }, 00:26:25.183 { 00:26:25.183 "name": "BaseBdev4", 00:26:25.183 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:25.183 "is_configured": true, 00:26:25.183 "data_offset": 2048, 00:26:25.183 "data_size": 63488 00:26:25.183 } 00:26:25.183 ] 00:26:25.183 }' 00:26:25.183 11:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.183 11:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:25.749 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:26.008 [2024-07-13 11:38:00.650658] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 
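[editor's note] The trace above repeatedly runs the test's verify_raid_bdev_state helper: it dumps every RAID bdev over the dedicated RPC socket with bdev_raid_get_bdevs all, picks the target volume by name with jq, and compares fields such as "state" and "num_base_bdevs_discovered" against the expected values. The short sketch below re-creates that query pattern outside the test harness so the flow is easier to follow; it is a minimal illustration, not the repository's own helper, and the socket path, rpc.py location, and field names are taken from the commands and JSON visible in this trace (anything else is an assumption).

#!/usr/bin/env bash
# Sketch of the state check traced above (assumed standalone usage, not the test's helper).
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

check_raid_state() {
    local name=$1 expected_state=$2 expected_discovered=$3
    local info state discovered

    # Same query the test uses: dump all RAID bdevs, select ours by name.
    info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ -n $info ]] || { echo "raid bdev $name not found" >&2; return 1; }

    # Fields as they appear in the bdev_raid_get_bdevs output shown in this log.
    state=$(jq -r '.state' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")

    [[ $state == "$expected_state" && $discovered -eq $expected_discovered ]]
}

# Example matching the trace that follows: after bdev_raid_remove_base_bdev BaseBdev2
# the array should report "configuring" with 2 of 4 base bdevs discovered.
check_raid_state Existed_Raid configuring 2 && echo OK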
00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.008 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.267 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:26.267 "name": "Existed_Raid", 00:26:26.267 "uuid": "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:26.267 "strip_size_kb": 0, 00:26:26.267 "state": "configuring", 00:26:26.267 "raid_level": "raid1", 00:26:26.267 "superblock": true, 00:26:26.267 "num_base_bdevs": 4, 00:26:26.267 "num_base_bdevs_discovered": 2, 00:26:26.267 "num_base_bdevs_operational": 4, 00:26:26.267 "base_bdevs_list": [ 00:26:26.267 { 00:26:26.267 "name": "BaseBdev1", 00:26:26.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.267 "is_configured": false, 00:26:26.267 "data_offset": 0, 00:26:26.267 "data_size": 0 00:26:26.267 }, 00:26:26.267 { 00:26:26.267 "name": null, 00:26:26.267 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:26.267 "is_configured": false, 00:26:26.267 "data_offset": 2048, 00:26:26.267 "data_size": 63488 00:26:26.267 }, 00:26:26.267 { 00:26:26.267 "name": "BaseBdev3", 00:26:26.267 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:26.267 "is_configured": true, 00:26:26.267 "data_offset": 2048, 00:26:26.267 "data_size": 63488 00:26:26.267 }, 00:26:26.267 { 00:26:26.267 "name": "BaseBdev4", 00:26:26.267 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:26.267 "is_configured": true, 00:26:26.267 "data_offset": 2048, 00:26:26.267 "data_size": 63488 00:26:26.267 } 00:26:26.267 ] 00:26:26.267 }' 00:26:26.267 11:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:26.267 11:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.834 11:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.834 11:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:27.092 11:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:27.092 11:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:27.350 [2024-07-13 11:38:02.000354] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:27.350 BaseBdev1 00:26:27.350 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:27.350 11:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:27.350 11:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:27.350 11:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:27.350 11:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:27.350 11:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:27.350 11:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:27.608 11:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:27.867 [ 00:26:27.867 { 00:26:27.867 "name": "BaseBdev1", 00:26:27.867 "aliases": [ 00:26:27.867 "32061674-9fbc-4310-8a09-c86381202a6f" 00:26:27.867 ], 00:26:27.867 "product_name": "Malloc disk", 00:26:27.867 "block_size": 512, 00:26:27.867 "num_blocks": 65536, 00:26:27.867 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:27.867 "assigned_rate_limits": { 00:26:27.867 "rw_ios_per_sec": 0, 00:26:27.867 "rw_mbytes_per_sec": 0, 00:26:27.867 "r_mbytes_per_sec": 0, 00:26:27.867 "w_mbytes_per_sec": 0 00:26:27.867 }, 00:26:27.867 "claimed": true, 00:26:27.867 "claim_type": "exclusive_write", 00:26:27.867 "zoned": false, 00:26:27.867 "supported_io_types": { 00:26:27.867 "read": true, 00:26:27.867 "write": true, 00:26:27.867 "unmap": true, 00:26:27.867 "flush": true, 00:26:27.867 "reset": true, 00:26:27.867 "nvme_admin": false, 00:26:27.867 "nvme_io": false, 00:26:27.867 "nvme_io_md": false, 00:26:27.867 "write_zeroes": true, 00:26:27.867 "zcopy": true, 00:26:27.867 "get_zone_info": false, 00:26:27.867 "zone_management": false, 00:26:27.867 "zone_append": false, 00:26:27.867 "compare": false, 00:26:27.867 "compare_and_write": false, 00:26:27.868 "abort": true, 00:26:27.868 "seek_hole": false, 00:26:27.868 "seek_data": false, 00:26:27.868 "copy": true, 00:26:27.868 "nvme_iov_md": false 00:26:27.868 }, 00:26:27.868 "memory_domains": [ 00:26:27.868 { 00:26:27.868 "dma_device_id": "system", 00:26:27.868 "dma_device_type": 1 00:26:27.868 }, 00:26:27.868 { 00:26:27.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:27.868 "dma_device_type": 2 00:26:27.868 } 00:26:27.868 ], 00:26:27.868 "driver_specific": {} 00:26:27.868 } 00:26:27.868 ] 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:27.868 11:38:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:27.868 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.127 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.127 "name": "Existed_Raid", 00:26:28.127 "uuid": "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:28.127 "strip_size_kb": 0, 00:26:28.127 "state": "configuring", 00:26:28.127 "raid_level": "raid1", 00:26:28.127 "superblock": true, 00:26:28.127 "num_base_bdevs": 4, 00:26:28.127 "num_base_bdevs_discovered": 3, 00:26:28.127 "num_base_bdevs_operational": 4, 00:26:28.127 "base_bdevs_list": [ 00:26:28.127 { 00:26:28.127 "name": "BaseBdev1", 00:26:28.127 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:28.127 "is_configured": true, 00:26:28.127 "data_offset": 2048, 00:26:28.127 "data_size": 63488 00:26:28.127 }, 00:26:28.127 { 00:26:28.127 "name": null, 00:26:28.127 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:28.127 "is_configured": false, 00:26:28.127 "data_offset": 2048, 00:26:28.127 "data_size": 63488 00:26:28.127 }, 00:26:28.127 { 00:26:28.127 "name": "BaseBdev3", 00:26:28.127 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:28.127 "is_configured": true, 00:26:28.127 "data_offset": 2048, 00:26:28.127 "data_size": 63488 00:26:28.127 }, 00:26:28.127 { 00:26:28.127 "name": "BaseBdev4", 00:26:28.127 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:28.127 "is_configured": true, 00:26:28.127 "data_offset": 2048, 00:26:28.127 "data_size": 63488 00:26:28.127 } 00:26:28.127 ] 00:26:28.127 }' 00:26:28.127 11:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.127 11:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:28.692 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.692 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:28.951 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:28.951 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:29.209 [2024-07-13 11:38:03.732701] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:29.209 11:38:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:29.209 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:29.209 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:29.209 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:29.209 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:29.209 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:29.209 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.210 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.210 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.210 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.210 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.210 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:29.210 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:29.210 "name": "Existed_Raid", 00:26:29.210 "uuid": "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:29.210 "strip_size_kb": 0, 00:26:29.210 "state": "configuring", 00:26:29.210 "raid_level": "raid1", 00:26:29.210 "superblock": true, 00:26:29.210 "num_base_bdevs": 4, 00:26:29.210 "num_base_bdevs_discovered": 2, 00:26:29.210 "num_base_bdevs_operational": 4, 00:26:29.210 "base_bdevs_list": [ 00:26:29.210 { 00:26:29.210 "name": "BaseBdev1", 00:26:29.210 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:29.210 "is_configured": true, 00:26:29.210 "data_offset": 2048, 00:26:29.210 "data_size": 63488 00:26:29.210 }, 00:26:29.210 { 00:26:29.210 "name": null, 00:26:29.210 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:29.210 "is_configured": false, 00:26:29.210 "data_offset": 2048, 00:26:29.210 "data_size": 63488 00:26:29.210 }, 00:26:29.210 { 00:26:29.210 "name": null, 00:26:29.210 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:29.210 "is_configured": false, 00:26:29.210 "data_offset": 2048, 00:26:29.210 "data_size": 63488 00:26:29.210 }, 00:26:29.210 { 00:26:29.210 "name": "BaseBdev4", 00:26:29.210 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:29.210 "is_configured": true, 00:26:29.210 "data_offset": 2048, 00:26:29.210 "data_size": 63488 00:26:29.210 } 00:26:29.210 ] 00:26:29.210 }' 00:26:29.210 11:38:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:29.210 11:38:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:30.158 11:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.158 11:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:30.158 11:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:30.158 11:38:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:30.443 [2024-07-13 11:38:05.017088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.443 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.712 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.712 "name": "Existed_Raid", 00:26:30.712 "uuid": "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:30.712 "strip_size_kb": 0, 00:26:30.712 "state": "configuring", 00:26:30.712 "raid_level": "raid1", 00:26:30.712 "superblock": true, 00:26:30.712 "num_base_bdevs": 4, 00:26:30.712 "num_base_bdevs_discovered": 3, 00:26:30.712 "num_base_bdevs_operational": 4, 00:26:30.712 "base_bdevs_list": [ 00:26:30.712 { 00:26:30.712 "name": "BaseBdev1", 00:26:30.712 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:30.712 "is_configured": true, 00:26:30.712 "data_offset": 2048, 00:26:30.712 "data_size": 63488 00:26:30.712 }, 00:26:30.712 { 00:26:30.712 "name": null, 00:26:30.712 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:30.712 "is_configured": false, 00:26:30.712 "data_offset": 2048, 00:26:30.712 "data_size": 63488 00:26:30.712 }, 00:26:30.712 { 00:26:30.712 "name": "BaseBdev3", 00:26:30.712 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:30.712 "is_configured": true, 00:26:30.712 "data_offset": 2048, 00:26:30.712 "data_size": 63488 00:26:30.712 }, 00:26:30.712 { 00:26:30.712 "name": "BaseBdev4", 00:26:30.712 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:30.712 "is_configured": true, 00:26:30.712 "data_offset": 2048, 00:26:30.712 "data_size": 63488 00:26:30.712 } 00:26:30.712 ] 00:26:30.712 }' 00:26:30.712 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.712 11:38:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.279 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.279 11:38:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:31.538 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:31.538 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:31.796 [2024-07-13 11:38:06.425408] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.796 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:32.054 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:32.054 "name": "Existed_Raid", 00:26:32.054 "uuid": "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:32.054 "strip_size_kb": 0, 00:26:32.054 "state": "configuring", 00:26:32.054 "raid_level": "raid1", 00:26:32.054 "superblock": true, 00:26:32.054 "num_base_bdevs": 4, 00:26:32.054 "num_base_bdevs_discovered": 2, 00:26:32.054 "num_base_bdevs_operational": 4, 00:26:32.054 "base_bdevs_list": [ 00:26:32.054 { 00:26:32.054 "name": null, 00:26:32.054 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:32.054 "is_configured": false, 00:26:32.054 "data_offset": 2048, 00:26:32.054 "data_size": 63488 00:26:32.054 }, 00:26:32.054 { 00:26:32.054 "name": null, 00:26:32.054 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:32.054 "is_configured": false, 00:26:32.054 "data_offset": 2048, 00:26:32.054 "data_size": 63488 00:26:32.054 }, 00:26:32.054 { 00:26:32.054 "name": "BaseBdev3", 00:26:32.054 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:32.054 "is_configured": true, 00:26:32.054 "data_offset": 2048, 00:26:32.054 "data_size": 63488 00:26:32.054 }, 00:26:32.054 { 00:26:32.054 "name": "BaseBdev4", 00:26:32.054 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:32.054 "is_configured": true, 00:26:32.054 "data_offset": 2048, 00:26:32.054 "data_size": 63488 00:26:32.054 } 
00:26:32.054 ] 00:26:32.054 }' 00:26:32.054 11:38:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:32.054 11:38:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.620 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.620 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:32.878 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:32.878 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:33.136 [2024-07-13 11:38:07.720690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.136 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:33.395 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.395 "name": "Existed_Raid", 00:26:33.395 "uuid": "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:33.395 "strip_size_kb": 0, 00:26:33.395 "state": "configuring", 00:26:33.395 "raid_level": "raid1", 00:26:33.395 "superblock": true, 00:26:33.395 "num_base_bdevs": 4, 00:26:33.395 "num_base_bdevs_discovered": 3, 00:26:33.395 "num_base_bdevs_operational": 4, 00:26:33.395 "base_bdevs_list": [ 00:26:33.395 { 00:26:33.395 "name": null, 00:26:33.395 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:33.395 "is_configured": false, 00:26:33.395 "data_offset": 2048, 00:26:33.395 "data_size": 63488 00:26:33.395 }, 00:26:33.395 { 00:26:33.395 "name": "BaseBdev2", 00:26:33.395 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:33.395 "is_configured": true, 00:26:33.395 "data_offset": 2048, 00:26:33.395 "data_size": 63488 00:26:33.395 }, 00:26:33.395 { 00:26:33.395 "name": "BaseBdev3", 00:26:33.395 "uuid": 
"db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:33.395 "is_configured": true, 00:26:33.395 "data_offset": 2048, 00:26:33.395 "data_size": 63488 00:26:33.395 }, 00:26:33.395 { 00:26:33.395 "name": "BaseBdev4", 00:26:33.395 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:33.395 "is_configured": true, 00:26:33.395 "data_offset": 2048, 00:26:33.395 "data_size": 63488 00:26:33.395 } 00:26:33.395 ] 00:26:33.395 }' 00:26:33.395 11:38:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:33.395 11:38:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.964 11:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.964 11:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:34.224 11:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:34.224 11:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.224 11:38:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:34.483 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 32061674-9fbc-4310-8a09-c86381202a6f 00:26:34.741 [2024-07-13 11:38:09.274926] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:34.741 [2024-07-13 11:38:09.275348] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:26:34.741 NewBaseBdev 00:26:34.741 [2024-07-13 11:38:09.275526] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:34.741 [2024-07-13 11:38:09.275719] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:34.741 [2024-07-13 11:38:09.276144] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:26:34.741 [2024-07-13 11:38:09.276287] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:26:34.741 [2024-07-13 11:38:09.276519] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:34.741 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:34.741 11:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:34.741 11:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:34.741 11:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:34.741 11:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:34.741 11:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:34.741 11:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:35.000 11:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:35.258 [ 00:26:35.258 { 00:26:35.258 "name": "NewBaseBdev", 00:26:35.258 "aliases": [ 00:26:35.258 "32061674-9fbc-4310-8a09-c86381202a6f" 00:26:35.258 ], 00:26:35.258 "product_name": "Malloc disk", 00:26:35.258 "block_size": 512, 00:26:35.258 "num_blocks": 65536, 00:26:35.258 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:35.258 "assigned_rate_limits": { 00:26:35.258 "rw_ios_per_sec": 0, 00:26:35.258 "rw_mbytes_per_sec": 0, 00:26:35.258 "r_mbytes_per_sec": 0, 00:26:35.258 "w_mbytes_per_sec": 0 00:26:35.258 }, 00:26:35.258 "claimed": true, 00:26:35.258 "claim_type": "exclusive_write", 00:26:35.258 "zoned": false, 00:26:35.258 "supported_io_types": { 00:26:35.258 "read": true, 00:26:35.258 "write": true, 00:26:35.258 "unmap": true, 00:26:35.258 "flush": true, 00:26:35.258 "reset": true, 00:26:35.258 "nvme_admin": false, 00:26:35.258 "nvme_io": false, 00:26:35.258 "nvme_io_md": false, 00:26:35.258 "write_zeroes": true, 00:26:35.258 "zcopy": true, 00:26:35.258 "get_zone_info": false, 00:26:35.258 "zone_management": false, 00:26:35.258 "zone_append": false, 00:26:35.258 "compare": false, 00:26:35.258 "compare_and_write": false, 00:26:35.258 "abort": true, 00:26:35.258 "seek_hole": false, 00:26:35.258 "seek_data": false, 00:26:35.258 "copy": true, 00:26:35.258 "nvme_iov_md": false 00:26:35.258 }, 00:26:35.258 "memory_domains": [ 00:26:35.258 { 00:26:35.258 "dma_device_id": "system", 00:26:35.258 "dma_device_type": 1 00:26:35.258 }, 00:26:35.258 { 00:26:35.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.258 "dma_device_type": 2 00:26:35.258 } 00:26:35.258 ], 00:26:35.258 "driver_specific": {} 00:26:35.258 } 00:26:35.258 ] 00:26:35.258 11:38:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:35.258 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:35.258 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:35.258 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:35.258 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:35.259 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:35.259 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:35.259 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:35.259 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:35.259 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:35.259 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:35.259 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.259 11:38:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:35.517 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:35.517 "name": "Existed_Raid", 00:26:35.517 "uuid": 
"56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:35.517 "strip_size_kb": 0, 00:26:35.517 "state": "online", 00:26:35.517 "raid_level": "raid1", 00:26:35.517 "superblock": true, 00:26:35.517 "num_base_bdevs": 4, 00:26:35.517 "num_base_bdevs_discovered": 4, 00:26:35.517 "num_base_bdevs_operational": 4, 00:26:35.517 "base_bdevs_list": [ 00:26:35.517 { 00:26:35.517 "name": "NewBaseBdev", 00:26:35.517 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:35.517 "is_configured": true, 00:26:35.517 "data_offset": 2048, 00:26:35.517 "data_size": 63488 00:26:35.517 }, 00:26:35.517 { 00:26:35.517 "name": "BaseBdev2", 00:26:35.518 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:35.518 "is_configured": true, 00:26:35.518 "data_offset": 2048, 00:26:35.518 "data_size": 63488 00:26:35.518 }, 00:26:35.518 { 00:26:35.518 "name": "BaseBdev3", 00:26:35.518 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:35.518 "is_configured": true, 00:26:35.518 "data_offset": 2048, 00:26:35.518 "data_size": 63488 00:26:35.518 }, 00:26:35.518 { 00:26:35.518 "name": "BaseBdev4", 00:26:35.518 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:35.518 "is_configured": true, 00:26:35.518 "data_offset": 2048, 00:26:35.518 "data_size": 63488 00:26:35.518 } 00:26:35.518 ] 00:26:35.518 }' 00:26:35.518 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:35.518 11:38:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:36.086 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:36.086 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:36.086 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:36.086 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:36.086 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:36.086 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:36.086 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:36.086 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:36.346 [2024-07-13 11:38:10.863717] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:36.346 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:36.346 "name": "Existed_Raid", 00:26:36.346 "aliases": [ 00:26:36.346 "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe" 00:26:36.346 ], 00:26:36.346 "product_name": "Raid Volume", 00:26:36.346 "block_size": 512, 00:26:36.346 "num_blocks": 63488, 00:26:36.346 "uuid": "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:36.346 "assigned_rate_limits": { 00:26:36.346 "rw_ios_per_sec": 0, 00:26:36.346 "rw_mbytes_per_sec": 0, 00:26:36.346 "r_mbytes_per_sec": 0, 00:26:36.346 "w_mbytes_per_sec": 0 00:26:36.346 }, 00:26:36.346 "claimed": false, 00:26:36.346 "zoned": false, 00:26:36.346 "supported_io_types": { 00:26:36.346 "read": true, 00:26:36.346 "write": true, 00:26:36.346 "unmap": false, 00:26:36.346 "flush": false, 00:26:36.346 "reset": true, 00:26:36.346 "nvme_admin": false, 00:26:36.346 "nvme_io": false, 00:26:36.346 "nvme_io_md": false, 00:26:36.346 
"write_zeroes": true, 00:26:36.346 "zcopy": false, 00:26:36.346 "get_zone_info": false, 00:26:36.346 "zone_management": false, 00:26:36.346 "zone_append": false, 00:26:36.346 "compare": false, 00:26:36.346 "compare_and_write": false, 00:26:36.346 "abort": false, 00:26:36.346 "seek_hole": false, 00:26:36.346 "seek_data": false, 00:26:36.346 "copy": false, 00:26:36.346 "nvme_iov_md": false 00:26:36.346 }, 00:26:36.346 "memory_domains": [ 00:26:36.346 { 00:26:36.346 "dma_device_id": "system", 00:26:36.346 "dma_device_type": 1 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.346 "dma_device_type": 2 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "dma_device_id": "system", 00:26:36.346 "dma_device_type": 1 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.346 "dma_device_type": 2 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "dma_device_id": "system", 00:26:36.346 "dma_device_type": 1 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.346 "dma_device_type": 2 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "dma_device_id": "system", 00:26:36.346 "dma_device_type": 1 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.346 "dma_device_type": 2 00:26:36.346 } 00:26:36.346 ], 00:26:36.346 "driver_specific": { 00:26:36.346 "raid": { 00:26:36.346 "uuid": "56cf21c5-a6cf-4b00-8a2b-b293eadd29fe", 00:26:36.346 "strip_size_kb": 0, 00:26:36.346 "state": "online", 00:26:36.346 "raid_level": "raid1", 00:26:36.346 "superblock": true, 00:26:36.346 "num_base_bdevs": 4, 00:26:36.346 "num_base_bdevs_discovered": 4, 00:26:36.346 "num_base_bdevs_operational": 4, 00:26:36.346 "base_bdevs_list": [ 00:26:36.346 { 00:26:36.346 "name": "NewBaseBdev", 00:26:36.346 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:36.346 "is_configured": true, 00:26:36.346 "data_offset": 2048, 00:26:36.346 "data_size": 63488 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "name": "BaseBdev2", 00:26:36.346 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:36.346 "is_configured": true, 00:26:36.346 "data_offset": 2048, 00:26:36.346 "data_size": 63488 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "name": "BaseBdev3", 00:26:36.346 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:36.346 "is_configured": true, 00:26:36.346 "data_offset": 2048, 00:26:36.346 "data_size": 63488 00:26:36.346 }, 00:26:36.346 { 00:26:36.346 "name": "BaseBdev4", 00:26:36.346 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:36.346 "is_configured": true, 00:26:36.346 "data_offset": 2048, 00:26:36.346 "data_size": 63488 00:26:36.346 } 00:26:36.346 ] 00:26:36.346 } 00:26:36.346 } 00:26:36.346 }' 00:26:36.346 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:36.346 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:36.346 BaseBdev2 00:26:36.346 BaseBdev3 00:26:36.346 BaseBdev4' 00:26:36.346 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:36.346 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:36.346 11:38:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:36.606 11:38:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:36.606 "name": "NewBaseBdev", 00:26:36.606 "aliases": [ 00:26:36.606 "32061674-9fbc-4310-8a09-c86381202a6f" 00:26:36.606 ], 00:26:36.606 "product_name": "Malloc disk", 00:26:36.606 "block_size": 512, 00:26:36.606 "num_blocks": 65536, 00:26:36.606 "uuid": "32061674-9fbc-4310-8a09-c86381202a6f", 00:26:36.606 "assigned_rate_limits": { 00:26:36.606 "rw_ios_per_sec": 0, 00:26:36.606 "rw_mbytes_per_sec": 0, 00:26:36.606 "r_mbytes_per_sec": 0, 00:26:36.606 "w_mbytes_per_sec": 0 00:26:36.606 }, 00:26:36.606 "claimed": true, 00:26:36.606 "claim_type": "exclusive_write", 00:26:36.606 "zoned": false, 00:26:36.606 "supported_io_types": { 00:26:36.606 "read": true, 00:26:36.606 "write": true, 00:26:36.606 "unmap": true, 00:26:36.606 "flush": true, 00:26:36.606 "reset": true, 00:26:36.606 "nvme_admin": false, 00:26:36.606 "nvme_io": false, 00:26:36.606 "nvme_io_md": false, 00:26:36.606 "write_zeroes": true, 00:26:36.606 "zcopy": true, 00:26:36.606 "get_zone_info": false, 00:26:36.606 "zone_management": false, 00:26:36.606 "zone_append": false, 00:26:36.606 "compare": false, 00:26:36.606 "compare_and_write": false, 00:26:36.606 "abort": true, 00:26:36.606 "seek_hole": false, 00:26:36.606 "seek_data": false, 00:26:36.606 "copy": true, 00:26:36.606 "nvme_iov_md": false 00:26:36.606 }, 00:26:36.606 "memory_domains": [ 00:26:36.606 { 00:26:36.606 "dma_device_id": "system", 00:26:36.606 "dma_device_type": 1 00:26:36.606 }, 00:26:36.606 { 00:26:36.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.606 "dma_device_type": 2 00:26:36.606 } 00:26:36.606 ], 00:26:36.606 "driver_specific": {} 00:26:36.606 }' 00:26:36.606 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:36.606 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:36.606 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:36.606 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:36.865 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:36.865 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:36.865 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:36.865 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:36.865 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:36.865 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:36.865 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:37.123 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:37.123 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:37.123 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:37.123 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:37.124 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:37.124 "name": "BaseBdev2", 00:26:37.124 "aliases": [ 
00:26:37.124 "a7a3f386-3c18-4eba-a86e-1c0b2fd36945" 00:26:37.124 ], 00:26:37.124 "product_name": "Malloc disk", 00:26:37.124 "block_size": 512, 00:26:37.124 "num_blocks": 65536, 00:26:37.124 "uuid": "a7a3f386-3c18-4eba-a86e-1c0b2fd36945", 00:26:37.124 "assigned_rate_limits": { 00:26:37.124 "rw_ios_per_sec": 0, 00:26:37.124 "rw_mbytes_per_sec": 0, 00:26:37.124 "r_mbytes_per_sec": 0, 00:26:37.124 "w_mbytes_per_sec": 0 00:26:37.124 }, 00:26:37.124 "claimed": true, 00:26:37.124 "claim_type": "exclusive_write", 00:26:37.124 "zoned": false, 00:26:37.124 "supported_io_types": { 00:26:37.124 "read": true, 00:26:37.124 "write": true, 00:26:37.124 "unmap": true, 00:26:37.124 "flush": true, 00:26:37.124 "reset": true, 00:26:37.124 "nvme_admin": false, 00:26:37.124 "nvme_io": false, 00:26:37.124 "nvme_io_md": false, 00:26:37.124 "write_zeroes": true, 00:26:37.124 "zcopy": true, 00:26:37.124 "get_zone_info": false, 00:26:37.124 "zone_management": false, 00:26:37.124 "zone_append": false, 00:26:37.124 "compare": false, 00:26:37.124 "compare_and_write": false, 00:26:37.124 "abort": true, 00:26:37.124 "seek_hole": false, 00:26:37.124 "seek_data": false, 00:26:37.124 "copy": true, 00:26:37.124 "nvme_iov_md": false 00:26:37.124 }, 00:26:37.124 "memory_domains": [ 00:26:37.124 { 00:26:37.124 "dma_device_id": "system", 00:26:37.124 "dma_device_type": 1 00:26:37.124 }, 00:26:37.124 { 00:26:37.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.124 "dma_device_type": 2 00:26:37.124 } 00:26:37.124 ], 00:26:37.124 "driver_specific": {} 00:26:37.124 }' 00:26:37.124 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:37.382 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:37.382 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:37.382 11:38:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:37.382 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:37.382 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:37.382 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:37.640 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:37.640 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:37.640 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:37.640 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:37.640 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:37.640 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:37.640 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:37.640 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:37.898 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:37.898 "name": "BaseBdev3", 00:26:37.898 "aliases": [ 00:26:37.898 "db6a8b0b-f306-41ce-89d2-076c4badf597" 00:26:37.898 ], 00:26:37.898 "product_name": "Malloc disk", 00:26:37.898 "block_size": 512, 
00:26:37.898 "num_blocks": 65536, 00:26:37.898 "uuid": "db6a8b0b-f306-41ce-89d2-076c4badf597", 00:26:37.898 "assigned_rate_limits": { 00:26:37.898 "rw_ios_per_sec": 0, 00:26:37.898 "rw_mbytes_per_sec": 0, 00:26:37.898 "r_mbytes_per_sec": 0, 00:26:37.898 "w_mbytes_per_sec": 0 00:26:37.898 }, 00:26:37.898 "claimed": true, 00:26:37.898 "claim_type": "exclusive_write", 00:26:37.898 "zoned": false, 00:26:37.898 "supported_io_types": { 00:26:37.898 "read": true, 00:26:37.898 "write": true, 00:26:37.898 "unmap": true, 00:26:37.898 "flush": true, 00:26:37.898 "reset": true, 00:26:37.898 "nvme_admin": false, 00:26:37.898 "nvme_io": false, 00:26:37.898 "nvme_io_md": false, 00:26:37.898 "write_zeroes": true, 00:26:37.898 "zcopy": true, 00:26:37.898 "get_zone_info": false, 00:26:37.898 "zone_management": false, 00:26:37.898 "zone_append": false, 00:26:37.898 "compare": false, 00:26:37.898 "compare_and_write": false, 00:26:37.898 "abort": true, 00:26:37.898 "seek_hole": false, 00:26:37.898 "seek_data": false, 00:26:37.898 "copy": true, 00:26:37.898 "nvme_iov_md": false 00:26:37.898 }, 00:26:37.898 "memory_domains": [ 00:26:37.898 { 00:26:37.898 "dma_device_id": "system", 00:26:37.898 "dma_device_type": 1 00:26:37.898 }, 00:26:37.898 { 00:26:37.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.898 "dma_device_type": 2 00:26:37.898 } 00:26:37.898 ], 00:26:37.898 "driver_specific": {} 00:26:37.898 }' 00:26:37.898 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:37.898 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:38.157 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:38.157 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:38.157 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:38.157 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:38.157 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:38.157 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:38.157 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:38.157 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:38.414 11:38:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:38.414 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:38.414 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:38.414 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:38.414 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:38.673 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:38.673 "name": "BaseBdev4", 00:26:38.673 "aliases": [ 00:26:38.673 "cabaa99f-9bed-459c-8177-e984bff7d912" 00:26:38.673 ], 00:26:38.673 "product_name": "Malloc disk", 00:26:38.673 "block_size": 512, 00:26:38.673 "num_blocks": 65536, 00:26:38.673 "uuid": "cabaa99f-9bed-459c-8177-e984bff7d912", 00:26:38.673 "assigned_rate_limits": { 00:26:38.673 
"rw_ios_per_sec": 0, 00:26:38.673 "rw_mbytes_per_sec": 0, 00:26:38.673 "r_mbytes_per_sec": 0, 00:26:38.673 "w_mbytes_per_sec": 0 00:26:38.673 }, 00:26:38.673 "claimed": true, 00:26:38.673 "claim_type": "exclusive_write", 00:26:38.673 "zoned": false, 00:26:38.673 "supported_io_types": { 00:26:38.673 "read": true, 00:26:38.673 "write": true, 00:26:38.673 "unmap": true, 00:26:38.673 "flush": true, 00:26:38.673 "reset": true, 00:26:38.673 "nvme_admin": false, 00:26:38.673 "nvme_io": false, 00:26:38.673 "nvme_io_md": false, 00:26:38.673 "write_zeroes": true, 00:26:38.673 "zcopy": true, 00:26:38.673 "get_zone_info": false, 00:26:38.673 "zone_management": false, 00:26:38.673 "zone_append": false, 00:26:38.673 "compare": false, 00:26:38.673 "compare_and_write": false, 00:26:38.673 "abort": true, 00:26:38.673 "seek_hole": false, 00:26:38.673 "seek_data": false, 00:26:38.673 "copy": true, 00:26:38.673 "nvme_iov_md": false 00:26:38.673 }, 00:26:38.673 "memory_domains": [ 00:26:38.673 { 00:26:38.673 "dma_device_id": "system", 00:26:38.673 "dma_device_type": 1 00:26:38.673 }, 00:26:38.673 { 00:26:38.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:38.673 "dma_device_type": 2 00:26:38.673 } 00:26:38.673 ], 00:26:38.673 "driver_specific": {} 00:26:38.673 }' 00:26:38.673 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:38.673 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:38.673 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:38.673 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:38.673 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:38.931 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:38.931 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:38.931 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:38.931 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:38.931 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:38.931 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:38.931 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:38.931 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:39.189 [2024-07-13 11:38:13.924563] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:39.189 [2024-07-13 11:38:13.924783] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:39.189 [2024-07-13 11:38:13.925085] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:39.189 [2024-07-13 11:38:13.925605] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:39.189 [2024-07-13 11:38:13.925831] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 142244 00:26:39.447 11:38:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 142244 ']' 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 142244 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142244 00:26:39.447 killing process with pid 142244 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142244' 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 142244 00:26:39.447 11:38:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 142244 00:26:39.447 [2024-07-13 11:38:13.959410] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:39.706 [2024-07-13 11:38:14.227319] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:40.642 ************************************ 00:26:40.642 END TEST raid_state_function_test_sb 00:26:40.642 ************************************ 00:26:40.642 11:38:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:26:40.642 00:26:40.642 real 0m34.126s 00:26:40.642 user 1m4.411s 00:26:40.642 sys 0m3.518s 00:26:40.642 11:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:40.642 11:38:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.642 11:38:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:40.642 11:38:15 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:26:40.642 11:38:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:40.642 11:38:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:40.642 11:38:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:40.642 ************************************ 00:26:40.642 START TEST raid_superblock_test 00:26:40.642 ************************************ 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:40.642 11:38:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=143395 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 143395 /var/tmp/spdk-raid.sock 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 143395 ']' 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:40.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.642 11:38:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:40.900 [2024-07-13 11:38:15.395228] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
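The raid_superblock_test run that starts here builds each of its four base devices as a 32 MiB malloc bdev wrapped in a passthru bdev with a fixed UUID, then assembles them into a raid1 volume, as the trace that follows shows. A condensed sketch of that setup sequence is given below; the RPC calls, sizes, names, and UUIDs are taken from this run, while the loop and the rpc wrapper function are a simplification of the per-bdev calls in the trace and assume the target is already up on the raid socket.

# Condensed sketch of the base-bdev setup performed by the trace below.
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, as in the trace.
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    # Wrap it in a passthru bdev with a well-known UUID (pt1..pt4 in the trace).
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble the passthru bdevs into raid_bdev1; -s is the extra flag this
# superblock variant of the test passes when creating the raid (see the trace below).
rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s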
00:26:40.900 [2024-07-13 11:38:15.395677] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143395 ] 00:26:40.900 [2024-07-13 11:38:15.571632] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.158 [2024-07-13 11:38:15.805998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.416 [2024-07-13 11:38:15.974185] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:41.674 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:41.932 malloc1 00:26:41.932 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:42.213 [2024-07-13 11:38:16.847848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:42.213 [2024-07-13 11:38:16.848088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.213 [2024-07-13 11:38:16.848157] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:26:42.213 [2024-07-13 11:38:16.848400] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.213 [2024-07-13 11:38:16.850644] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.213 [2024-07-13 11:38:16.850820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:42.213 pt1 00:26:42.213 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:42.213 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:42.213 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:42.213 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:42.213 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:42.213 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:26:42.213 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:42.213 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:42.213 11:38:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:42.472 malloc2 00:26:42.472 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:42.730 [2024-07-13 11:38:17.281084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:42.730 [2024-07-13 11:38:17.281348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.730 [2024-07-13 11:38:17.281516] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:26:42.730 [2024-07-13 11:38:17.281630] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.730 [2024-07-13 11:38:17.283831] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.730 [2024-07-13 11:38:17.284006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:42.730 pt2 00:26:42.730 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:42.730 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:42.730 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:26:42.730 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:26:42.730 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:42.730 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:42.730 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:42.730 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:42.730 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:42.988 malloc3 00:26:42.988 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:42.988 [2024-07-13 11:38:17.673809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:42.988 [2024-07-13 11:38:17.674006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.988 [2024-07-13 11:38:17.674068] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:26:42.988 [2024-07-13 11:38:17.674171] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.988 [2024-07-13 11:38:17.676268] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.988 [2024-07-13 11:38:17.676409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:42.988 pt3 00:26:42.988 
11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:42.988 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:42.988 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:26:42.988 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:26:42.988 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:42.988 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:42.988 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:42.988 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:42.988 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:43.246 malloc4 00:26:43.246 11:38:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:43.504 [2024-07-13 11:38:18.074492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:43.504 [2024-07-13 11:38:18.074714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.504 [2024-07-13 11:38:18.074782] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:26:43.504 [2024-07-13 11:38:18.075038] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.504 [2024-07-13 11:38:18.077175] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.504 [2024-07-13 11:38:18.077330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:43.504 pt4 00:26:43.504 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:43.504 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:43.504 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:43.763 [2024-07-13 11:38:18.262556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:43.763 [2024-07-13 11:38:18.264483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:43.763 [2024-07-13 11:38:18.264665] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:43.763 [2024-07-13 11:38:18.264758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:43.763 [2024-07-13 11:38:18.265120] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:26:43.763 [2024-07-13 11:38:18.265257] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:43.763 [2024-07-13 11:38:18.265423] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:26:43.763 [2024-07-13 11:38:18.265939] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:26:43.763 [2024-07-13 11:38:18.266059] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:26:43.763 [2024-07-13 11:38:18.266291] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:43.763 "name": "raid_bdev1", 00:26:43.763 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:43.763 "strip_size_kb": 0, 00:26:43.763 "state": "online", 00:26:43.763 "raid_level": "raid1", 00:26:43.763 "superblock": true, 00:26:43.763 "num_base_bdevs": 4, 00:26:43.763 "num_base_bdevs_discovered": 4, 00:26:43.763 "num_base_bdevs_operational": 4, 00:26:43.763 "base_bdevs_list": [ 00:26:43.763 { 00:26:43.763 "name": "pt1", 00:26:43.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:43.763 "is_configured": true, 00:26:43.763 "data_offset": 2048, 00:26:43.763 "data_size": 63488 00:26:43.763 }, 00:26:43.763 { 00:26:43.763 "name": "pt2", 00:26:43.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:43.763 "is_configured": true, 00:26:43.763 "data_offset": 2048, 00:26:43.763 "data_size": 63488 00:26:43.763 }, 00:26:43.763 { 00:26:43.763 "name": "pt3", 00:26:43.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:43.763 "is_configured": true, 00:26:43.763 "data_offset": 2048, 00:26:43.763 "data_size": 63488 00:26:43.763 }, 00:26:43.763 { 00:26:43.763 "name": "pt4", 00:26:43.763 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:43.763 "is_configured": true, 00:26:43.763 "data_offset": 2048, 00:26:43.763 "data_size": 63488 00:26:43.763 } 00:26:43.763 ] 00:26:43.763 }' 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:43.763 11:38:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.699 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:26:44.699 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:44.699 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:44.699 
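With the four passthru bdevs registered, the test assembles them into a raid1 volume whose superblock is persisted on the members (the -s flag to bdev_raid_create), and verify_raid_bdev_state then pulls the volume out of bdev_raid_get_bdevs and checks its fields, which is where the JSON blob above comes from. A hedged sketch of that create-and-verify step, using only the RPCs and jq filters visible in the trace; the inline assertions stand in for the helper's comparisons rather than reproducing them.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # raid1 over the passthru bdevs, with an on-member superblock (-s).
    $rpc bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

    # Fetch the volume's entry and assert the fields the test cares about.
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == online ]] || exit 1
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]] || exit 1
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 4 ]] || exit 1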
11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:44.699 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:44.699 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:44.699 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:44.699 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:44.699 [2024-07-13 11:38:19.302950] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:44.699 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:44.699 "name": "raid_bdev1", 00:26:44.699 "aliases": [ 00:26:44.699 "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89" 00:26:44.699 ], 00:26:44.699 "product_name": "Raid Volume", 00:26:44.699 "block_size": 512, 00:26:44.699 "num_blocks": 63488, 00:26:44.699 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:44.699 "assigned_rate_limits": { 00:26:44.699 "rw_ios_per_sec": 0, 00:26:44.699 "rw_mbytes_per_sec": 0, 00:26:44.699 "r_mbytes_per_sec": 0, 00:26:44.699 "w_mbytes_per_sec": 0 00:26:44.699 }, 00:26:44.699 "claimed": false, 00:26:44.699 "zoned": false, 00:26:44.699 "supported_io_types": { 00:26:44.699 "read": true, 00:26:44.699 "write": true, 00:26:44.699 "unmap": false, 00:26:44.699 "flush": false, 00:26:44.699 "reset": true, 00:26:44.699 "nvme_admin": false, 00:26:44.699 "nvme_io": false, 00:26:44.699 "nvme_io_md": false, 00:26:44.699 "write_zeroes": true, 00:26:44.699 "zcopy": false, 00:26:44.699 "get_zone_info": false, 00:26:44.699 "zone_management": false, 00:26:44.699 "zone_append": false, 00:26:44.699 "compare": false, 00:26:44.699 "compare_and_write": false, 00:26:44.699 "abort": false, 00:26:44.700 "seek_hole": false, 00:26:44.700 "seek_data": false, 00:26:44.700 "copy": false, 00:26:44.700 "nvme_iov_md": false 00:26:44.700 }, 00:26:44.700 "memory_domains": [ 00:26:44.700 { 00:26:44.700 "dma_device_id": "system", 00:26:44.700 "dma_device_type": 1 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.700 "dma_device_type": 2 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "dma_device_id": "system", 00:26:44.700 "dma_device_type": 1 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.700 "dma_device_type": 2 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "dma_device_id": "system", 00:26:44.700 "dma_device_type": 1 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.700 "dma_device_type": 2 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "dma_device_id": "system", 00:26:44.700 "dma_device_type": 1 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.700 "dma_device_type": 2 00:26:44.700 } 00:26:44.700 ], 00:26:44.700 "driver_specific": { 00:26:44.700 "raid": { 00:26:44.700 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:44.700 "strip_size_kb": 0, 00:26:44.700 "state": "online", 00:26:44.700 "raid_level": "raid1", 00:26:44.700 "superblock": true, 00:26:44.700 "num_base_bdevs": 4, 00:26:44.700 "num_base_bdevs_discovered": 4, 00:26:44.700 "num_base_bdevs_operational": 4, 00:26:44.700 "base_bdevs_list": [ 00:26:44.700 { 00:26:44.700 "name": "pt1", 00:26:44.700 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:44.700 "is_configured": true, 00:26:44.700 
"data_offset": 2048, 00:26:44.700 "data_size": 63488 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "name": "pt2", 00:26:44.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:44.700 "is_configured": true, 00:26:44.700 "data_offset": 2048, 00:26:44.700 "data_size": 63488 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "name": "pt3", 00:26:44.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:44.700 "is_configured": true, 00:26:44.700 "data_offset": 2048, 00:26:44.700 "data_size": 63488 00:26:44.700 }, 00:26:44.700 { 00:26:44.700 "name": "pt4", 00:26:44.700 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:44.700 "is_configured": true, 00:26:44.700 "data_offset": 2048, 00:26:44.700 "data_size": 63488 00:26:44.700 } 00:26:44.700 ] 00:26:44.700 } 00:26:44.700 } 00:26:44.700 }' 00:26:44.700 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:44.700 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:44.700 pt2 00:26:44.700 pt3 00:26:44.700 pt4' 00:26:44.700 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:44.700 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:44.700 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:44.959 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:44.959 "name": "pt1", 00:26:44.959 "aliases": [ 00:26:44.959 "00000000-0000-0000-0000-000000000001" 00:26:44.959 ], 00:26:44.959 "product_name": "passthru", 00:26:44.959 "block_size": 512, 00:26:44.959 "num_blocks": 65536, 00:26:44.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:44.959 "assigned_rate_limits": { 00:26:44.959 "rw_ios_per_sec": 0, 00:26:44.959 "rw_mbytes_per_sec": 0, 00:26:44.959 "r_mbytes_per_sec": 0, 00:26:44.959 "w_mbytes_per_sec": 0 00:26:44.959 }, 00:26:44.959 "claimed": true, 00:26:44.959 "claim_type": "exclusive_write", 00:26:44.959 "zoned": false, 00:26:44.959 "supported_io_types": { 00:26:44.959 "read": true, 00:26:44.959 "write": true, 00:26:44.959 "unmap": true, 00:26:44.959 "flush": true, 00:26:44.959 "reset": true, 00:26:44.959 "nvme_admin": false, 00:26:44.959 "nvme_io": false, 00:26:44.959 "nvme_io_md": false, 00:26:44.959 "write_zeroes": true, 00:26:44.959 "zcopy": true, 00:26:44.959 "get_zone_info": false, 00:26:44.959 "zone_management": false, 00:26:44.959 "zone_append": false, 00:26:44.959 "compare": false, 00:26:44.959 "compare_and_write": false, 00:26:44.959 "abort": true, 00:26:44.959 "seek_hole": false, 00:26:44.959 "seek_data": false, 00:26:44.959 "copy": true, 00:26:44.959 "nvme_iov_md": false 00:26:44.959 }, 00:26:44.959 "memory_domains": [ 00:26:44.959 { 00:26:44.959 "dma_device_id": "system", 00:26:44.959 "dma_device_type": 1 00:26:44.959 }, 00:26:44.959 { 00:26:44.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:44.959 "dma_device_type": 2 00:26:44.959 } 00:26:44.959 ], 00:26:44.959 "driver_specific": { 00:26:44.959 "passthru": { 00:26:44.959 "name": "pt1", 00:26:44.959 "base_bdev_name": "malloc1" 00:26:44.959 } 00:26:44.959 } 00:26:44.959 }' 00:26:44.959 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:44.959 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:45.218 11:38:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:45.218 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.218 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.218 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:45.218 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.218 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.476 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:45.476 11:38:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:45.476 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:45.476 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:45.476 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:45.476 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:45.476 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:45.735 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:45.735 "name": "pt2", 00:26:45.735 "aliases": [ 00:26:45.735 "00000000-0000-0000-0000-000000000002" 00:26:45.735 ], 00:26:45.735 "product_name": "passthru", 00:26:45.735 "block_size": 512, 00:26:45.735 "num_blocks": 65536, 00:26:45.735 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:45.735 "assigned_rate_limits": { 00:26:45.735 "rw_ios_per_sec": 0, 00:26:45.735 "rw_mbytes_per_sec": 0, 00:26:45.735 "r_mbytes_per_sec": 0, 00:26:45.735 "w_mbytes_per_sec": 0 00:26:45.735 }, 00:26:45.735 "claimed": true, 00:26:45.735 "claim_type": "exclusive_write", 00:26:45.735 "zoned": false, 00:26:45.735 "supported_io_types": { 00:26:45.735 "read": true, 00:26:45.735 "write": true, 00:26:45.735 "unmap": true, 00:26:45.735 "flush": true, 00:26:45.735 "reset": true, 00:26:45.735 "nvme_admin": false, 00:26:45.735 "nvme_io": false, 00:26:45.735 "nvme_io_md": false, 00:26:45.735 "write_zeroes": true, 00:26:45.735 "zcopy": true, 00:26:45.735 "get_zone_info": false, 00:26:45.735 "zone_management": false, 00:26:45.735 "zone_append": false, 00:26:45.735 "compare": false, 00:26:45.735 "compare_and_write": false, 00:26:45.735 "abort": true, 00:26:45.735 "seek_hole": false, 00:26:45.735 "seek_data": false, 00:26:45.735 "copy": true, 00:26:45.735 "nvme_iov_md": false 00:26:45.735 }, 00:26:45.735 "memory_domains": [ 00:26:45.735 { 00:26:45.735 "dma_device_id": "system", 00:26:45.735 "dma_device_type": 1 00:26:45.735 }, 00:26:45.735 { 00:26:45.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:45.736 "dma_device_type": 2 00:26:45.736 } 00:26:45.736 ], 00:26:45.736 "driver_specific": { 00:26:45.736 "passthru": { 00:26:45.736 "name": "pt2", 00:26:45.736 "base_bdev_name": "malloc2" 00:26:45.736 } 00:26:45.736 } 00:26:45.736 }' 00:26:45.736 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:45.736 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:45.736 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:45.994 11:38:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.994 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:45.994 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:45.994 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.994 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:45.994 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:45.994 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:45.994 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:46.253 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:46.253 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:46.253 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:46.253 11:38:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:46.512 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:46.512 "name": "pt3", 00:26:46.512 "aliases": [ 00:26:46.512 "00000000-0000-0000-0000-000000000003" 00:26:46.512 ], 00:26:46.512 "product_name": "passthru", 00:26:46.512 "block_size": 512, 00:26:46.512 "num_blocks": 65536, 00:26:46.512 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:46.512 "assigned_rate_limits": { 00:26:46.512 "rw_ios_per_sec": 0, 00:26:46.512 "rw_mbytes_per_sec": 0, 00:26:46.512 "r_mbytes_per_sec": 0, 00:26:46.512 "w_mbytes_per_sec": 0 00:26:46.512 }, 00:26:46.512 "claimed": true, 00:26:46.512 "claim_type": "exclusive_write", 00:26:46.512 "zoned": false, 00:26:46.512 "supported_io_types": { 00:26:46.512 "read": true, 00:26:46.512 "write": true, 00:26:46.512 "unmap": true, 00:26:46.512 "flush": true, 00:26:46.512 "reset": true, 00:26:46.512 "nvme_admin": false, 00:26:46.512 "nvme_io": false, 00:26:46.512 "nvme_io_md": false, 00:26:46.512 "write_zeroes": true, 00:26:46.512 "zcopy": true, 00:26:46.512 "get_zone_info": false, 00:26:46.512 "zone_management": false, 00:26:46.512 "zone_append": false, 00:26:46.512 "compare": false, 00:26:46.512 "compare_and_write": false, 00:26:46.512 "abort": true, 00:26:46.512 "seek_hole": false, 00:26:46.512 "seek_data": false, 00:26:46.512 "copy": true, 00:26:46.512 "nvme_iov_md": false 00:26:46.512 }, 00:26:46.512 "memory_domains": [ 00:26:46.512 { 00:26:46.512 "dma_device_id": "system", 00:26:46.512 "dma_device_type": 1 00:26:46.512 }, 00:26:46.512 { 00:26:46.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.512 "dma_device_type": 2 00:26:46.512 } 00:26:46.512 ], 00:26:46.512 "driver_specific": { 00:26:46.512 "passthru": { 00:26:46.512 "name": "pt3", 00:26:46.512 "base_bdev_name": "malloc3" 00:26:46.512 } 00:26:46.512 } 00:26:46.512 }' 00:26:46.512 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:46.512 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:46.512 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:46.512 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:46.512 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:46.770 
11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:46.770 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:46.770 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:46.770 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:46.770 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:46.770 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:46.770 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:46.770 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:46.770 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:46.770 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:47.029 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:47.029 "name": "pt4", 00:26:47.029 "aliases": [ 00:26:47.029 "00000000-0000-0000-0000-000000000004" 00:26:47.029 ], 00:26:47.029 "product_name": "passthru", 00:26:47.029 "block_size": 512, 00:26:47.029 "num_blocks": 65536, 00:26:47.029 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:47.029 "assigned_rate_limits": { 00:26:47.029 "rw_ios_per_sec": 0, 00:26:47.029 "rw_mbytes_per_sec": 0, 00:26:47.029 "r_mbytes_per_sec": 0, 00:26:47.029 "w_mbytes_per_sec": 0 00:26:47.029 }, 00:26:47.029 "claimed": true, 00:26:47.029 "claim_type": "exclusive_write", 00:26:47.029 "zoned": false, 00:26:47.029 "supported_io_types": { 00:26:47.029 "read": true, 00:26:47.029 "write": true, 00:26:47.029 "unmap": true, 00:26:47.029 "flush": true, 00:26:47.029 "reset": true, 00:26:47.029 "nvme_admin": false, 00:26:47.029 "nvme_io": false, 00:26:47.029 "nvme_io_md": false, 00:26:47.029 "write_zeroes": true, 00:26:47.029 "zcopy": true, 00:26:47.029 "get_zone_info": false, 00:26:47.029 "zone_management": false, 00:26:47.029 "zone_append": false, 00:26:47.029 "compare": false, 00:26:47.029 "compare_and_write": false, 00:26:47.029 "abort": true, 00:26:47.029 "seek_hole": false, 00:26:47.029 "seek_data": false, 00:26:47.029 "copy": true, 00:26:47.029 "nvme_iov_md": false 00:26:47.029 }, 00:26:47.029 "memory_domains": [ 00:26:47.029 { 00:26:47.029 "dma_device_id": "system", 00:26:47.029 "dma_device_type": 1 00:26:47.029 }, 00:26:47.029 { 00:26:47.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.029 "dma_device_type": 2 00:26:47.029 } 00:26:47.029 ], 00:26:47.029 "driver_specific": { 00:26:47.029 "passthru": { 00:26:47.029 "name": "pt4", 00:26:47.029 "base_bdev_name": "malloc4" 00:26:47.029 } 00:26:47.029 } 00:26:47.029 }' 00:26:47.029 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:47.288 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:47.288 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:47.288 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:47.288 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:47.288 11:38:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:47.288 11:38:21 bdev_raid.raid_superblock_test -- 
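The jq probes repeated through the trace (block_size, md_size, md_interleave, dif_type) are the per-member half of verify_raid_bdev_properties: the configured base bdev names are extracted from the volume's driver_specific.raid section, then each passthru member is queried individually and its values compared (512, null, null and null in this run). A sketch of that loop with the observed values hard-coded as assertions, which is a simplification of the script's own comparisons:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    raid_info=$($rpc bdev_get_bdevs -b raid_bdev1 | jq '.[]')
    names=$(jq -r '.driver_specific.raid.base_bdevs_list[]
                   | select(.is_configured == true).name' <<< "$raid_info")

    for name in $names; do
        base=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
        # raid1 over plain passthru/malloc members: 512-byte blocks,
        # no separate metadata, no interleaved metadata, no DIF.
        [[ $(jq .block_size    <<< "$base") == 512  ]] || exit 1
        [[ $(jq .md_size       <<< "$base") == null ]] || exit 1
        [[ $(jq .md_interleave <<< "$base") == null ]] || exit 1
        [[ $(jq .dif_type      <<< "$base") == null ]] || exit 1
    done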
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:47.288 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:47.547 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:47.547 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:47.547 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:47.547 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:47.547 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:47.547 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:26:47.806 [2024-07-13 11:38:22.447598] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:47.806 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f7c3b131-a95d-4b5a-85cf-ea54e2cefc89 00:26:47.806 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f7c3b131-a95d-4b5a-85cf-ea54e2cefc89 ']' 00:26:47.806 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:48.065 [2024-07-13 11:38:22.695420] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:48.065 [2024-07-13 11:38:22.695556] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:48.065 [2024-07-13 11:38:22.695705] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:48.065 [2024-07-13 11:38:22.695875] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:48.065 [2024-07-13 11:38:22.695969] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:26:48.065 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.065 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:26:48.322 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:26:48.322 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:26:48.322 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:48.322 11:38:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:48.580 11:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:48.580 11:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:48.580 11:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:48.580 11:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:48.838 11:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:48.838 11:38:23 
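Teardown follows: the test saves the volume's UUID from bdev_get_bdevs, deletes raid_bdev1, confirms that bdev_raid_get_bdevs no longer reports it, and then removes the passthru members one by one (the loop finishes with pt4 just below). The malloc bdevs, and the raid superblock that was written through them, are left in place, which the next steps rely on. A compact sketch of that sequence:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Remember the UUID before deleting the volume.
    raid_bdev_uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [[ -n $raid_bdev_uuid ]] || exit 1

    $rpc bdev_raid_delete raid_bdev1
    # No raid bdevs should remain after the delete.
    [[ -z $($rpc bdev_raid_get_bdevs all | jq -r '.[]') ]] || exit 1

    # Drop the passthru layer; the malloc bdevs underneath keep the superblock.
    for pt in pt1 pt2 pt3 pt4; do
        $rpc bdev_passthru_delete "$pt"
    done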
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:49.096 11:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:49.096 11:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:49.354 11:38:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:49.354 [2024-07-13 11:38:24.075642] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:49.354 [2024-07-13 11:38:24.077548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:49.354 [2024-07-13 11:38:24.077726] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:49.354 [2024-07-13 11:38:24.077799] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:49.354 [2024-07-13 11:38:24.077993] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:49.354 [2024-07-13 11:38:24.078225] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:49.354 [2024-07-13 11:38:24.078363] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:49.354 [2024-07-13 11:38:24.078497] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:49.354 [2024-07-13 11:38:24.078621] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:49.354 [2024-07-13 11:38:24.078657] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:26:49.354 request: 00:26:49.354 { 00:26:49.354 "name": "raid_bdev1", 00:26:49.354 "raid_level": "raid1", 00:26:49.354 "base_bdevs": [ 00:26:49.354 "malloc1", 00:26:49.354 "malloc2", 00:26:49.354 "malloc3", 00:26:49.354 "malloc4" 00:26:49.354 ], 00:26:49.354 "superblock": false, 00:26:49.354 "method": "bdev_raid_create", 00:26:49.354 "req_id": 1 00:26:49.354 } 00:26:49.354 Got JSON-RPC error response 00:26:49.354 response: 00:26:49.354 { 00:26:49.354 "code": -17, 00:26:49.354 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:49.354 } 00:26:49.354 11:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:26:49.354 11:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:49.354 11:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:49.354 11:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:49.354 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.354 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:26:49.611 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:26:49.611 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:26:49.611 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:49.869 [2024-07-13 11:38:24.515671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:49.869 [2024-07-13 11:38:24.515869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.869 [2024-07-13 11:38:24.515928] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:26:49.869 [2024-07-13 11:38:24.516068] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:49.869 [2024-07-13 11:38:24.518201] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:49.869 [2024-07-13 11:38:24.518372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:49.869 [2024-07-13 11:38:24.518549] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:49.869 [2024-07-13 11:38:24.518701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:49.869 pt1 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:49.869 11:38:24 
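The NOT-wrapped bdev_raid_create above is a negative check: with the passthru layer gone, the malloc bdevs still carry the superblock of the deleted volume, so creating a new raid_bdev1 directly on them is rejected with JSON-RPC error -17 ("File exists"), and the NOT helper from autotest_common.sh turns that expected failure into a pass. The same check sketched without the helper; the explicit if is illustrative, not the script's wording.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Must fail: every malloc bdev still holds the old raid_bdev1 superblock.
    if $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' \
            -n raid_bdev1; then
        echo "unexpected success creating raid_bdev1 over stale superblocks" >&2
        exit 1
    fi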
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.869 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.127 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:50.127 "name": "raid_bdev1", 00:26:50.127 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:50.127 "strip_size_kb": 0, 00:26:50.127 "state": "configuring", 00:26:50.127 "raid_level": "raid1", 00:26:50.127 "superblock": true, 00:26:50.127 "num_base_bdevs": 4, 00:26:50.127 "num_base_bdevs_discovered": 1, 00:26:50.127 "num_base_bdevs_operational": 4, 00:26:50.127 "base_bdevs_list": [ 00:26:50.127 { 00:26:50.127 "name": "pt1", 00:26:50.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:50.127 "is_configured": true, 00:26:50.127 "data_offset": 2048, 00:26:50.127 "data_size": 63488 00:26:50.127 }, 00:26:50.127 { 00:26:50.127 "name": null, 00:26:50.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:50.127 "is_configured": false, 00:26:50.127 "data_offset": 2048, 00:26:50.127 "data_size": 63488 00:26:50.127 }, 00:26:50.127 { 00:26:50.127 "name": null, 00:26:50.127 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:50.127 "is_configured": false, 00:26:50.127 "data_offset": 2048, 00:26:50.127 "data_size": 63488 00:26:50.127 }, 00:26:50.127 { 00:26:50.127 "name": null, 00:26:50.127 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:50.127 "is_configured": false, 00:26:50.127 "data_offset": 2048, 00:26:50.127 "data_size": 63488 00:26:50.127 } 00:26:50.127 ] 00:26:50.127 }' 00:26:50.127 11:38:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.127 11:38:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.059 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:26:51.059 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:51.059 [2024-07-13 11:38:25.707873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:51.059 [2024-07-13 11:38:25.708066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:51.059 [2024-07-13 11:38:25.708137] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:51.059 [2024-07-13 11:38:25.708288] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:51.059 [2024-07-13 11:38:25.708731] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:51.059 [2024-07-13 11:38:25.708879] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:26:51.059 [2024-07-13 11:38:25.709060] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:51.059 [2024-07-13 11:38:25.709182] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:51.059 pt2 00:26:51.059 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:51.317 [2024-07-13 11:38:25.963955] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.317 11:38:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.576 11:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.576 "name": "raid_bdev1", 00:26:51.576 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:51.576 "strip_size_kb": 0, 00:26:51.576 "state": "configuring", 00:26:51.576 "raid_level": "raid1", 00:26:51.576 "superblock": true, 00:26:51.576 "num_base_bdevs": 4, 00:26:51.576 "num_base_bdevs_discovered": 1, 00:26:51.576 "num_base_bdevs_operational": 4, 00:26:51.576 "base_bdevs_list": [ 00:26:51.576 { 00:26:51.576 "name": "pt1", 00:26:51.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:51.576 "is_configured": true, 00:26:51.576 "data_offset": 2048, 00:26:51.576 "data_size": 63488 00:26:51.576 }, 00:26:51.576 { 00:26:51.576 "name": null, 00:26:51.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:51.576 "is_configured": false, 00:26:51.576 "data_offset": 2048, 00:26:51.576 "data_size": 63488 00:26:51.576 }, 00:26:51.576 { 00:26:51.576 "name": null, 00:26:51.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:51.576 "is_configured": false, 00:26:51.576 "data_offset": 2048, 00:26:51.576 "data_size": 63488 00:26:51.576 }, 00:26:51.576 { 00:26:51.576 "name": null, 00:26:51.576 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:51.576 "is_configured": false, 00:26:51.576 "data_offset": 2048, 00:26:51.576 "data_size": 63488 00:26:51.576 } 00:26:51.576 ] 00:26:51.576 }' 00:26:51.576 11:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.576 11:38:26 
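The trace just above re-registers pt2 (the examine path finds the raid superblock on it and claims it), deletes pt2 again, and then re-checks the volume: raid_bdev1 must still be configuring with a single discovered base bdev, so removing a member before the volume has fully assembled rolls the discovered count back. Sketched with the same RPCs and jq filter seen in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    $rpc bdev_passthru_delete pt2

    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r '.state' <<< "$info") == configuring ]] || exit 1
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 1 ]] || exit 1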
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.143 11:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:26:52.143 11:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:52.143 11:38:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:52.402 [2024-07-13 11:38:27.000515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:52.402 [2024-07-13 11:38:27.000732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:52.402 [2024-07-13 11:38:27.000800] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:52.402 [2024-07-13 11:38:27.001129] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:52.402 [2024-07-13 11:38:27.001665] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:52.402 [2024-07-13 11:38:27.001825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:52.402 [2024-07-13 11:38:27.002022] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:52.402 [2024-07-13 11:38:27.002142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:52.402 pt2 00:26:52.402 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:52.402 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:52.402 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:52.660 [2024-07-13 11:38:27.192537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:52.660 [2024-07-13 11:38:27.192727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:52.660 [2024-07-13 11:38:27.192784] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:52.660 [2024-07-13 11:38:27.192930] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:52.660 [2024-07-13 11:38:27.193336] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:52.660 [2024-07-13 11:38:27.193476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:52.660 [2024-07-13 11:38:27.193646] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:52.660 [2024-07-13 11:38:27.193745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:52.660 pt3 00:26:52.660 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:52.660 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:52.660 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:52.660 [2024-07-13 11:38:27.384564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:52.660 [2024-07-13 11:38:27.384729] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:52.660 [2024-07-13 11:38:27.384784] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:26:52.660 [2024-07-13 11:38:27.384938] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:52.660 [2024-07-13 11:38:27.385346] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:52.660 [2024-07-13 11:38:27.385503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:52.660 [2024-07-13 11:38:27.385681] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:52.660 [2024-07-13 11:38:27.385780] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:52.660 [2024-07-13 11:38:27.385978] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:26:52.660 [2024-07-13 11:38:27.386073] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:52.660 [2024-07-13 11:38:27.386282] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:26:52.660 [2024-07-13 11:38:27.386700] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:26:52.660 [2024-07-13 11:38:27.386758] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:26:52.660 [2024-07-13 11:38:27.386951] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:52.660 pt4 00:26:52.660 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:52.660 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.661 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.919 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.919 "name": "raid_bdev1", 00:26:52.919 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:52.919 "strip_size_kb": 0, 00:26:52.919 "state": "online", 00:26:52.919 "raid_level": "raid1", 00:26:52.919 "superblock": true, 00:26:52.919 
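Re-registering the remaining passthru bdevs is all it takes to bring the volume back: for each ptN the examine path reports "raid superblock found on bdev ptN" and claims it, and once the fourth member appears raid_bdev1 is recreated and brought online without another bdev_raid_create call; the state dump just below confirms online with all four base bdevs discovered. A sketch of that reassembly step, written as the full loop over all four members for clarity even though pt1 was already registered earlier in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # No bdev_raid_create here: the superblock on each member lets the
    # examine callback reassemble raid_bdev1 on its own.
    for i in 1 2 3 4; do
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    state=$($rpc bdev_raid_get_bdevs all \
            | jq -r '.[] | select(.name == "raid_bdev1").state')
    [[ $state == online ]] || exit 1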
"num_base_bdevs": 4, 00:26:52.919 "num_base_bdevs_discovered": 4, 00:26:52.919 "num_base_bdevs_operational": 4, 00:26:52.919 "base_bdevs_list": [ 00:26:52.919 { 00:26:52.919 "name": "pt1", 00:26:52.919 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:52.919 "is_configured": true, 00:26:52.919 "data_offset": 2048, 00:26:52.919 "data_size": 63488 00:26:52.919 }, 00:26:52.919 { 00:26:52.919 "name": "pt2", 00:26:52.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:52.919 "is_configured": true, 00:26:52.919 "data_offset": 2048, 00:26:52.919 "data_size": 63488 00:26:52.919 }, 00:26:52.919 { 00:26:52.919 "name": "pt3", 00:26:52.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:52.919 "is_configured": true, 00:26:52.919 "data_offset": 2048, 00:26:52.919 "data_size": 63488 00:26:52.919 }, 00:26:52.919 { 00:26:52.919 "name": "pt4", 00:26:52.919 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:52.919 "is_configured": true, 00:26:52.919 "data_offset": 2048, 00:26:52.919 "data_size": 63488 00:26:52.919 } 00:26:52.919 ] 00:26:52.919 }' 00:26:52.919 11:38:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.919 11:38:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.486 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:26:53.486 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:53.486 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:53.486 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:53.486 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:53.486 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:53.486 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:53.486 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:53.745 [2024-07-13 11:38:28.441046] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:53.745 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:53.745 "name": "raid_bdev1", 00:26:53.745 "aliases": [ 00:26:53.745 "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89" 00:26:53.745 ], 00:26:53.745 "product_name": "Raid Volume", 00:26:53.745 "block_size": 512, 00:26:53.745 "num_blocks": 63488, 00:26:53.745 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:53.745 "assigned_rate_limits": { 00:26:53.745 "rw_ios_per_sec": 0, 00:26:53.745 "rw_mbytes_per_sec": 0, 00:26:53.745 "r_mbytes_per_sec": 0, 00:26:53.745 "w_mbytes_per_sec": 0 00:26:53.745 }, 00:26:53.745 "claimed": false, 00:26:53.745 "zoned": false, 00:26:53.745 "supported_io_types": { 00:26:53.745 "read": true, 00:26:53.745 "write": true, 00:26:53.745 "unmap": false, 00:26:53.745 "flush": false, 00:26:53.745 "reset": true, 00:26:53.745 "nvme_admin": false, 00:26:53.745 "nvme_io": false, 00:26:53.745 "nvme_io_md": false, 00:26:53.745 "write_zeroes": true, 00:26:53.745 "zcopy": false, 00:26:53.745 "get_zone_info": false, 00:26:53.745 "zone_management": false, 00:26:53.745 "zone_append": false, 00:26:53.745 "compare": false, 00:26:53.745 "compare_and_write": false, 00:26:53.745 "abort": false, 00:26:53.745 "seek_hole": false, 
00:26:53.745 "seek_data": false, 00:26:53.745 "copy": false, 00:26:53.745 "nvme_iov_md": false 00:26:53.745 }, 00:26:53.745 "memory_domains": [ 00:26:53.745 { 00:26:53.745 "dma_device_id": "system", 00:26:53.745 "dma_device_type": 1 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.745 "dma_device_type": 2 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "dma_device_id": "system", 00:26:53.745 "dma_device_type": 1 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.745 "dma_device_type": 2 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "dma_device_id": "system", 00:26:53.745 "dma_device_type": 1 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.745 "dma_device_type": 2 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "dma_device_id": "system", 00:26:53.745 "dma_device_type": 1 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:53.745 "dma_device_type": 2 00:26:53.745 } 00:26:53.745 ], 00:26:53.745 "driver_specific": { 00:26:53.745 "raid": { 00:26:53.745 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:53.745 "strip_size_kb": 0, 00:26:53.745 "state": "online", 00:26:53.745 "raid_level": "raid1", 00:26:53.745 "superblock": true, 00:26:53.745 "num_base_bdevs": 4, 00:26:53.745 "num_base_bdevs_discovered": 4, 00:26:53.745 "num_base_bdevs_operational": 4, 00:26:53.745 "base_bdevs_list": [ 00:26:53.745 { 00:26:53.745 "name": "pt1", 00:26:53.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:53.745 "is_configured": true, 00:26:53.745 "data_offset": 2048, 00:26:53.745 "data_size": 63488 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "name": "pt2", 00:26:53.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:53.745 "is_configured": true, 00:26:53.745 "data_offset": 2048, 00:26:53.745 "data_size": 63488 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "name": "pt3", 00:26:53.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:53.745 "is_configured": true, 00:26:53.745 "data_offset": 2048, 00:26:53.745 "data_size": 63488 00:26:53.745 }, 00:26:53.745 { 00:26:53.745 "name": "pt4", 00:26:53.745 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:53.745 "is_configured": true, 00:26:53.745 "data_offset": 2048, 00:26:53.745 "data_size": 63488 00:26:53.745 } 00:26:53.745 ] 00:26:53.745 } 00:26:53.745 } 00:26:53.745 }' 00:26:53.745 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:54.005 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:54.005 pt2 00:26:54.005 pt3 00:26:54.005 pt4' 00:26:54.005 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:54.005 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:54.005 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:54.005 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:54.005 "name": "pt1", 00:26:54.005 "aliases": [ 00:26:54.005 "00000000-0000-0000-0000-000000000001" 00:26:54.005 ], 00:26:54.005 "product_name": "passthru", 00:26:54.005 "block_size": 512, 00:26:54.005 "num_blocks": 65536, 00:26:54.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:54.005 "assigned_rate_limits": { 
00:26:54.005 "rw_ios_per_sec": 0, 00:26:54.005 "rw_mbytes_per_sec": 0, 00:26:54.005 "r_mbytes_per_sec": 0, 00:26:54.005 "w_mbytes_per_sec": 0 00:26:54.005 }, 00:26:54.005 "claimed": true, 00:26:54.005 "claim_type": "exclusive_write", 00:26:54.005 "zoned": false, 00:26:54.005 "supported_io_types": { 00:26:54.005 "read": true, 00:26:54.005 "write": true, 00:26:54.005 "unmap": true, 00:26:54.005 "flush": true, 00:26:54.005 "reset": true, 00:26:54.005 "nvme_admin": false, 00:26:54.005 "nvme_io": false, 00:26:54.005 "nvme_io_md": false, 00:26:54.005 "write_zeroes": true, 00:26:54.005 "zcopy": true, 00:26:54.005 "get_zone_info": false, 00:26:54.005 "zone_management": false, 00:26:54.005 "zone_append": false, 00:26:54.005 "compare": false, 00:26:54.005 "compare_and_write": false, 00:26:54.005 "abort": true, 00:26:54.005 "seek_hole": false, 00:26:54.005 "seek_data": false, 00:26:54.005 "copy": true, 00:26:54.005 "nvme_iov_md": false 00:26:54.005 }, 00:26:54.005 "memory_domains": [ 00:26:54.005 { 00:26:54.005 "dma_device_id": "system", 00:26:54.005 "dma_device_type": 1 00:26:54.005 }, 00:26:54.005 { 00:26:54.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.005 "dma_device_type": 2 00:26:54.005 } 00:26:54.005 ], 00:26:54.005 "driver_specific": { 00:26:54.005 "passthru": { 00:26:54.005 "name": "pt1", 00:26:54.005 "base_bdev_name": "malloc1" 00:26:54.005 } 00:26:54.005 } 00:26:54.005 }' 00:26:54.005 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:54.005 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:54.264 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:54.264 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.264 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.264 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:54.264 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:54.264 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:54.264 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:54.264 11:38:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:54.522 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:54.522 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:54.522 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:54.522 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:54.522 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:54.781 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:54.781 "name": "pt2", 00:26:54.781 "aliases": [ 00:26:54.781 "00000000-0000-0000-0000-000000000002" 00:26:54.781 ], 00:26:54.781 "product_name": "passthru", 00:26:54.781 "block_size": 512, 00:26:54.781 "num_blocks": 65536, 00:26:54.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:54.781 "assigned_rate_limits": { 00:26:54.781 "rw_ios_per_sec": 0, 00:26:54.781 "rw_mbytes_per_sec": 0, 00:26:54.781 "r_mbytes_per_sec": 0, 00:26:54.781 "w_mbytes_per_sec": 0 00:26:54.781 
}, 00:26:54.781 "claimed": true, 00:26:54.781 "claim_type": "exclusive_write", 00:26:54.781 "zoned": false, 00:26:54.781 "supported_io_types": { 00:26:54.781 "read": true, 00:26:54.781 "write": true, 00:26:54.781 "unmap": true, 00:26:54.781 "flush": true, 00:26:54.781 "reset": true, 00:26:54.781 "nvme_admin": false, 00:26:54.781 "nvme_io": false, 00:26:54.781 "nvme_io_md": false, 00:26:54.781 "write_zeroes": true, 00:26:54.781 "zcopy": true, 00:26:54.781 "get_zone_info": false, 00:26:54.781 "zone_management": false, 00:26:54.781 "zone_append": false, 00:26:54.781 "compare": false, 00:26:54.781 "compare_and_write": false, 00:26:54.781 "abort": true, 00:26:54.781 "seek_hole": false, 00:26:54.781 "seek_data": false, 00:26:54.781 "copy": true, 00:26:54.781 "nvme_iov_md": false 00:26:54.781 }, 00:26:54.781 "memory_domains": [ 00:26:54.781 { 00:26:54.781 "dma_device_id": "system", 00:26:54.781 "dma_device_type": 1 00:26:54.781 }, 00:26:54.781 { 00:26:54.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:54.781 "dma_device_type": 2 00:26:54.781 } 00:26:54.781 ], 00:26:54.781 "driver_specific": { 00:26:54.781 "passthru": { 00:26:54.781 "name": "pt2", 00:26:54.781 "base_bdev_name": "malloc2" 00:26:54.781 } 00:26:54.781 } 00:26:54.781 }' 00:26:54.781 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:54.781 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:54.781 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:54.781 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.781 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:54.781 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:54.781 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.040 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.040 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:55.040 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.040 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.040 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:55.040 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:55.040 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:55.040 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:55.299 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:55.299 "name": "pt3", 00:26:55.299 "aliases": [ 00:26:55.299 "00000000-0000-0000-0000-000000000003" 00:26:55.299 ], 00:26:55.299 "product_name": "passthru", 00:26:55.299 "block_size": 512, 00:26:55.299 "num_blocks": 65536, 00:26:55.299 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:55.299 "assigned_rate_limits": { 00:26:55.299 "rw_ios_per_sec": 0, 00:26:55.299 "rw_mbytes_per_sec": 0, 00:26:55.299 "r_mbytes_per_sec": 0, 00:26:55.299 "w_mbytes_per_sec": 0 00:26:55.299 }, 00:26:55.299 "claimed": true, 00:26:55.299 "claim_type": "exclusive_write", 00:26:55.299 "zoned": false, 00:26:55.299 "supported_io_types": { 
00:26:55.299 "read": true, 00:26:55.299 "write": true, 00:26:55.299 "unmap": true, 00:26:55.299 "flush": true, 00:26:55.299 "reset": true, 00:26:55.299 "nvme_admin": false, 00:26:55.299 "nvme_io": false, 00:26:55.299 "nvme_io_md": false, 00:26:55.299 "write_zeroes": true, 00:26:55.299 "zcopy": true, 00:26:55.299 "get_zone_info": false, 00:26:55.299 "zone_management": false, 00:26:55.299 "zone_append": false, 00:26:55.299 "compare": false, 00:26:55.299 "compare_and_write": false, 00:26:55.299 "abort": true, 00:26:55.299 "seek_hole": false, 00:26:55.299 "seek_data": false, 00:26:55.299 "copy": true, 00:26:55.299 "nvme_iov_md": false 00:26:55.299 }, 00:26:55.299 "memory_domains": [ 00:26:55.300 { 00:26:55.300 "dma_device_id": "system", 00:26:55.300 "dma_device_type": 1 00:26:55.300 }, 00:26:55.300 { 00:26:55.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:55.300 "dma_device_type": 2 00:26:55.300 } 00:26:55.300 ], 00:26:55.300 "driver_specific": { 00:26:55.300 "passthru": { 00:26:55.300 "name": "pt3", 00:26:55.300 "base_bdev_name": "malloc3" 00:26:55.300 } 00:26:55.300 } 00:26:55.300 }' 00:26:55.300 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.300 11:38:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:55.300 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:55.300 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.558 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:55.558 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:55.558 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.558 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:55.558 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:55.558 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.816 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:55.816 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:55.816 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:55.816 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:55.816 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:56.075 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:56.075 "name": "pt4", 00:26:56.075 "aliases": [ 00:26:56.075 "00000000-0000-0000-0000-000000000004" 00:26:56.075 ], 00:26:56.075 "product_name": "passthru", 00:26:56.075 "block_size": 512, 00:26:56.075 "num_blocks": 65536, 00:26:56.075 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:56.075 "assigned_rate_limits": { 00:26:56.075 "rw_ios_per_sec": 0, 00:26:56.075 "rw_mbytes_per_sec": 0, 00:26:56.075 "r_mbytes_per_sec": 0, 00:26:56.075 "w_mbytes_per_sec": 0 00:26:56.075 }, 00:26:56.075 "claimed": true, 00:26:56.075 "claim_type": "exclusive_write", 00:26:56.075 "zoned": false, 00:26:56.075 "supported_io_types": { 00:26:56.075 "read": true, 00:26:56.075 "write": true, 00:26:56.075 "unmap": true, 00:26:56.075 "flush": true, 00:26:56.075 "reset": true, 00:26:56.075 
"nvme_admin": false, 00:26:56.075 "nvme_io": false, 00:26:56.075 "nvme_io_md": false, 00:26:56.075 "write_zeroes": true, 00:26:56.075 "zcopy": true, 00:26:56.075 "get_zone_info": false, 00:26:56.075 "zone_management": false, 00:26:56.075 "zone_append": false, 00:26:56.075 "compare": false, 00:26:56.075 "compare_and_write": false, 00:26:56.075 "abort": true, 00:26:56.075 "seek_hole": false, 00:26:56.075 "seek_data": false, 00:26:56.075 "copy": true, 00:26:56.075 "nvme_iov_md": false 00:26:56.075 }, 00:26:56.075 "memory_domains": [ 00:26:56.075 { 00:26:56.075 "dma_device_id": "system", 00:26:56.075 "dma_device_type": 1 00:26:56.075 }, 00:26:56.075 { 00:26:56.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.075 "dma_device_type": 2 00:26:56.075 } 00:26:56.075 ], 00:26:56.075 "driver_specific": { 00:26:56.075 "passthru": { 00:26:56.075 "name": "pt4", 00:26:56.075 "base_bdev_name": "malloc4" 00:26:56.075 } 00:26:56.075 } 00:26:56.075 }' 00:26:56.075 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.075 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:56.075 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:56.075 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.075 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:56.333 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:56.334 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:56.334 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:56.334 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:56.334 11:38:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:56.334 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:56.334 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:56.334 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:56.334 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:56.592 [2024-07-13 11:38:31.306161] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:56.592 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f7c3b131-a95d-4b5a-85cf-ea54e2cefc89 '!=' f7c3b131-a95d-4b5a-85cf-ea54e2cefc89 ']' 00:26:56.592 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:26:56.592 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:56.592 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:56.592 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:56.851 [2024-07-13 11:38:31.578024] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:56.851 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:56.852 
11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:56.852 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.110 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:57.110 "name": "raid_bdev1", 00:26:57.110 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:57.110 "strip_size_kb": 0, 00:26:57.110 "state": "online", 00:26:57.110 "raid_level": "raid1", 00:26:57.110 "superblock": true, 00:26:57.110 "num_base_bdevs": 4, 00:26:57.110 "num_base_bdevs_discovered": 3, 00:26:57.110 "num_base_bdevs_operational": 3, 00:26:57.110 "base_bdevs_list": [ 00:26:57.110 { 00:26:57.110 "name": null, 00:26:57.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:57.110 "is_configured": false, 00:26:57.110 "data_offset": 2048, 00:26:57.110 "data_size": 63488 00:26:57.110 }, 00:26:57.110 { 00:26:57.110 "name": "pt2", 00:26:57.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:57.110 "is_configured": true, 00:26:57.110 "data_offset": 2048, 00:26:57.110 "data_size": 63488 00:26:57.110 }, 00:26:57.110 { 00:26:57.110 "name": "pt3", 00:26:57.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:57.110 "is_configured": true, 00:26:57.110 "data_offset": 2048, 00:26:57.110 "data_size": 63488 00:26:57.110 }, 00:26:57.110 { 00:26:57.110 "name": "pt4", 00:26:57.110 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:57.110 "is_configured": true, 00:26:57.110 "data_offset": 2048, 00:26:57.110 "data_size": 63488 00:26:57.110 } 00:26:57.110 ] 00:26:57.110 }' 00:26:57.110 11:38:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:57.110 11:38:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.046 11:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:58.046 [2024-07-13 11:38:32.658200] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:58.046 [2024-07-13 11:38:32.658354] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:58.046 [2024-07-13 11:38:32.658491] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:58.046 [2024-07-13 11:38:32.658640] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:58.046 [2024-07-13 11:38:32.658749] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:26:58.046 11:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:58.046 11:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:26:58.314 11:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:26:58.314 11:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:26:58.314 11:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:26:58.314 11:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:58.314 11:38:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:58.596 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:58.596 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:58.596 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:58.860 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:58.860 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:58.860 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:58.860 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:26:58.860 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:26:58.860 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:26:58.860 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:58.860 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:59.123 [2024-07-13 11:38:33.710368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:59.123 [2024-07-13 11:38:33.710569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.123 [2024-07-13 11:38:33.710628] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:26:59.123 [2024-07-13 11:38:33.710976] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.123 [2024-07-13 11:38:33.713090] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.123 [2024-07-13 11:38:33.713239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:59.123 [2024-07-13 11:38:33.713431] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:59.123 [2024-07-13 11:38:33.713592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:59.123 pt2 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.123 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.381 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.381 "name": "raid_bdev1", 00:26:59.381 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:26:59.381 "strip_size_kb": 0, 00:26:59.381 "state": "configuring", 00:26:59.381 "raid_level": "raid1", 00:26:59.381 "superblock": true, 00:26:59.381 "num_base_bdevs": 4, 00:26:59.381 "num_base_bdevs_discovered": 1, 00:26:59.381 "num_base_bdevs_operational": 3, 00:26:59.381 "base_bdevs_list": [ 00:26:59.381 { 00:26:59.381 "name": null, 00:26:59.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.381 "is_configured": false, 00:26:59.381 "data_offset": 2048, 00:26:59.381 "data_size": 63488 00:26:59.381 }, 00:26:59.381 { 00:26:59.381 "name": "pt2", 00:26:59.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:59.381 "is_configured": true, 00:26:59.381 "data_offset": 2048, 00:26:59.381 "data_size": 63488 00:26:59.381 }, 00:26:59.381 { 00:26:59.381 "name": null, 00:26:59.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:59.381 "is_configured": false, 00:26:59.381 "data_offset": 2048, 00:26:59.381 "data_size": 63488 00:26:59.381 }, 00:26:59.381 { 00:26:59.381 "name": null, 00:26:59.381 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:59.381 "is_configured": false, 00:26:59.381 "data_offset": 2048, 00:26:59.381 "data_size": 63488 00:26:59.381 } 00:26:59.381 ] 00:26:59.381 }' 00:26:59.381 11:38:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.381 11:38:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.947 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:26:59.948 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:26:59.948 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:00.210 [2024-07-13 11:38:34.818673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:00.210 [2024-07-13 11:38:34.818935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:00.210 [2024-07-13 11:38:34.819080] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000c380 00:27:00.210 [2024-07-13 11:38:34.819209] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:00.210 [2024-07-13 11:38:34.819857] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:00.210 [2024-07-13 11:38:34.820010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:00.210 [2024-07-13 11:38:34.820214] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:00.210 [2024-07-13 11:38:34.820339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:00.210 pt3 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.210 11:38:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.471 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:00.471 "name": "raid_bdev1", 00:27:00.471 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:27:00.471 "strip_size_kb": 0, 00:27:00.471 "state": "configuring", 00:27:00.471 "raid_level": "raid1", 00:27:00.471 "superblock": true, 00:27:00.471 "num_base_bdevs": 4, 00:27:00.471 "num_base_bdevs_discovered": 2, 00:27:00.471 "num_base_bdevs_operational": 3, 00:27:00.471 "base_bdevs_list": [ 00:27:00.471 { 00:27:00.471 "name": null, 00:27:00.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.471 "is_configured": false, 00:27:00.471 "data_offset": 2048, 00:27:00.471 "data_size": 63488 00:27:00.471 }, 00:27:00.471 { 00:27:00.471 "name": "pt2", 00:27:00.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:00.471 "is_configured": true, 00:27:00.471 "data_offset": 2048, 00:27:00.471 "data_size": 63488 00:27:00.471 }, 00:27:00.471 { 00:27:00.471 "name": "pt3", 00:27:00.471 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:00.471 "is_configured": true, 00:27:00.471 "data_offset": 2048, 00:27:00.471 "data_size": 63488 00:27:00.471 }, 00:27:00.471 { 00:27:00.471 "name": null, 00:27:00.471 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:00.471 "is_configured": false, 00:27:00.471 "data_offset": 2048, 00:27:00.471 "data_size": 63488 00:27:00.471 } 00:27:00.471 ] 00:27:00.471 }' 00:27:00.471 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:27:00.471 11:38:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.037 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:27:01.037 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:01.037 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:27:01.037 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:01.295 [2024-07-13 11:38:35.978923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:01.295 [2024-07-13 11:38:35.980492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:01.295 [2024-07-13 11:38:35.980782] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:27:01.295 [2024-07-13 11:38:35.981012] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:01.295 [2024-07-13 11:38:35.982145] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:01.295 [2024-07-13 11:38:35.982459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:01.295 [2024-07-13 11:38:35.982943] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:01.295 [2024-07-13 11:38:35.983212] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:01.295 [2024-07-13 11:38:35.983767] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:27:01.295 [2024-07-13 11:38:35.983997] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:01.295 [2024-07-13 11:38:35.984307] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:27:01.295 [2024-07-13 11:38:35.985282] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:27:01.295 [2024-07-13 11:38:35.985523] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:27:01.295 pt4 00:27:01.295 [2024-07-13 11:38:35.986152] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.295 11:38:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.295 11:38:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.553 11:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.553 "name": "raid_bdev1", 00:27:01.553 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:27:01.553 "strip_size_kb": 0, 00:27:01.553 "state": "online", 00:27:01.553 "raid_level": "raid1", 00:27:01.553 "superblock": true, 00:27:01.553 "num_base_bdevs": 4, 00:27:01.553 "num_base_bdevs_discovered": 3, 00:27:01.553 "num_base_bdevs_operational": 3, 00:27:01.553 "base_bdevs_list": [ 00:27:01.553 { 00:27:01.553 "name": null, 00:27:01.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.553 "is_configured": false, 00:27:01.553 "data_offset": 2048, 00:27:01.553 "data_size": 63488 00:27:01.553 }, 00:27:01.553 { 00:27:01.553 "name": "pt2", 00:27:01.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:01.553 "is_configured": true, 00:27:01.553 "data_offset": 2048, 00:27:01.553 "data_size": 63488 00:27:01.553 }, 00:27:01.553 { 00:27:01.553 "name": "pt3", 00:27:01.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:01.553 "is_configured": true, 00:27:01.553 "data_offset": 2048, 00:27:01.553 "data_size": 63488 00:27:01.553 }, 00:27:01.553 { 00:27:01.553 "name": "pt4", 00:27:01.553 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:01.553 "is_configured": true, 00:27:01.553 "data_offset": 2048, 00:27:01.553 "data_size": 63488 00:27:01.553 } 00:27:01.553 ] 00:27:01.553 }' 00:27:01.553 11:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.553 11:38:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.487 11:38:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:02.487 [2024-07-13 11:38:37.181193] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.487 [2024-07-13 11:38:37.181341] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.487 [2024-07-13 11:38:37.181527] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.487 [2024-07-13 11:38:37.181734] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.487 [2024-07-13 11:38:37.181844] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:27:02.487 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.487 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:02.744 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:02.744 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:02.744 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:27:02.744 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:27:02.744 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt4 00:27:03.002 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:03.258 [2024-07-13 11:38:37.839006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:03.258 [2024-07-13 11:38:37.839227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:03.258 [2024-07-13 11:38:37.839297] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:27:03.258 [2024-07-13 11:38:37.839610] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:03.258 [2024-07-13 11:38:37.842017] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:03.258 [2024-07-13 11:38:37.842203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:03.258 [2024-07-13 11:38:37.842403] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:03.258 [2024-07-13 11:38:37.842562] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:03.258 [2024-07-13 11:38:37.842804] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:03.258 [2024-07-13 11:38:37.842955] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:03.258 [2024-07-13 11:38:37.843070] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state configuring 00:27:03.258 [2024-07-13 11:38:37.843214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:03.258 [2024-07-13 11:38:37.843453] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:03.258 pt1 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:03.258 11:38:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:03.515 11:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:27:03.515 "name": "raid_bdev1", 00:27:03.515 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:27:03.515 "strip_size_kb": 0, 00:27:03.515 "state": "configuring", 00:27:03.515 "raid_level": "raid1", 00:27:03.515 "superblock": true, 00:27:03.515 "num_base_bdevs": 4, 00:27:03.515 "num_base_bdevs_discovered": 2, 00:27:03.515 "num_base_bdevs_operational": 3, 00:27:03.515 "base_bdevs_list": [ 00:27:03.515 { 00:27:03.515 "name": null, 00:27:03.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:03.515 "is_configured": false, 00:27:03.515 "data_offset": 2048, 00:27:03.515 "data_size": 63488 00:27:03.515 }, 00:27:03.515 { 00:27:03.515 "name": "pt2", 00:27:03.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:03.515 "is_configured": true, 00:27:03.515 "data_offset": 2048, 00:27:03.515 "data_size": 63488 00:27:03.515 }, 00:27:03.515 { 00:27:03.515 "name": "pt3", 00:27:03.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:03.515 "is_configured": true, 00:27:03.515 "data_offset": 2048, 00:27:03.515 "data_size": 63488 00:27:03.515 }, 00:27:03.515 { 00:27:03.515 "name": null, 00:27:03.515 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:03.515 "is_configured": false, 00:27:03.515 "data_offset": 2048, 00:27:03.515 "data_size": 63488 00:27:03.515 } 00:27:03.515 ] 00:27:03.515 }' 00:27:03.515 11:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:03.515 11:38:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.079 11:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:27:04.079 11:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:04.336 11:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:27:04.336 11:38:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:04.593 [2024-07-13 11:38:39.203786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:04.593 [2024-07-13 11:38:39.203994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.593 [2024-07-13 11:38:39.204056] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:27:04.593 [2024-07-13 11:38:39.204196] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.593 [2024-07-13 11:38:39.204640] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.593 [2024-07-13 11:38:39.204797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:04.593 [2024-07-13 11:38:39.205139] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:04.593 [2024-07-13 11:38:39.205195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:04.593 [2024-07-13 11:38:39.205510] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:27:04.593 [2024-07-13 11:38:39.205789] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:04.593 [2024-07-13 11:38:39.206006] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:27:04.593 [2024-07-13 11:38:39.206526] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:27:04.593 [2024-07-13 11:38:39.206666] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:27:04.593 [2024-07-13 11:38:39.206936] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:04.593 pt4 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.593 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.850 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.850 "name": "raid_bdev1", 00:27:04.850 "uuid": "f7c3b131-a95d-4b5a-85cf-ea54e2cefc89", 00:27:04.850 "strip_size_kb": 0, 00:27:04.850 "state": "online", 00:27:04.850 "raid_level": "raid1", 00:27:04.850 "superblock": true, 00:27:04.850 "num_base_bdevs": 4, 00:27:04.850 "num_base_bdevs_discovered": 3, 00:27:04.850 "num_base_bdevs_operational": 3, 00:27:04.850 "base_bdevs_list": [ 00:27:04.850 { 00:27:04.850 "name": null, 00:27:04.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.850 "is_configured": false, 00:27:04.850 "data_offset": 2048, 00:27:04.850 "data_size": 63488 00:27:04.850 }, 00:27:04.850 { 00:27:04.850 "name": "pt2", 00:27:04.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.850 "is_configured": true, 00:27:04.850 "data_offset": 2048, 00:27:04.850 "data_size": 63488 00:27:04.850 }, 00:27:04.850 { 00:27:04.850 "name": "pt3", 00:27:04.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:04.850 "is_configured": true, 00:27:04.850 "data_offset": 2048, 00:27:04.850 "data_size": 63488 00:27:04.850 }, 00:27:04.850 { 00:27:04.850 "name": "pt4", 00:27:04.850 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:04.850 "is_configured": true, 00:27:04.850 "data_offset": 2048, 00:27:04.850 "data_size": 63488 00:27:04.850 } 00:27:04.850 ] 00:27:04.850 }' 00:27:04.850 11:38:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.850 11:38:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.413 11:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs online 00:27:05.413 11:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:05.671 11:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:05.671 11:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:05.671 11:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:05.929 [2024-07-13 11:38:40.635008] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' f7c3b131-a95d-4b5a-85cf-ea54e2cefc89 '!=' f7c3b131-a95d-4b5a-85cf-ea54e2cefc89 ']' 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 143395 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 143395 ']' 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 143395 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143395 00:27:05.929 killing process with pid 143395 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143395' 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 143395 00:27:05.929 11:38:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 143395 00:27:05.929 [2024-07-13 11:38:40.670819] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:05.929 [2024-07-13 11:38:40.670917] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:05.929 [2024-07-13 11:38:40.671031] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:05.929 [2024-07-13 11:38:40.671045] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:27:06.187 [2024-07-13 11:38:40.923405] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:07.120 ************************************ 00:27:07.120 END TEST raid_superblock_test 00:27:07.120 ************************************ 00:27:07.120 11:38:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:27:07.120 00:27:07.120 real 0m26.519s 00:27:07.120 user 0m49.934s 00:27:07.120 sys 0m2.901s 00:27:07.120 11:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:07.120 11:38:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.379 11:38:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:07.379 11:38:41 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:27:07.379 11:38:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 
']' 00:27:07.379 11:38:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:07.379 11:38:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:07.379 ************************************ 00:27:07.379 START TEST raid_read_error_test 00:27:07.379 ************************************ 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.szx6GeCqKs 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=144311 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # 
waitforlisten 144311 /var/tmp/spdk-raid.sock 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 144311 ']' 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:07.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:07.379 11:38:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:07.379 [2024-07-13 11:38:41.984366] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:07.379 [2024-07-13 11:38:41.984591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144311 ] 00:27:07.637 [2024-07-13 11:38:42.157445] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.637 [2024-07-13 11:38:42.360644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.895 [2024-07-13 11:38:42.523211] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:08.153 11:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.153 11:38:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:27:08.153 11:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:08.153 11:38:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:08.411 BaseBdev1_malloc 00:27:08.411 11:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:08.669 true 00:27:08.669 11:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:08.927 [2024-07-13 11:38:43.488401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:08.927 [2024-07-13 11:38:43.488486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:08.927 [2024-07-13 11:38:43.488519] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:08.927 [2024-07-13 11:38:43.488538] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.927 [2024-07-13 11:38:43.490309] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:08.927 [2024-07-13 11:38:43.490356] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:27:08.927 BaseBdev1 00:27:08.927 11:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:08.927 11:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:09.185 BaseBdev2_malloc 00:27:09.185 11:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:09.185 true 00:27:09.185 11:38:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:09.443 [2024-07-13 11:38:44.093443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:09.443 [2024-07-13 11:38:44.093526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:09.444 [2024-07-13 11:38:44.093562] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:09.444 [2024-07-13 11:38:44.093582] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:09.444 [2024-07-13 11:38:44.095629] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:09.444 [2024-07-13 11:38:44.095675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:09.444 BaseBdev2 00:27:09.444 11:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:09.444 11:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:09.702 BaseBdev3_malloc 00:27:09.702 11:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:09.960 true 00:27:09.960 11:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:10.219 [2024-07-13 11:38:44.851548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:10.219 [2024-07-13 11:38:44.851628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.219 [2024-07-13 11:38:44.851659] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:10.219 [2024-07-13 11:38:44.851682] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.219 [2024-07-13 11:38:44.853443] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.219 [2024-07-13 11:38:44.853491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:10.219 BaseBdev3 00:27:10.219 11:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:10.219 11:38:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:10.478 BaseBdev4_malloc 00:27:10.478 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:10.737 true 00:27:10.737 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:10.737 [2024-07-13 11:38:45.464267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:10.737 [2024-07-13 11:38:45.464352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.737 [2024-07-13 11:38:45.464385] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:10.737 [2024-07-13 11:38:45.464410] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.737 [2024-07-13 11:38:45.466480] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.737 [2024-07-13 11:38:45.466546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:10.737 BaseBdev4 00:27:10.737 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:10.995 [2024-07-13 11:38:45.656335] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:10.995 [2024-07-13 11:38:45.658167] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:10.995 [2024-07-13 11:38:45.658268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:10.995 [2024-07-13 11:38:45.658336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:10.995 [2024-07-13 11:38:45.658616] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:27:10.995 [2024-07-13 11:38:45.658631] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:10.995 [2024-07-13 11:38:45.658750] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:10.995 [2024-07-13 11:38:45.659149] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:27:10.995 [2024-07-13 11:38:45.659165] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:27:10.995 [2024-07-13 11:38:45.659305] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.995 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:10.995 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:10.995 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:10.995 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:10.995 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:10.995 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:10.995 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:10.995 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:10.996 11:38:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:10.996 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:10.996 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:10.996 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.254 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.254 "name": "raid_bdev1", 00:27:11.254 "uuid": "b34db998-62e7-4ce3-96b9-85fd2069526f", 00:27:11.254 "strip_size_kb": 0, 00:27:11.254 "state": "online", 00:27:11.254 "raid_level": "raid1", 00:27:11.254 "superblock": true, 00:27:11.254 "num_base_bdevs": 4, 00:27:11.254 "num_base_bdevs_discovered": 4, 00:27:11.254 "num_base_bdevs_operational": 4, 00:27:11.254 "base_bdevs_list": [ 00:27:11.254 { 00:27:11.254 "name": "BaseBdev1", 00:27:11.254 "uuid": "cbb92407-9335-5e07-89f4-6eaaaf605656", 00:27:11.254 "is_configured": true, 00:27:11.254 "data_offset": 2048, 00:27:11.254 "data_size": 63488 00:27:11.254 }, 00:27:11.254 { 00:27:11.254 "name": "BaseBdev2", 00:27:11.254 "uuid": "d3df089e-7a23-5c88-b95c-7d94fb2c89bf", 00:27:11.254 "is_configured": true, 00:27:11.254 "data_offset": 2048, 00:27:11.254 "data_size": 63488 00:27:11.254 }, 00:27:11.254 { 00:27:11.254 "name": "BaseBdev3", 00:27:11.254 "uuid": "63337b35-c764-5dec-a162-82e50d9c846b", 00:27:11.254 "is_configured": true, 00:27:11.254 "data_offset": 2048, 00:27:11.254 "data_size": 63488 00:27:11.254 }, 00:27:11.254 { 00:27:11.254 "name": "BaseBdev4", 00:27:11.254 "uuid": "14715293-c71a-57a3-8603-dc82d64f9a60", 00:27:11.254 "is_configured": true, 00:27:11.254 "data_offset": 2048, 00:27:11.254 "data_size": 63488 00:27:11.254 } 00:27:11.254 ] 00:27:11.254 }' 00:27:11.254 11:38:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.254 11:38:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.189 11:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:12.189 11:38:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:12.189 [2024-07-13 11:38:46.679402] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:13.126 11:38:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:13.126 11:38:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.385 11:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:13.385 "name": "raid_bdev1", 00:27:13.385 "uuid": "b34db998-62e7-4ce3-96b9-85fd2069526f", 00:27:13.385 "strip_size_kb": 0, 00:27:13.385 "state": "online", 00:27:13.385 "raid_level": "raid1", 00:27:13.385 "superblock": true, 00:27:13.385 "num_base_bdevs": 4, 00:27:13.385 "num_base_bdevs_discovered": 4, 00:27:13.385 "num_base_bdevs_operational": 4, 00:27:13.385 "base_bdevs_list": [ 00:27:13.385 { 00:27:13.385 "name": "BaseBdev1", 00:27:13.385 "uuid": "cbb92407-9335-5e07-89f4-6eaaaf605656", 00:27:13.385 "is_configured": true, 00:27:13.385 "data_offset": 2048, 00:27:13.385 "data_size": 63488 00:27:13.385 }, 00:27:13.385 { 00:27:13.385 "name": "BaseBdev2", 00:27:13.385 "uuid": "d3df089e-7a23-5c88-b95c-7d94fb2c89bf", 00:27:13.385 "is_configured": true, 00:27:13.385 "data_offset": 2048, 00:27:13.385 "data_size": 63488 00:27:13.385 }, 00:27:13.385 { 00:27:13.385 "name": "BaseBdev3", 00:27:13.385 "uuid": "63337b35-c764-5dec-a162-82e50d9c846b", 00:27:13.385 "is_configured": true, 00:27:13.385 "data_offset": 2048, 00:27:13.385 "data_size": 63488 00:27:13.385 }, 00:27:13.385 { 00:27:13.385 "name": "BaseBdev4", 00:27:13.385 "uuid": "14715293-c71a-57a3-8603-dc82d64f9a60", 00:27:13.385 "is_configured": true, 00:27:13.385 "data_offset": 2048, 00:27:13.385 "data_size": 63488 00:27:13.385 } 00:27:13.385 ] 00:27:13.385 }' 00:27:13.385 11:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:13.385 11:38:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.322 11:38:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:14.322 [2024-07-13 11:38:49.012783] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:14.322 [2024-07-13 11:38:49.012846] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:14.322 [2024-07-13 11:38:49.015447] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:14.322 [2024-07-13 11:38:49.015498] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:14.322 [2024-07-13 11:38:49.015605] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:14.322 [2024-07-13 11:38:49.015615] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:27:14.322 0 00:27:14.322 11:38:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 144311 00:27:14.322 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 144311 ']' 00:27:14.322 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 144311 00:27:14.322 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:27:14.323 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:14.323 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144311 00:27:14.323 killing process with pid 144311 00:27:14.323 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:14.323 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:14.323 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144311' 00:27:14.323 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 144311 00:27:14.323 11:38:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 144311 00:27:14.323 [2024-07-13 11:38:49.046786] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:14.597 [2024-07-13 11:38:49.257757] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.szx6GeCqKs 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:15.536 00:27:15.536 real 0m8.327s 00:27:15.536 user 0m13.048s 00:27:15.536 sys 0m0.913s 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:15.536 11:38:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.536 ************************************ 00:27:15.536 END TEST raid_read_error_test 00:27:15.536 ************************************ 00:27:15.536 11:38:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:15.536 11:38:50 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:27:15.536 11:38:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:15.536 11:38:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.536 11:38:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:15.795 ************************************ 00:27:15.795 START TEST raid_write_error_test 00:27:15.795 ************************************ 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:27:15.795 11:38:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.slPosypUkk 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=144537 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 144537 /var/tmp/spdk-raid.sock 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 144537 ']' 00:27:15.795 11:38:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:15.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:15.795 11:38:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.795 [2024-07-13 11:38:50.365137] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:15.795 [2024-07-13 11:38:50.365345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144537 ] 00:27:15.795 [2024-07-13 11:38:50.525957] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.054 [2024-07-13 11:38:50.693602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.314 [2024-07-13 11:38:50.856411] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:16.572 11:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:16.572 11:38:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:27:16.572 11:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:16.572 11:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:16.831 BaseBdev1_malloc 00:27:17.090 11:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:17.090 true 00:27:17.090 11:38:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:17.349 [2024-07-13 11:38:52.033976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:17.349 [2024-07-13 11:38:52.034132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.349 [2024-07-13 11:38:52.034195] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:17.349 [2024-07-13 11:38:52.034227] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.349 [2024-07-13 11:38:52.037587] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.349 [2024-07-13 11:38:52.037651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:17.349 BaseBdev1 00:27:17.349 11:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:17.349 11:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:17.608 
BaseBdev2_malloc 00:27:17.608 11:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:17.866 true 00:27:17.867 11:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:18.126 [2024-07-13 11:38:52.687084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:18.126 [2024-07-13 11:38:52.687181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.126 [2024-07-13 11:38:52.687220] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:27:18.126 [2024-07-13 11:38:52.687241] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.126 [2024-07-13 11:38:52.689375] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.126 [2024-07-13 11:38:52.689420] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:18.126 BaseBdev2 00:27:18.126 11:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:18.126 11:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:18.384 BaseBdev3_malloc 00:27:18.384 11:38:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:18.384 true 00:27:18.384 11:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:18.642 [2024-07-13 11:38:53.276713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:18.642 [2024-07-13 11:38:53.276796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:18.642 [2024-07-13 11:38:53.276838] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:18.642 [2024-07-13 11:38:53.276868] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:18.642 [2024-07-13 11:38:53.279137] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:18.642 [2024-07-13 11:38:53.279187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:18.642 BaseBdev3 00:27:18.642 11:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:18.642 11:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:18.900 BaseBdev4_malloc 00:27:18.900 11:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:19.159 true 00:27:19.159 11:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:19.159 [2024-07-13 11:38:53.865543] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:19.159 [2024-07-13 11:38:53.865624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:19.159 [2024-07-13 11:38:53.865658] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:19.159 [2024-07-13 11:38:53.865683] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:19.159 [2024-07-13 11:38:53.867878] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:19.159 [2024-07-13 11:38:53.867929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:19.159 BaseBdev4 00:27:19.159 11:38:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:19.417 [2024-07-13 11:38:54.053627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:19.417 [2024-07-13 11:38:54.055548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:19.417 [2024-07-13 11:38:54.055641] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:19.417 [2024-07-13 11:38:54.055709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:19.417 [2024-07-13 11:38:54.055951] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:27:19.417 [2024-07-13 11:38:54.055965] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:19.417 [2024-07-13 11:38:54.056084] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:19.417 [2024-07-13 11:38:54.056429] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:27:19.417 [2024-07-13 11:38:54.056451] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:27:19.417 [2024-07-13 11:38:54.056584] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.417 11:38:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:19.676 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:19.676 "name": "raid_bdev1", 00:27:19.676 "uuid": "f76cee20-15f1-4031-b39c-217205c1ee56", 00:27:19.676 "strip_size_kb": 0, 00:27:19.676 "state": "online", 00:27:19.676 "raid_level": "raid1", 00:27:19.676 "superblock": true, 00:27:19.676 "num_base_bdevs": 4, 00:27:19.676 "num_base_bdevs_discovered": 4, 00:27:19.676 "num_base_bdevs_operational": 4, 00:27:19.676 "base_bdevs_list": [ 00:27:19.676 { 00:27:19.676 "name": "BaseBdev1", 00:27:19.676 "uuid": "c2c901b1-8af1-5784-a2fb-f038a88bfa62", 00:27:19.676 "is_configured": true, 00:27:19.676 "data_offset": 2048, 00:27:19.676 "data_size": 63488 00:27:19.676 }, 00:27:19.676 { 00:27:19.676 "name": "BaseBdev2", 00:27:19.676 "uuid": "7990b4e3-4a14-5c13-b6c8-77ee75d1c36b", 00:27:19.676 "is_configured": true, 00:27:19.676 "data_offset": 2048, 00:27:19.676 "data_size": 63488 00:27:19.676 }, 00:27:19.676 { 00:27:19.676 "name": "BaseBdev3", 00:27:19.676 "uuid": "d0bd00f2-e278-5aeb-8b88-04c01f2580e3", 00:27:19.676 "is_configured": true, 00:27:19.676 "data_offset": 2048, 00:27:19.676 "data_size": 63488 00:27:19.676 }, 00:27:19.676 { 00:27:19.676 "name": "BaseBdev4", 00:27:19.676 "uuid": "bfa15f05-1124-5f75-b5a6-c57d07be20b9", 00:27:19.676 "is_configured": true, 00:27:19.676 "data_offset": 2048, 00:27:19.676 "data_size": 63488 00:27:19.676 } 00:27:19.676 ] 00:27:19.676 }' 00:27:19.676 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:19.677 11:38:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.243 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:20.243 11:38:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:20.502 [2024-07-13 11:38:55.042872] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:21.439 11:38:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:21.698 [2024-07-13 11:38:56.197831] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:27:21.698 [2024-07-13 11:38:56.197966] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:21.698 [2024-07-13 11:38:56.198271] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005e10 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:21.698 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:21.699 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:21.699 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:21.699 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:21.699 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:21.957 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:21.957 "name": "raid_bdev1", 00:27:21.957 "uuid": "f76cee20-15f1-4031-b39c-217205c1ee56", 00:27:21.957 "strip_size_kb": 0, 00:27:21.957 "state": "online", 00:27:21.957 "raid_level": "raid1", 00:27:21.957 "superblock": true, 00:27:21.957 "num_base_bdevs": 4, 00:27:21.958 "num_base_bdevs_discovered": 3, 00:27:21.958 "num_base_bdevs_operational": 3, 00:27:21.958 "base_bdevs_list": [ 00:27:21.958 { 00:27:21.958 "name": null, 00:27:21.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:21.958 "is_configured": false, 00:27:21.958 "data_offset": 2048, 00:27:21.958 "data_size": 63488 00:27:21.958 }, 00:27:21.958 { 00:27:21.958 "name": "BaseBdev2", 00:27:21.958 "uuid": "7990b4e3-4a14-5c13-b6c8-77ee75d1c36b", 00:27:21.958 "is_configured": true, 00:27:21.958 "data_offset": 2048, 00:27:21.958 "data_size": 63488 00:27:21.958 }, 00:27:21.958 { 00:27:21.958 "name": "BaseBdev3", 00:27:21.958 "uuid": "d0bd00f2-e278-5aeb-8b88-04c01f2580e3", 00:27:21.958 "is_configured": true, 00:27:21.958 "data_offset": 2048, 00:27:21.958 "data_size": 63488 00:27:21.958 }, 00:27:21.958 { 00:27:21.958 "name": "BaseBdev4", 00:27:21.958 "uuid": "bfa15f05-1124-5f75-b5a6-c57d07be20b9", 00:27:21.958 "is_configured": true, 00:27:21.958 "data_offset": 2048, 00:27:21.958 "data_size": 63488 00:27:21.958 } 00:27:21.958 ] 00:27:21.958 }' 00:27:21.958 11:38:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:21.958 11:38:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.525 11:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:22.783 [2024-07-13 11:38:57.323973] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:22.783 [2024-07-13 11:38:57.324038] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:22.783 [2024-07-13 11:38:57.326669] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:22.783 [2024-07-13 11:38:57.326737] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.783 [2024-07-13 11:38:57.326836] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:22.783 [2024-07-13 
11:38:57.326847] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:27:22.783 0 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 144537 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 144537 ']' 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 144537 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144537 00:27:22.783 killing process with pid 144537 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144537' 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 144537 00:27:22.783 11:38:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 144537 00:27:22.783 [2024-07-13 11:38:57.360486] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:23.041 [2024-07-13 11:38:57.582617] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.slPosypUkk 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:23.975 ************************************ 00:27:23.975 END TEST raid_write_error_test 00:27:23.975 ************************************ 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:23.975 00:27:23.975 real 0m8.266s 00:27:23.975 user 0m12.907s 00:27:23.975 sys 0m0.850s 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:23.975 11:38:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:23.975 11:38:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:23.975 11:38:58 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:27:23.975 11:38:58 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:27:23.975 11:38:58 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:27:23.975 11:38:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:23.975 11:38:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:23.975 11:38:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:23.975 
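The two error-injection runs above (raid_read_error_test and raid_write_error_test) drive the same RPC sequence; the lines below are a condensed sketch of the calls already visible in the xtrace output, not part of the original log. Here rpc.py and bdevperf.py abbreviate the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py and /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py paths used in the runs, and BaseBdev1 stands in for each of the four legs.

  # one base-bdev leg: 32 MiB malloc bdev (512-byte blocks) -> error bdev -> passthru claimed by the raid
  rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
  rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # ...repeated for BaseBdev2..BaseBdev4, then assembled into a 4-member raid1 with superblock (-s)
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
  # inject a failure on the first leg ("read" in the first run, "write" in the second) and run bdevperf I/O
  rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
  bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
  # read failures leave all 4 members online; the write failure drops BaseBdev1, leaving 3 of 4 discovered
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
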
************************************ 00:27:23.975 START TEST raid_rebuild_test 00:27:23.975 ************************************ 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=144766 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 144766 /var/tmp/spdk-raid.sock 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 144766 ']' 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-raid.sock...' 00:27:23.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:23.975 11:38:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:23.975 [2024-07-13 11:38:58.683140] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:23.975 [2024-07-13 11:38:58.683534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144766 ] 00:27:23.976 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:23.976 Zero copy mechanism will not be used. 00:27:24.234 [2024-07-13 11:38:58.853010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.493 [2024-07-13 11:38:59.011206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.493 [2024-07-13 11:38:59.175761] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:25.059 11:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:25.059 11:38:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:27:25.059 11:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:25.059 11:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:25.317 BaseBdev1_malloc 00:27:25.317 11:38:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:25.575 [2024-07-13 11:39:00.098333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:25.575 [2024-07-13 11:39:00.098457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:25.575 [2024-07-13 11:39:00.098496] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:25.575 [2024-07-13 11:39:00.098515] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:25.575 [2024-07-13 11:39:00.100474] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:25.575 [2024-07-13 11:39:00.100521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:25.575 BaseBdev1 00:27:25.575 11:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:25.575 11:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:25.833 BaseBdev2_malloc 00:27:25.833 11:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:26.091 [2024-07-13 11:39:00.605187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:26.091 [2024-07-13 11:39:00.605280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:26.091 [2024-07-13 
11:39:00.605316] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:27:26.091 [2024-07-13 11:39:00.605334] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:26.091 [2024-07-13 11:39:00.607535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:26.091 [2024-07-13 11:39:00.607588] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:26.091 BaseBdev2 00:27:26.091 11:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:26.349 spare_malloc 00:27:26.349 11:39:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:26.349 spare_delay 00:27:26.349 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:26.607 [2024-07-13 11:39:01.246322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:26.607 [2024-07-13 11:39:01.246449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:26.607 [2024-07-13 11:39:01.246486] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:26.607 [2024-07-13 11:39:01.246513] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:26.607 [2024-07-13 11:39:01.248507] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:26.607 [2024-07-13 11:39:01.248558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:26.607 spare 00:27:26.607 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:26.865 [2024-07-13 11:39:01.426400] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:26.865 [2024-07-13 11:39:01.428250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:26.865 [2024-07-13 11:39:01.428344] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:27:26.865 [2024-07-13 11:39:01.428357] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:26.865 [2024-07-13 11:39:01.428490] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:27:26.865 [2024-07-13 11:39:01.428880] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:27:26.865 [2024-07-13 11:39:01.428903] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:27:26.865 [2024-07-13 11:39:01.429050] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.865 11:39:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.865 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.123 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:27.123 "name": "raid_bdev1", 00:27:27.123 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:27.123 "strip_size_kb": 0, 00:27:27.123 "state": "online", 00:27:27.123 "raid_level": "raid1", 00:27:27.123 "superblock": false, 00:27:27.123 "num_base_bdevs": 2, 00:27:27.123 "num_base_bdevs_discovered": 2, 00:27:27.123 "num_base_bdevs_operational": 2, 00:27:27.123 "base_bdevs_list": [ 00:27:27.123 { 00:27:27.123 "name": "BaseBdev1", 00:27:27.123 "uuid": "cf0625a4-1ee5-511a-8834-feb778787843", 00:27:27.123 "is_configured": true, 00:27:27.123 "data_offset": 0, 00:27:27.123 "data_size": 65536 00:27:27.123 }, 00:27:27.123 { 00:27:27.123 "name": "BaseBdev2", 00:27:27.123 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:27.123 "is_configured": true, 00:27:27.123 "data_offset": 0, 00:27:27.123 "data_size": 65536 00:27:27.123 } 00:27:27.123 ] 00:27:27.123 }' 00:27:27.123 11:39:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:27.123 11:39:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.691 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:27.692 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:27.963 [2024-07-13 11:39:02.506842] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:27.963 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:27:27.963 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.963 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:28.244 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:28.244 [2024-07-13 11:39:02.950749] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:28.244 /dev/nbd0 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:28.517 1+0 records in 00:27:28.517 1+0 records out 00:27:28.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331898 s, 12.3 MB/s 00:27:28.517 11:39:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.517 11:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:28.517 11:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:28.518 11:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:28.518 11:39:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:28.518 11:39:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:28.518 11:39:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:28.518 11:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:27:28.518 11:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:27:28.518 11:39:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:27:33.782 65536+0 records in 00:27:33.782 
65536+0 records out 00:27:33.782 33554432 bytes (34 MB, 32 MiB) copied, 5.0986 s, 6.6 MB/s 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:33.782 [2024-07-13 11:39:08.374134] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:33.782 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:34.041 [2024-07-13 11:39:08.701844] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.041 
11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.041 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.300 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.300 "name": "raid_bdev1", 00:27:34.300 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:34.300 "strip_size_kb": 0, 00:27:34.300 "state": "online", 00:27:34.300 "raid_level": "raid1", 00:27:34.300 "superblock": false, 00:27:34.300 "num_base_bdevs": 2, 00:27:34.300 "num_base_bdevs_discovered": 1, 00:27:34.300 "num_base_bdevs_operational": 1, 00:27:34.300 "base_bdevs_list": [ 00:27:34.300 { 00:27:34.300 "name": null, 00:27:34.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:34.300 "is_configured": false, 00:27:34.300 "data_offset": 0, 00:27:34.300 "data_size": 65536 00:27:34.300 }, 00:27:34.300 { 00:27:34.300 "name": "BaseBdev2", 00:27:34.300 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:34.300 "is_configured": true, 00:27:34.300 "data_offset": 0, 00:27:34.300 "data_size": 65536 00:27:34.300 } 00:27:34.300 ] 00:27:34.300 }' 00:27:34.300 11:39:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.300 11:39:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.866 11:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:35.124 [2024-07-13 11:39:09.750073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:35.124 [2024-07-13 11:39:09.762263] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b910 00:27:35.124 [2024-07-13 11:39:09.764122] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:35.124 11:39:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:36.060 11:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:36.060 11:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:36.060 11:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:36.060 11:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:36.060 11:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:36.060 11:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.060 11:39:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:36.328 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:36.328 "name": "raid_bdev1", 00:27:36.328 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:36.328 "strip_size_kb": 0, 00:27:36.328 "state": "online", 00:27:36.328 "raid_level": "raid1", 00:27:36.328 "superblock": false, 00:27:36.328 "num_base_bdevs": 2, 00:27:36.328 "num_base_bdevs_discovered": 2, 00:27:36.328 "num_base_bdevs_operational": 2, 00:27:36.328 "process": { 00:27:36.328 "type": "rebuild", 00:27:36.328 "target": "spare", 00:27:36.328 "progress": { 00:27:36.328 "blocks": 24576, 
00:27:36.328 "percent": 37 00:27:36.328 } 00:27:36.328 }, 00:27:36.328 "base_bdevs_list": [ 00:27:36.328 { 00:27:36.328 "name": "spare", 00:27:36.328 "uuid": "4643cc94-8a4a-5b07-9d7b-1b3483e5efab", 00:27:36.328 "is_configured": true, 00:27:36.328 "data_offset": 0, 00:27:36.328 "data_size": 65536 00:27:36.328 }, 00:27:36.328 { 00:27:36.328 "name": "BaseBdev2", 00:27:36.328 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:36.328 "is_configured": true, 00:27:36.328 "data_offset": 0, 00:27:36.328 "data_size": 65536 00:27:36.328 } 00:27:36.328 ] 00:27:36.328 }' 00:27:36.328 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:36.586 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:36.586 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:36.586 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:36.586 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:36.845 [2024-07-13 11:39:11.358072] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:36.845 [2024-07-13 11:39:11.372932] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:36.845 [2024-07-13 11:39:11.373016] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:36.845 [2024-07-13 11:39:11.373034] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:36.845 [2024-07-13 11:39:11.373042] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:36.845 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.103 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:37.103 "name": "raid_bdev1", 00:27:37.103 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:37.103 "strip_size_kb": 0, 00:27:37.103 "state": "online", 00:27:37.103 "raid_level": "raid1", 00:27:37.103 "superblock": false, 00:27:37.103 
"num_base_bdevs": 2, 00:27:37.103 "num_base_bdevs_discovered": 1, 00:27:37.103 "num_base_bdevs_operational": 1, 00:27:37.103 "base_bdevs_list": [ 00:27:37.103 { 00:27:37.103 "name": null, 00:27:37.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.103 "is_configured": false, 00:27:37.103 "data_offset": 0, 00:27:37.103 "data_size": 65536 00:27:37.103 }, 00:27:37.103 { 00:27:37.103 "name": "BaseBdev2", 00:27:37.103 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:37.103 "is_configured": true, 00:27:37.103 "data_offset": 0, 00:27:37.103 "data_size": 65536 00:27:37.103 } 00:27:37.103 ] 00:27:37.103 }' 00:27:37.103 11:39:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:37.103 11:39:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.670 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:37.670 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:37.670 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:37.670 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:37.670 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:37.670 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:37.670 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:37.929 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:37.929 "name": "raid_bdev1", 00:27:37.929 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:37.929 "strip_size_kb": 0, 00:27:37.929 "state": "online", 00:27:37.929 "raid_level": "raid1", 00:27:37.929 "superblock": false, 00:27:37.929 "num_base_bdevs": 2, 00:27:37.929 "num_base_bdevs_discovered": 1, 00:27:37.929 "num_base_bdevs_operational": 1, 00:27:37.929 "base_bdevs_list": [ 00:27:37.929 { 00:27:37.929 "name": null, 00:27:37.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.929 "is_configured": false, 00:27:37.929 "data_offset": 0, 00:27:37.929 "data_size": 65536 00:27:37.929 }, 00:27:37.929 { 00:27:37.929 "name": "BaseBdev2", 00:27:37.929 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:37.929 "is_configured": true, 00:27:37.929 "data_offset": 0, 00:27:37.929 "data_size": 65536 00:27:37.929 } 00:27:37.929 ] 00:27:37.929 }' 00:27:37.929 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:37.929 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:37.929 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:37.929 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:37.929 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:38.188 [2024-07-13 11:39:12.792707] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:38.188 [2024-07-13 11:39:12.804933] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0bab0 00:27:38.188 [2024-07-13 11:39:12.806989] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:38.188 11:39:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:39.121 11:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:39.121 11:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:39.121 11:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:39.121 11:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:39.121 11:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:39.121 11:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.121 11:39:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.379 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:39.379 "name": "raid_bdev1", 00:27:39.379 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:39.379 "strip_size_kb": 0, 00:27:39.379 "state": "online", 00:27:39.379 "raid_level": "raid1", 00:27:39.379 "superblock": false, 00:27:39.379 "num_base_bdevs": 2, 00:27:39.379 "num_base_bdevs_discovered": 2, 00:27:39.379 "num_base_bdevs_operational": 2, 00:27:39.379 "process": { 00:27:39.379 "type": "rebuild", 00:27:39.379 "target": "spare", 00:27:39.379 "progress": { 00:27:39.379 "blocks": 24576, 00:27:39.379 "percent": 37 00:27:39.379 } 00:27:39.379 }, 00:27:39.379 "base_bdevs_list": [ 00:27:39.379 { 00:27:39.379 "name": "spare", 00:27:39.379 "uuid": "4643cc94-8a4a-5b07-9d7b-1b3483e5efab", 00:27:39.379 "is_configured": true, 00:27:39.379 "data_offset": 0, 00:27:39.379 "data_size": 65536 00:27:39.379 }, 00:27:39.379 { 00:27:39.379 "name": "BaseBdev2", 00:27:39.379 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:39.379 "is_configured": true, 00:27:39.379 "data_offset": 0, 00:27:39.379 "data_size": 65536 00:27:39.379 } 00:27:39.379 ] 00:27:39.379 }' 00:27:39.379 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:39.379 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:39.379 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=805 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:39.636 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:39.894 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:39.894 "name": "raid_bdev1", 00:27:39.894 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:39.894 "strip_size_kb": 0, 00:27:39.894 "state": "online", 00:27:39.894 "raid_level": "raid1", 00:27:39.894 "superblock": false, 00:27:39.894 "num_base_bdevs": 2, 00:27:39.894 "num_base_bdevs_discovered": 2, 00:27:39.894 "num_base_bdevs_operational": 2, 00:27:39.894 "process": { 00:27:39.894 "type": "rebuild", 00:27:39.894 "target": "spare", 00:27:39.894 "progress": { 00:27:39.894 "blocks": 30720, 00:27:39.894 "percent": 46 00:27:39.894 } 00:27:39.894 }, 00:27:39.894 "base_bdevs_list": [ 00:27:39.894 { 00:27:39.894 "name": "spare", 00:27:39.894 "uuid": "4643cc94-8a4a-5b07-9d7b-1b3483e5efab", 00:27:39.894 "is_configured": true, 00:27:39.894 "data_offset": 0, 00:27:39.894 "data_size": 65536 00:27:39.894 }, 00:27:39.894 { 00:27:39.894 "name": "BaseBdev2", 00:27:39.894 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:39.894 "is_configured": true, 00:27:39.894 "data_offset": 0, 00:27:39.894 "data_size": 65536 00:27:39.894 } 00:27:39.894 ] 00:27:39.894 }' 00:27:39.894 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:39.894 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:39.894 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:39.894 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:39.894 11:39:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:40.829 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:40.829 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:40.829 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:40.829 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:40.829 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:40.829 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:40.829 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.829 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:41.088 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:41.088 "name": "raid_bdev1", 00:27:41.088 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:41.088 "strip_size_kb": 0, 00:27:41.088 "state": "online", 00:27:41.088 "raid_level": "raid1", 00:27:41.088 "superblock": false, 00:27:41.088 
"num_base_bdevs": 2, 00:27:41.088 "num_base_bdevs_discovered": 2, 00:27:41.088 "num_base_bdevs_operational": 2, 00:27:41.088 "process": { 00:27:41.088 "type": "rebuild", 00:27:41.088 "target": "spare", 00:27:41.088 "progress": { 00:27:41.088 "blocks": 59392, 00:27:41.088 "percent": 90 00:27:41.088 } 00:27:41.088 }, 00:27:41.088 "base_bdevs_list": [ 00:27:41.088 { 00:27:41.088 "name": "spare", 00:27:41.088 "uuid": "4643cc94-8a4a-5b07-9d7b-1b3483e5efab", 00:27:41.088 "is_configured": true, 00:27:41.088 "data_offset": 0, 00:27:41.088 "data_size": 65536 00:27:41.088 }, 00:27:41.088 { 00:27:41.088 "name": "BaseBdev2", 00:27:41.088 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:41.088 "is_configured": true, 00:27:41.088 "data_offset": 0, 00:27:41.088 "data_size": 65536 00:27:41.088 } 00:27:41.088 ] 00:27:41.088 }' 00:27:41.088 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:41.088 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:41.088 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:41.347 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:41.347 11:39:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:41.347 [2024-07-13 11:39:16.024031] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:41.347 [2024-07-13 11:39:16.024102] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:41.347 [2024-07-13 11:39:16.024174] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:42.283 11:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:42.283 11:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.283 11:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.283 11:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:42.283 11:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:42.283 11:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.283 11:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.283 11:39:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.541 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.541 "name": "raid_bdev1", 00:27:42.541 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:42.541 "strip_size_kb": 0, 00:27:42.541 "state": "online", 00:27:42.541 "raid_level": "raid1", 00:27:42.541 "superblock": false, 00:27:42.541 "num_base_bdevs": 2, 00:27:42.541 "num_base_bdevs_discovered": 2, 00:27:42.541 "num_base_bdevs_operational": 2, 00:27:42.541 "base_bdevs_list": [ 00:27:42.541 { 00:27:42.542 "name": "spare", 00:27:42.542 "uuid": "4643cc94-8a4a-5b07-9d7b-1b3483e5efab", 00:27:42.542 "is_configured": true, 00:27:42.542 "data_offset": 0, 00:27:42.542 "data_size": 65536 00:27:42.542 }, 00:27:42.542 { 00:27:42.542 "name": "BaseBdev2", 00:27:42.542 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:42.542 "is_configured": true, 
00:27:42.542 "data_offset": 0, 00:27:42.542 "data_size": 65536 00:27:42.542 } 00:27:42.542 ] 00:27:42.542 }' 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.542 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.801 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.801 "name": "raid_bdev1", 00:27:42.801 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:42.801 "strip_size_kb": 0, 00:27:42.801 "state": "online", 00:27:42.801 "raid_level": "raid1", 00:27:42.801 "superblock": false, 00:27:42.801 "num_base_bdevs": 2, 00:27:42.801 "num_base_bdevs_discovered": 2, 00:27:42.801 "num_base_bdevs_operational": 2, 00:27:42.801 "base_bdevs_list": [ 00:27:42.801 { 00:27:42.801 "name": "spare", 00:27:42.801 "uuid": "4643cc94-8a4a-5b07-9d7b-1b3483e5efab", 00:27:42.801 "is_configured": true, 00:27:42.801 "data_offset": 0, 00:27:42.801 "data_size": 65536 00:27:42.801 }, 00:27:42.801 { 00:27:42.801 "name": "BaseBdev2", 00:27:42.801 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:42.801 "is_configured": true, 00:27:42.801 "data_offset": 0, 00:27:42.801 "data_size": 65536 00:27:42.801 } 00:27:42.801 ] 00:27:42.801 }' 00:27:42.801 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.801 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:42.801 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:43.061 
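The wait loop traced above (bdev_bdev_raid.sh lines @705-@710 in the xtrace) keeps polling bdev_raid_get_bdevs until the rebuild's "process" block disappears. A minimal sketch of that poll's observable behaviour, using the same jq filters and the timeout value shown in this run, not the verbatim helper:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
timeout=805                                   # value the test computed in this run

while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    process_type=$(jq -r '.process.type // "none"' <<< "$info")
    process_target=$(jq -r '.process.target // "none"' <<< "$info")
    # While the rebuild runs these read "rebuild"/"spare"; both collapse to "none" once the
    # raid module logs "Finished rebuild on raid bdev raid_bdev1".
    [[ $process_type == none && $process_target == none ]] && break
    sleep 1
done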
11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:43.061 "name": "raid_bdev1", 00:27:43.061 "uuid": "75d3ad89-10ee-4d54-a6c1-98c65c516fdc", 00:27:43.061 "strip_size_kb": 0, 00:27:43.061 "state": "online", 00:27:43.061 "raid_level": "raid1", 00:27:43.061 "superblock": false, 00:27:43.061 "num_base_bdevs": 2, 00:27:43.061 "num_base_bdevs_discovered": 2, 00:27:43.061 "num_base_bdevs_operational": 2, 00:27:43.061 "base_bdevs_list": [ 00:27:43.061 { 00:27:43.061 "name": "spare", 00:27:43.061 "uuid": "4643cc94-8a4a-5b07-9d7b-1b3483e5efab", 00:27:43.061 "is_configured": true, 00:27:43.061 "data_offset": 0, 00:27:43.061 "data_size": 65536 00:27:43.061 }, 00:27:43.061 { 00:27:43.061 "name": "BaseBdev2", 00:27:43.061 "uuid": "0919576d-ae0f-5da7-8e97-54bac1fad82d", 00:27:43.061 "is_configured": true, 00:27:43.061 "data_offset": 0, 00:27:43.061 "data_size": 65536 00:27:43.061 } 00:27:43.061 ] 00:27:43.061 }' 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:43.061 11:39:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:43.997 11:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:43.997 [2024-07-13 11:39:18.712265] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:43.997 [2024-07-13 11:39:18.712299] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:43.997 [2024-07-13 11:39:18.712388] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:43.997 [2024-07-13 11:39:18.712462] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:43.997 [2024-07-13 11:39:18.712475] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:27:43.997 11:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:27:43.997 11:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:44.256 
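The tail end of this test, traced below, deletes the array and then compares the surviving member against the rebuilt spare directly over NBD. Roughly (bdev names and the cmp offset are taken from this run; -i 0 works here because this variant runs without a superblock, so member data starts at byte 0):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Tear the array down and confirm no raid bdevs are left behind.
$rpc bdev_raid_delete raid_bdev1
[[ $($rpc bdev_raid_get_bdevs all | jq length) == 0 ]]

# Export the original member and the rebuilt spare, then compare them byte for byte.
$rpc nbd_start_disk BaseBdev1 /dev/nbd0
$rpc nbd_start_disk spare /dev/nbd1
cmp -i 0 /dev/nbd0 /dev/nbd1              # identical contents means the rebuild copied everything
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1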
11:39:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:44.256 11:39:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:44.514 /dev/nbd0 00:27:44.514 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:44.772 1+0 records in 00:27:44.772 1+0 records out 00:27:44.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239637 s, 17.1 MB/s 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:44.772 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:45.030 /dev/nbd1 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local 
i 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:45.030 1+0 records in 00:27:45.030 1+0 records out 00:27:45.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030779 s, 13.3 MB/s 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:45.030 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:45.288 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:45.288 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:45.288 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:45.288 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:45.288 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.288 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:45.288 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:45.288 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:45.288 11:39:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:45.288 11:39:19 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:45.546 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:45.546 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:45.546 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:45.546 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:45.546 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.546 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:45.546 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:45.547 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:45.547 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:45.547 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 144766 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 144766 ']' 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 144766 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144766 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:45.804 killing process with pid 144766 00:27:45.804 Received shutdown signal, test time was about 60.000000 seconds 00:27:45.804 00:27:45.804 Latency(us) 00:27:45.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.804 =================================================================================================================== 00:27:45.804 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144766' 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 144766 00:27:45.804 11:39:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 144766 00:27:45.804 [2024-07-13 11:39:20.319290] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:45.804 [2024-07-13 11:39:20.508198] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:46.739 ************************************ 00:27:46.739 END TEST raid_rebuild_test 00:27:46.739 ************************************ 00:27:46.739 11:39:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:27:46.739 00:27:46.739 real 0m22.826s 00:27:46.739 user 0m31.693s 
00:27:46.739 sys 0m3.926s 00:27:46.739 11:39:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:46.739 11:39:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.739 11:39:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:46.739 11:39:21 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:27:46.739 11:39:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:46.739 11:39:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:46.739 11:39:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:46.997 ************************************ 00:27:46.997 START TEST raid_rebuild_test_sb 00:27:46.997 ************************************ 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:46.997 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # raid_pid=145352 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 145352 /var/tmp/spdk-raid.sock 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 145352 ']' 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:46.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.998 11:39:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.998 [2024-07-13 11:39:21.579845] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:46.998 [2024-07-13 11:39:21.580283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145352 ] 00:27:46.998 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:46.998 Zero copy mechanism will not be used. 
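For the superblock variant started here, the bdevperf launch and the follow-up geometry queries reduce to something like the sketch below. The command line is copied verbatim from this run; the rpc_get_methods poll is only an assumed stand-in for the waitforlisten helper the trace actually uses, and the geometry numbers (data_offset 2048, 63488 blocks) are the ones reported further down this log:

sock=/var/tmp/spdk-raid.sock
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $sock"

# Launch bdevperf with the flags from this run (waitforlisten 145352 in the trace).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Assumed stand-in for waitforlisten: block until the RPC socket answers.
until $rpc rpc_get_methods &>/dev/null; do sleep 0.1; done

# With -s (superblock) each member reserves space for the on-disk superblock, so the
# per-member data_offset becomes 2048 and the assembled raid1 bdev shrinks to 63488 blocks;
# later dd writes are sized from the raid bdev, not the base bdevs.
raid_bdev_size=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')
data_offset=$($rpc bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset')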
00:27:46.998 [2024-07-13 11:39:21.745267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.256 [2024-07-13 11:39:21.907928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.514 [2024-07-13 11:39:22.075443] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:47.772 11:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:47.772 11:39:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:27:47.772 11:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:47.772 11:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:48.029 BaseBdev1_malloc 00:27:48.285 11:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:48.285 [2024-07-13 11:39:22.967037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:48.285 [2024-07-13 11:39:22.967319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.285 [2024-07-13 11:39:22.967464] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:48.285 [2024-07-13 11:39:22.967584] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.285 [2024-07-13 11:39:22.969538] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.285 [2024-07-13 11:39:22.969719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:48.285 BaseBdev1 00:27:48.285 11:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:48.285 11:39:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:48.542 BaseBdev2_malloc 00:27:48.542 11:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:48.798 [2024-07-13 11:39:23.435925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:48.798 [2024-07-13 11:39:23.436188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.798 [2024-07-13 11:39:23.436262] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:27:48.799 [2024-07-13 11:39:23.436513] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.799 [2024-07-13 11:39:23.438495] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.799 [2024-07-13 11:39:23.438661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:48.799 BaseBdev2 00:27:48.799 11:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:49.059 spare_malloc 00:27:49.059 11:39:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:49.316 spare_delay 00:27:49.316 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:49.573 [2024-07-13 11:39:24.180613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:49.573 [2024-07-13 11:39:24.180835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:49.573 [2024-07-13 11:39:24.180971] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:49.573 [2024-07-13 11:39:24.181084] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:49.573 [2024-07-13 11:39:24.183018] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:49.573 [2024-07-13 11:39:24.183220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:49.573 spare 00:27:49.573 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:49.831 [2024-07-13 11:39:24.416727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:49.831 [2024-07-13 11:39:24.418392] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:49.831 [2024-07-13 11:39:24.418708] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:27:49.831 [2024-07-13 11:39:24.418865] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:49.831 [2024-07-13 11:39:24.419014] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:27:49.831 [2024-07-13 11:39:24.419471] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:27:49.831 [2024-07-13 11:39:24.419594] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:27:49.831 [2024-07-13 11:39:24.419812] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:27:49.831 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.089 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:50.089 "name": "raid_bdev1", 00:27:50.089 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:27:50.089 "strip_size_kb": 0, 00:27:50.089 "state": "online", 00:27:50.089 "raid_level": "raid1", 00:27:50.089 "superblock": true, 00:27:50.089 "num_base_bdevs": 2, 00:27:50.089 "num_base_bdevs_discovered": 2, 00:27:50.089 "num_base_bdevs_operational": 2, 00:27:50.089 "base_bdevs_list": [ 00:27:50.089 { 00:27:50.089 "name": "BaseBdev1", 00:27:50.089 "uuid": "7f012912-e012-5d1a-b23c-3e4389150696", 00:27:50.089 "is_configured": true, 00:27:50.089 "data_offset": 2048, 00:27:50.089 "data_size": 63488 00:27:50.089 }, 00:27:50.089 { 00:27:50.089 "name": "BaseBdev2", 00:27:50.089 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:27:50.089 "is_configured": true, 00:27:50.089 "data_offset": 2048, 00:27:50.089 "data_size": 63488 00:27:50.089 } 00:27:50.089 ] 00:27:50.089 }' 00:27:50.089 11:39:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:50.089 11:39:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:50.655 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:50.655 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:50.914 [2024-07-13 11:39:25.489094] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:50.914 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:27:50.914 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:50.914 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:51.172 11:39:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:51.431 [2024-07-13 11:39:25.969028] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:51.431 /dev/nbd0 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:51.431 1+0 records in 00:27:51.431 1+0 records out 00:27:51.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401685 s, 10.2 MB/s 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:27:51.431 11:39:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:27:56.699 63488+0 records in 00:27:56.699 63488+0 records out 00:27:56.699 32505856 bytes (33 MB, 31 MiB) copied, 4.76982 s, 6.8 MB/s 00:27:56.699 11:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:56.699 11:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:56.699 11:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:27:56.699 11:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:56.699 11:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:56.699 11:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:27:56.699 11:39:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:27:56.699 [2024-07-13 11:39:31.067072] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:56.699 [2024-07-13 11:39:31.358680] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.699 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.958 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.958 "name": "raid_bdev1", 00:27:56.958 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:27:56.958 "strip_size_kb": 0, 00:27:56.958 "state": "online", 00:27:56.958 "raid_level": "raid1", 00:27:56.958 "superblock": true, 00:27:56.958 
"num_base_bdevs": 2, 00:27:56.958 "num_base_bdevs_discovered": 1, 00:27:56.958 "num_base_bdevs_operational": 1, 00:27:56.958 "base_bdevs_list": [ 00:27:56.958 { 00:27:56.958 "name": null, 00:27:56.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:56.958 "is_configured": false, 00:27:56.958 "data_offset": 2048, 00:27:56.958 "data_size": 63488 00:27:56.958 }, 00:27:56.958 { 00:27:56.958 "name": "BaseBdev2", 00:27:56.958 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:27:56.958 "is_configured": true, 00:27:56.958 "data_offset": 2048, 00:27:56.958 "data_size": 63488 00:27:56.958 } 00:27:56.958 ] 00:27:56.958 }' 00:27:56.958 11:39:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.958 11:39:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:57.527 11:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:57.784 [2024-07-13 11:39:32.378873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:57.784 [2024-07-13 11:39:32.391703] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca50a0 00:27:57.784 [2024-07-13 11:39:32.393728] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:57.784 11:39:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:58.716 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:58.717 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:58.717 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:58.717 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:58.717 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:58.717 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.717 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.975 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:58.975 "name": "raid_bdev1", 00:27:58.975 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:27:58.975 "strip_size_kb": 0, 00:27:58.975 "state": "online", 00:27:58.975 "raid_level": "raid1", 00:27:58.975 "superblock": true, 00:27:58.975 "num_base_bdevs": 2, 00:27:58.975 "num_base_bdevs_discovered": 2, 00:27:58.975 "num_base_bdevs_operational": 2, 00:27:58.975 "process": { 00:27:58.975 "type": "rebuild", 00:27:58.975 "target": "spare", 00:27:58.975 "progress": { 00:27:58.975 "blocks": 24576, 00:27:58.975 "percent": 38 00:27:58.975 } 00:27:58.975 }, 00:27:58.975 "base_bdevs_list": [ 00:27:58.975 { 00:27:58.975 "name": "spare", 00:27:58.975 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:27:58.975 "is_configured": true, 00:27:58.975 "data_offset": 2048, 00:27:58.975 "data_size": 63488 00:27:58.975 }, 00:27:58.975 { 00:27:58.975 "name": "BaseBdev2", 00:27:58.975 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:27:58.975 "is_configured": true, 00:27:58.975 "data_offset": 2048, 00:27:58.975 "data_size": 63488 00:27:58.975 } 00:27:58.975 ] 00:27:58.975 }' 00:27:58.975 
11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:58.975 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:58.975 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:59.233 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:59.233 11:39:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:59.491 [2024-07-13 11:39:34.004074] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:59.491 [2024-07-13 11:39:34.104921] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:59.491 [2024-07-13 11:39:34.105121] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:59.491 [2024-07-13 11:39:34.105242] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:59.491 [2024-07-13 11:39:34.105281] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:59.491 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:59.492 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.492 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.750 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:59.750 "name": "raid_bdev1", 00:27:59.750 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:27:59.750 "strip_size_kb": 0, 00:27:59.750 "state": "online", 00:27:59.750 "raid_level": "raid1", 00:27:59.750 "superblock": true, 00:27:59.750 "num_base_bdevs": 2, 00:27:59.750 "num_base_bdevs_discovered": 1, 00:27:59.750 "num_base_bdevs_operational": 1, 00:27:59.750 "base_bdevs_list": [ 00:27:59.750 { 00:27:59.750 "name": null, 00:27:59.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:59.750 "is_configured": false, 00:27:59.750 "data_offset": 2048, 00:27:59.750 "data_size": 63488 00:27:59.750 }, 00:27:59.750 { 00:27:59.750 "name": "BaseBdev2", 00:27:59.750 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:27:59.750 "is_configured": true, 00:27:59.750 "data_offset": 
2048, 00:27:59.750 "data_size": 63488 00:27:59.750 } 00:27:59.750 ] 00:27:59.750 }' 00:27:59.750 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:59.750 11:39:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.317 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:00.317 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:00.317 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:00.317 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:00.317 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:00.317 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.317 11:39:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.575 11:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:00.575 "name": "raid_bdev1", 00:28:00.575 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:00.575 "strip_size_kb": 0, 00:28:00.575 "state": "online", 00:28:00.575 "raid_level": "raid1", 00:28:00.575 "superblock": true, 00:28:00.575 "num_base_bdevs": 2, 00:28:00.575 "num_base_bdevs_discovered": 1, 00:28:00.575 "num_base_bdevs_operational": 1, 00:28:00.575 "base_bdevs_list": [ 00:28:00.575 { 00:28:00.575 "name": null, 00:28:00.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.575 "is_configured": false, 00:28:00.575 "data_offset": 2048, 00:28:00.575 "data_size": 63488 00:28:00.575 }, 00:28:00.575 { 00:28:00.575 "name": "BaseBdev2", 00:28:00.575 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:00.575 "is_configured": true, 00:28:00.575 "data_offset": 2048, 00:28:00.575 "data_size": 63488 00:28:00.575 } 00:28:00.575 ] 00:28:00.575 }' 00:28:00.575 11:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:00.575 11:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:00.575 11:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:00.833 11:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:00.833 11:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:00.834 [2024-07-13 11:39:35.539126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:00.834 [2024-07-13 11:39:35.551079] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5240 00:28:00.834 [2024-07-13 11:39:35.552961] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:00.834 11:39:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:02.208 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:02.208 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- 
# local process_type=rebuild 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.209 "name": "raid_bdev1", 00:28:02.209 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:02.209 "strip_size_kb": 0, 00:28:02.209 "state": "online", 00:28:02.209 "raid_level": "raid1", 00:28:02.209 "superblock": true, 00:28:02.209 "num_base_bdevs": 2, 00:28:02.209 "num_base_bdevs_discovered": 2, 00:28:02.209 "num_base_bdevs_operational": 2, 00:28:02.209 "process": { 00:28:02.209 "type": "rebuild", 00:28:02.209 "target": "spare", 00:28:02.209 "progress": { 00:28:02.209 "blocks": 24576, 00:28:02.209 "percent": 38 00:28:02.209 } 00:28:02.209 }, 00:28:02.209 "base_bdevs_list": [ 00:28:02.209 { 00:28:02.209 "name": "spare", 00:28:02.209 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:02.209 "is_configured": true, 00:28:02.209 "data_offset": 2048, 00:28:02.209 "data_size": 63488 00:28:02.209 }, 00:28:02.209 { 00:28:02.209 "name": "BaseBdev2", 00:28:02.209 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:02.209 "is_configured": true, 00:28:02.209 "data_offset": 2048, 00:28:02.209 "data_size": 63488 00:28:02.209 } 00:28:02.209 ] 00:28:02.209 }' 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:02.209 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=827 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # 
local raid_bdev_info 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.209 11:39:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.467 11:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.467 "name": "raid_bdev1", 00:28:02.467 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:02.467 "strip_size_kb": 0, 00:28:02.467 "state": "online", 00:28:02.467 "raid_level": "raid1", 00:28:02.467 "superblock": true, 00:28:02.467 "num_base_bdevs": 2, 00:28:02.467 "num_base_bdevs_discovered": 2, 00:28:02.467 "num_base_bdevs_operational": 2, 00:28:02.467 "process": { 00:28:02.467 "type": "rebuild", 00:28:02.467 "target": "spare", 00:28:02.467 "progress": { 00:28:02.467 "blocks": 32768, 00:28:02.467 "percent": 51 00:28:02.467 } 00:28:02.467 }, 00:28:02.467 "base_bdevs_list": [ 00:28:02.467 { 00:28:02.467 "name": "spare", 00:28:02.467 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:02.467 "is_configured": true, 00:28:02.467 "data_offset": 2048, 00:28:02.467 "data_size": 63488 00:28:02.467 }, 00:28:02.467 { 00:28:02.467 "name": "BaseBdev2", 00:28:02.467 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:02.467 "is_configured": true, 00:28:02.467 "data_offset": 2048, 00:28:02.467 "data_size": 63488 00:28:02.467 } 00:28:02.467 ] 00:28:02.467 }' 00:28:02.467 11:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.725 11:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:02.725 11:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.725 11:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:02.725 11:39:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:03.733 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:03.733 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:03.733 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:03.733 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:03.734 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:03.734 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:03.734 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.734 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.005 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:04.005 "name": "raid_bdev1", 00:28:04.005 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:04.005 "strip_size_kb": 0, 00:28:04.005 "state": "online", 00:28:04.005 "raid_level": "raid1", 00:28:04.005 "superblock": true, 00:28:04.005 "num_base_bdevs": 2, 00:28:04.005 "num_base_bdevs_discovered": 2, 00:28:04.005 "num_base_bdevs_operational": 2, 00:28:04.005 "process": { 00:28:04.005 "type": "rebuild", 00:28:04.005 
"target": "spare", 00:28:04.005 "progress": { 00:28:04.005 "blocks": 59392, 00:28:04.005 "percent": 93 00:28:04.005 } 00:28:04.005 }, 00:28:04.005 "base_bdevs_list": [ 00:28:04.005 { 00:28:04.005 "name": "spare", 00:28:04.005 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:04.005 "is_configured": true, 00:28:04.005 "data_offset": 2048, 00:28:04.005 "data_size": 63488 00:28:04.005 }, 00:28:04.005 { 00:28:04.005 "name": "BaseBdev2", 00:28:04.005 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:04.005 "is_configured": true, 00:28:04.005 "data_offset": 2048, 00:28:04.005 "data_size": 63488 00:28:04.005 } 00:28:04.005 ] 00:28:04.005 }' 00:28:04.005 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:04.005 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:04.005 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:04.005 [2024-07-13 11:39:38.673085] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:04.005 [2024-07-13 11:39:38.673285] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:04.005 [2024-07-13 11:39:38.673509] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.005 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:04.005 11:39:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:04.940 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:04.940 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:04.940 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:04.940 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:04.940 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:04.940 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:05.198 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.198 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.198 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:05.198 "name": "raid_bdev1", 00:28:05.198 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:05.198 "strip_size_kb": 0, 00:28:05.198 "state": "online", 00:28:05.198 "raid_level": "raid1", 00:28:05.198 "superblock": true, 00:28:05.198 "num_base_bdevs": 2, 00:28:05.198 "num_base_bdevs_discovered": 2, 00:28:05.198 "num_base_bdevs_operational": 2, 00:28:05.198 "base_bdevs_list": [ 00:28:05.199 { 00:28:05.199 "name": "spare", 00:28:05.199 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:05.199 "is_configured": true, 00:28:05.199 "data_offset": 2048, 00:28:05.199 "data_size": 63488 00:28:05.199 }, 00:28:05.199 { 00:28:05.199 "name": "BaseBdev2", 00:28:05.199 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:05.199 "is_configured": true, 00:28:05.199 "data_offset": 2048, 00:28:05.199 "data_size": 63488 00:28:05.199 } 00:28:05.199 ] 00:28:05.199 }' 00:28:05.199 11:39:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:05.458 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:05.458 11:39:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:05.458 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:05.458 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:28:05.458 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:05.458 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:05.458 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:05.458 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:05.458 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:05.458 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.458 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:05.717 "name": "raid_bdev1", 00:28:05.717 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:05.717 "strip_size_kb": 0, 00:28:05.717 "state": "online", 00:28:05.717 "raid_level": "raid1", 00:28:05.717 "superblock": true, 00:28:05.717 "num_base_bdevs": 2, 00:28:05.717 "num_base_bdevs_discovered": 2, 00:28:05.717 "num_base_bdevs_operational": 2, 00:28:05.717 "base_bdevs_list": [ 00:28:05.717 { 00:28:05.717 "name": "spare", 00:28:05.717 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:05.717 "is_configured": true, 00:28:05.717 "data_offset": 2048, 00:28:05.717 "data_size": 63488 00:28:05.717 }, 00:28:05.717 { 00:28:05.717 "name": "BaseBdev2", 00:28:05.717 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:05.717 "is_configured": true, 00:28:05.717 "data_offset": 2048, 00:28:05.717 "data_size": 63488 00:28:05.717 } 00:28:05.717 ] 00:28:05.717 }' 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:05.717 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:05.718 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:05.718 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.718 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.976 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:05.976 "name": "raid_bdev1", 00:28:05.976 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:05.976 "strip_size_kb": 0, 00:28:05.976 "state": "online", 00:28:05.976 "raid_level": "raid1", 00:28:05.976 "superblock": true, 00:28:05.976 "num_base_bdevs": 2, 00:28:05.976 "num_base_bdevs_discovered": 2, 00:28:05.976 "num_base_bdevs_operational": 2, 00:28:05.976 "base_bdevs_list": [ 00:28:05.976 { 00:28:05.976 "name": "spare", 00:28:05.976 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:05.976 "is_configured": true, 00:28:05.976 "data_offset": 2048, 00:28:05.976 "data_size": 63488 00:28:05.976 }, 00:28:05.976 { 00:28:05.976 "name": "BaseBdev2", 00:28:05.976 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:05.976 "is_configured": true, 00:28:05.976 "data_offset": 2048, 00:28:05.976 "data_size": 63488 00:28:05.976 } 00:28:05.976 ] 00:28:05.976 }' 00:28:05.976 11:39:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:05.976 11:39:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:06.912 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:06.912 [2024-07-13 11:39:41.538477] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:06.912 [2024-07-13 11:39:41.538624] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:06.912 [2024-07-13 11:39:41.538805] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:06.912 [2024-07-13 11:39:41.538995] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:06.912 [2024-07-13 11:39:41.539169] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:28:06.912 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.912 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:07.170 11:39:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:07.428 /dev/nbd0 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:07.428 1+0 records in 00:28:07.428 1+0 records out 00:28:07.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724362 s, 5.7 MB/s 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:07.428 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:07.686 /dev/nbd1 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd1 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:07.686 1+0 records in 00:28:07.686 1+0 records out 00:28:07.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481541 s, 8.5 MB/s 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:07.686 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:07.944 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:07.944 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:07.944 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:07.944 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:07.944 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:07.944 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:07.944 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:08.202 11:39:42 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:08.202 11:39:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:08.459 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:08.715 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:08.973 [2024-07-13 11:39:43.644017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:08.973 [2024-07-13 11:39:43.644087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.973 [2024-07-13 11:39:43.644147] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:08.973 [2024-07-13 11:39:43.644169] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.973 [2024-07-13 11:39:43.646437] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.973 [2024-07-13 11:39:43.646483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:08.973 [2024-07-13 11:39:43.646586] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:08.973 [2024-07-13 11:39:43.646668] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:08.973 [2024-07-13 11:39:43.646826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:08.973 spare 
00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.973 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.232 [2024-07-13 11:39:43.746932] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:28:09.232 [2024-07-13 11:39:43.746959] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:09.232 [2024-07-13 11:39:43.747102] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5be0 00:28:09.232 [2024-07-13 11:39:43.747475] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:28:09.232 [2024-07-13 11:39:43.747490] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:28:09.232 [2024-07-13 11:39:43.747608] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.232 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:09.232 "name": "raid_bdev1", 00:28:09.232 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:09.232 "strip_size_kb": 0, 00:28:09.232 "state": "online", 00:28:09.232 "raid_level": "raid1", 00:28:09.232 "superblock": true, 00:28:09.232 "num_base_bdevs": 2, 00:28:09.232 "num_base_bdevs_discovered": 2, 00:28:09.232 "num_base_bdevs_operational": 2, 00:28:09.232 "base_bdevs_list": [ 00:28:09.232 { 00:28:09.232 "name": "spare", 00:28:09.232 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:09.232 "is_configured": true, 00:28:09.232 "data_offset": 2048, 00:28:09.232 "data_size": 63488 00:28:09.232 }, 00:28:09.232 { 00:28:09.232 "name": "BaseBdev2", 00:28:09.232 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:09.232 "is_configured": true, 00:28:09.232 "data_offset": 2048, 00:28:09.232 "data_size": 63488 00:28:09.232 } 00:28:09.232 ] 00:28:09.232 }' 00:28:09.232 11:39:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:09.232 11:39:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:09.798 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:09.798 11:39:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:09.798 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:09.798 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:09.798 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:09.798 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.799 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.057 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:10.057 "name": "raid_bdev1", 00:28:10.057 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:10.057 "strip_size_kb": 0, 00:28:10.057 "state": "online", 00:28:10.057 "raid_level": "raid1", 00:28:10.057 "superblock": true, 00:28:10.057 "num_base_bdevs": 2, 00:28:10.057 "num_base_bdevs_discovered": 2, 00:28:10.057 "num_base_bdevs_operational": 2, 00:28:10.057 "base_bdevs_list": [ 00:28:10.057 { 00:28:10.057 "name": "spare", 00:28:10.057 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:10.057 "is_configured": true, 00:28:10.057 "data_offset": 2048, 00:28:10.057 "data_size": 63488 00:28:10.057 }, 00:28:10.057 { 00:28:10.057 "name": "BaseBdev2", 00:28:10.057 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:10.057 "is_configured": true, 00:28:10.057 "data_offset": 2048, 00:28:10.057 "data_size": 63488 00:28:10.057 } 00:28:10.057 ] 00:28:10.057 }' 00:28:10.057 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:10.057 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:10.057 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:10.057 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:10.057 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.057 11:39:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:10.315 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:28:10.315 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:10.573 [2024-07-13 11:39:45.252396] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.573 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.831 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:10.831 "name": "raid_bdev1", 00:28:10.831 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:10.831 "strip_size_kb": 0, 00:28:10.831 "state": "online", 00:28:10.831 "raid_level": "raid1", 00:28:10.831 "superblock": true, 00:28:10.831 "num_base_bdevs": 2, 00:28:10.831 "num_base_bdevs_discovered": 1, 00:28:10.831 "num_base_bdevs_operational": 1, 00:28:10.831 "base_bdevs_list": [ 00:28:10.831 { 00:28:10.831 "name": null, 00:28:10.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.831 "is_configured": false, 00:28:10.831 "data_offset": 2048, 00:28:10.831 "data_size": 63488 00:28:10.831 }, 00:28:10.831 { 00:28:10.831 "name": "BaseBdev2", 00:28:10.831 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:10.831 "is_configured": true, 00:28:10.831 "data_offset": 2048, 00:28:10.831 "data_size": 63488 00:28:10.831 } 00:28:10.831 ] 00:28:10.831 }' 00:28:10.831 11:39:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:10.831 11:39:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.397 11:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:11.655 [2024-07-13 11:39:46.320599] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:11.655 [2024-07-13 11:39:46.320731] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:11.655 [2024-07-13 11:39:46.320745] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:11.655 [2024-07-13 11:39:46.320789] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:11.655 [2024-07-13 11:39:46.332991] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5d80 00:28:11.655 [2024-07-13 11:39:46.334937] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:11.655 11:39:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:13.030 "name": "raid_bdev1", 00:28:13.030 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:13.030 "strip_size_kb": 0, 00:28:13.030 "state": "online", 00:28:13.030 "raid_level": "raid1", 00:28:13.030 "superblock": true, 00:28:13.030 "num_base_bdevs": 2, 00:28:13.030 "num_base_bdevs_discovered": 2, 00:28:13.030 "num_base_bdevs_operational": 2, 00:28:13.030 "process": { 00:28:13.030 "type": "rebuild", 00:28:13.030 "target": "spare", 00:28:13.030 "progress": { 00:28:13.030 "blocks": 24576, 00:28:13.030 "percent": 38 00:28:13.030 } 00:28:13.030 }, 00:28:13.030 "base_bdevs_list": [ 00:28:13.030 { 00:28:13.030 "name": "spare", 00:28:13.030 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:13.030 "is_configured": true, 00:28:13.030 "data_offset": 2048, 00:28:13.030 "data_size": 63488 00:28:13.030 }, 00:28:13.030 { 00:28:13.030 "name": "BaseBdev2", 00:28:13.030 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:13.030 "is_configured": true, 00:28:13.030 "data_offset": 2048, 00:28:13.030 "data_size": 63488 00:28:13.030 } 00:28:13.030 ] 00:28:13.030 }' 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:13.030 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:13.289 [2024-07-13 11:39:47.888976] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:13.289 [2024-07-13 11:39:47.944981] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:13.289 [2024-07-13 11:39:47.945059] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:13.289 
[2024-07-13 11:39:47.945078] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:13.289 [2024-07-13 11:39:47.945086] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.289 11:39:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:13.548 11:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:13.548 "name": "raid_bdev1", 00:28:13.548 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:13.548 "strip_size_kb": 0, 00:28:13.548 "state": "online", 00:28:13.548 "raid_level": "raid1", 00:28:13.548 "superblock": true, 00:28:13.548 "num_base_bdevs": 2, 00:28:13.548 "num_base_bdevs_discovered": 1, 00:28:13.548 "num_base_bdevs_operational": 1, 00:28:13.548 "base_bdevs_list": [ 00:28:13.548 { 00:28:13.548 "name": null, 00:28:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.548 "is_configured": false, 00:28:13.548 "data_offset": 2048, 00:28:13.548 "data_size": 63488 00:28:13.548 }, 00:28:13.548 { 00:28:13.548 "name": "BaseBdev2", 00:28:13.548 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:13.548 "is_configured": true, 00:28:13.548 "data_offset": 2048, 00:28:13.548 "data_size": 63488 00:28:13.548 } 00:28:13.548 ] 00:28:13.548 }' 00:28:13.548 11:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:13.548 11:39:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.115 11:39:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:14.373 [2024-07-13 11:39:49.071926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:14.373 [2024-07-13 11:39:49.071992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:14.373 [2024-07-13 11:39:49.072031] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:28:14.373 [2024-07-13 11:39:49.072060] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:14.374 [2024-07-13 11:39:49.072594] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:14.374 [2024-07-13 11:39:49.072638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:14.374 [2024-07-13 11:39:49.072737] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:14.374 [2024-07-13 11:39:49.072752] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:14.374 [2024-07-13 11:39:49.072760] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:14.374 [2024-07-13 11:39:49.072802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:14.374 [2024-07-13 11:39:49.082534] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc60c0 00:28:14.374 spare 00:28:14.374 [2024-07-13 11:39:49.084460] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:14.374 11:39:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:15.749 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:15.750 "name": "raid_bdev1", 00:28:15.750 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:15.750 "strip_size_kb": 0, 00:28:15.750 "state": "online", 00:28:15.750 "raid_level": "raid1", 00:28:15.750 "superblock": true, 00:28:15.750 "num_base_bdevs": 2, 00:28:15.750 "num_base_bdevs_discovered": 2, 00:28:15.750 "num_base_bdevs_operational": 2, 00:28:15.750 "process": { 00:28:15.750 "type": "rebuild", 00:28:15.750 "target": "spare", 00:28:15.750 "progress": { 00:28:15.750 "blocks": 24576, 00:28:15.750 "percent": 38 00:28:15.750 } 00:28:15.750 }, 00:28:15.750 "base_bdevs_list": [ 00:28:15.750 { 00:28:15.750 "name": "spare", 00:28:15.750 "uuid": "84a42e4f-51eb-5be9-a8c6-827f3717baa0", 00:28:15.750 "is_configured": true, 00:28:15.750 "data_offset": 2048, 00:28:15.750 "data_size": 63488 00:28:15.750 }, 00:28:15.750 { 00:28:15.750 "name": "BaseBdev2", 00:28:15.750 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:15.750 "is_configured": true, 00:28:15.750 "data_offset": 2048, 00:28:15.750 "data_size": 63488 00:28:15.750 } 00:28:15.750 ] 00:28:15.750 }' 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:15.750 
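For reference, the abort-and-restart cycle traced above comes down to deleting and re-creating the passthru bdev that backs the rebuild target. A minimal sketch using the same rpc.py path and socket as the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # removing the target mid-rebuild: raid_bdev1 stays online but drops to 1/2 base bdevs
    $rpc bdev_passthru_delete spare
    # re-creating it on top of spare_delay makes bdev_raid re-examine the superblock,
    # re-add the bdev and start a fresh rebuild (the 'Re-adding bdev spare' notice above)
    $rpc bdev_passthru_create -b spare_delay -p spare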
11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:15.750 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:16.008 [2024-07-13 11:39:50.690935] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:16.008 [2024-07-13 11:39:50.693513] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:16.008 [2024-07-13 11:39:50.693585] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.008 [2024-07-13 11:39:50.693603] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:16.008 [2024-07-13 11:39:50.693611] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.008 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.267 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.267 "name": "raid_bdev1", 00:28:16.267 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:16.267 "strip_size_kb": 0, 00:28:16.267 "state": "online", 00:28:16.267 "raid_level": "raid1", 00:28:16.267 "superblock": true, 00:28:16.267 "num_base_bdevs": 2, 00:28:16.267 "num_base_bdevs_discovered": 1, 00:28:16.267 "num_base_bdevs_operational": 1, 00:28:16.267 "base_bdevs_list": [ 00:28:16.267 { 00:28:16.267 "name": null, 00:28:16.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.267 "is_configured": false, 00:28:16.267 "data_offset": 2048, 00:28:16.267 "data_size": 63488 00:28:16.267 }, 00:28:16.267 { 00:28:16.267 "name": "BaseBdev2", 00:28:16.267 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:16.267 "is_configured": true, 00:28:16.267 "data_offset": 2048, 00:28:16.267 "data_size": 63488 00:28:16.267 } 00:28:16.267 ] 00:28:16.267 }' 00:28:16.267 11:39:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.267 11:39:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.201 11:39:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:17.201 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:17.201 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:17.201 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:17.201 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:17.201 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.201 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.201 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:17.201 "name": "raid_bdev1", 00:28:17.201 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:17.202 "strip_size_kb": 0, 00:28:17.202 "state": "online", 00:28:17.202 "raid_level": "raid1", 00:28:17.202 "superblock": true, 00:28:17.202 "num_base_bdevs": 2, 00:28:17.202 "num_base_bdevs_discovered": 1, 00:28:17.202 "num_base_bdevs_operational": 1, 00:28:17.202 "base_bdevs_list": [ 00:28:17.202 { 00:28:17.202 "name": null, 00:28:17.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:17.202 "is_configured": false, 00:28:17.202 "data_offset": 2048, 00:28:17.202 "data_size": 63488 00:28:17.202 }, 00:28:17.202 { 00:28:17.202 "name": "BaseBdev2", 00:28:17.202 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:17.202 "is_configured": true, 00:28:17.202 "data_offset": 2048, 00:28:17.202 "data_size": 63488 00:28:17.202 } 00:28:17.202 ] 00:28:17.202 }' 00:28:17.202 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:17.202 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:17.202 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:17.202 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:17.202 11:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:17.460 11:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:17.718 [2024-07-13 11:39:52.427978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:17.718 [2024-07-13 11:39:52.428044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:17.718 [2024-07-13 11:39:52.428090] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:17.718 [2024-07-13 11:39:52.428115] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:17.718 [2024-07-13 11:39:52.428572] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:17.718 [2024-07-13 11:39:52.428612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:17.718 [2024-07-13 11:39:52.428717] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:17.718 [2024-07-13 11:39:52.428734] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:17.718 [2024-07-13 11:39:52.428741] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:17.718 BaseBdev1 00:28:17.718 11:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:19.095 "name": "raid_bdev1", 00:28:19.095 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:19.095 "strip_size_kb": 0, 00:28:19.095 "state": "online", 00:28:19.095 "raid_level": "raid1", 00:28:19.095 "superblock": true, 00:28:19.095 "num_base_bdevs": 2, 00:28:19.095 "num_base_bdevs_discovered": 1, 00:28:19.095 "num_base_bdevs_operational": 1, 00:28:19.095 "base_bdevs_list": [ 00:28:19.095 { 00:28:19.095 "name": null, 00:28:19.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.095 "is_configured": false, 00:28:19.095 "data_offset": 2048, 00:28:19.095 "data_size": 63488 00:28:19.095 }, 00:28:19.095 { 00:28:19.095 "name": "BaseBdev2", 00:28:19.095 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:19.095 "is_configured": true, 00:28:19.095 "data_offset": 2048, 00:28:19.095 "data_size": 63488 00:28:19.095 } 00:28:19.095 ] 00:28:19.095 }' 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:19.095 11:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.661 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:19.661 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:19.661 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:19.661 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:19.661 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:19.661 11:39:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.661 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:19.920 "name": "raid_bdev1", 00:28:19.920 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:19.920 "strip_size_kb": 0, 00:28:19.920 "state": "online", 00:28:19.920 "raid_level": "raid1", 00:28:19.920 "superblock": true, 00:28:19.920 "num_base_bdevs": 2, 00:28:19.920 "num_base_bdevs_discovered": 1, 00:28:19.920 "num_base_bdevs_operational": 1, 00:28:19.920 "base_bdevs_list": [ 00:28:19.920 { 00:28:19.920 "name": null, 00:28:19.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:19.920 "is_configured": false, 00:28:19.920 "data_offset": 2048, 00:28:19.920 "data_size": 63488 00:28:19.920 }, 00:28:19.920 { 00:28:19.920 "name": "BaseBdev2", 00:28:19.920 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:19.920 "is_configured": true, 00:28:19.920 "data_offset": 2048, 00:28:19.920 "data_size": 63488 00:28:19.920 } 00:28:19.920 ] 00:28:19.920 }' 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:19.920 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:20.178 [2024-07-13 11:39:54.868732] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:20.178 [2024-07-13 11:39:54.868834] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:20.178 [2024-07-13 11:39:54.868848] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:20.178 request: 00:28:20.178 { 00:28:20.179 "base_bdev": "BaseBdev1", 00:28:20.179 "raid_bdev": "raid_bdev1", 00:28:20.179 "method": "bdev_raid_add_base_bdev", 00:28:20.179 "req_id": 1 00:28:20.179 } 00:28:20.179 Got JSON-RPC error response 00:28:20.179 response: 00:28:20.179 { 00:28:20.179 "code": -22, 00:28:20.179 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:20.179 } 00:28:20.179 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:28:20.179 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:20.179 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:20.179 11:39:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:20.179 11:39:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.554 11:39:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.554 11:39:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:21.554 "name": "raid_bdev1", 00:28:21.554 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:21.554 "strip_size_kb": 0, 00:28:21.554 "state": "online", 00:28:21.554 "raid_level": "raid1", 00:28:21.554 "superblock": true, 00:28:21.554 "num_base_bdevs": 2, 00:28:21.554 "num_base_bdevs_discovered": 1, 00:28:21.554 "num_base_bdevs_operational": 1, 00:28:21.554 "base_bdevs_list": [ 00:28:21.554 { 00:28:21.554 "name": null, 00:28:21.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.554 "is_configured": false, 00:28:21.554 "data_offset": 2048, 00:28:21.554 "data_size": 63488 00:28:21.554 }, 00:28:21.554 { 00:28:21.554 "name": "BaseBdev2", 00:28:21.554 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 
00:28:21.554 "is_configured": true, 00:28:21.554 "data_offset": 2048, 00:28:21.554 "data_size": 63488 00:28:21.554 } 00:28:21.554 ] 00:28:21.554 }' 00:28:21.554 11:39:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:21.554 11:39:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.120 11:39:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:22.120 11:39:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:22.120 11:39:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:22.120 11:39:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:22.120 11:39:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:22.120 11:39:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.120 11:39:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.378 11:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:22.378 "name": "raid_bdev1", 00:28:22.378 "uuid": "b796284a-058a-4fd5-817c-c7f20b18680c", 00:28:22.378 "strip_size_kb": 0, 00:28:22.378 "state": "online", 00:28:22.378 "raid_level": "raid1", 00:28:22.378 "superblock": true, 00:28:22.378 "num_base_bdevs": 2, 00:28:22.378 "num_base_bdevs_discovered": 1, 00:28:22.378 "num_base_bdevs_operational": 1, 00:28:22.378 "base_bdevs_list": [ 00:28:22.378 { 00:28:22.378 "name": null, 00:28:22.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.378 "is_configured": false, 00:28:22.378 "data_offset": 2048, 00:28:22.378 "data_size": 63488 00:28:22.378 }, 00:28:22.378 { 00:28:22.378 "name": "BaseBdev2", 00:28:22.378 "uuid": "d49aed9e-a8be-5be7-93e7-a49a467743f9", 00:28:22.378 "is_configured": true, 00:28:22.378 "data_offset": 2048, 00:28:22.378 "data_size": 63488 00:28:22.378 } 00:28:22.378 ] 00:28:22.378 }' 00:28:22.378 11:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 145352 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 145352 ']' 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 145352 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145352 00:28:22.637 killing process with pid 145352 00:28:22.637 Received shutdown signal, test time was about 60.000000 seconds 00:28:22.637 00:28:22.637 Latency(us) 00:28:22.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.637 
=================================================================================================================== 00:28:22.637 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145352' 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 145352 00:28:22.637 11:39:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 145352 00:28:22.637 [2024-07-13 11:39:57.233573] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:22.637 [2024-07-13 11:39:57.233668] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:22.637 [2024-07-13 11:39:57.233709] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:22.637 [2024-07-13 11:39:57.233726] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:28:22.895 [2024-07-13 11:39:57.432424] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:23.833 ************************************ 00:28:23.833 END TEST raid_rebuild_test_sb 00:28:23.833 ************************************ 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:28:23.833 00:28:23.833 real 0m36.954s 00:28:23.833 user 0m55.465s 00:28:23.833 sys 0m5.117s 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:23.833 11:39:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:23.833 11:39:58 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:28:23.833 11:39:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:23.833 11:39:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:23.833 11:39:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:23.833 ************************************ 00:28:23.833 START TEST raid_rebuild_test_io 00:28:23.833 ************************************ 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= 
num_base_bdevs )) 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=146366 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 146366 /var/tmp/spdk-raid.sock 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 146366 ']' 00:28:23.833 11:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:23.834 11:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:23.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:23.834 11:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:23.834 11:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:23.834 11:39:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:23.834 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:23.834 Zero copy mechanism will not be used. 00:28:23.834 [2024-07-13 11:39:58.579826] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
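For reference, the background-I/O fixture for raid_rebuild_test_io is started roughly as below. Paths and flags are copied from the trace; waitforlisten is the autotest_common.sh helper invoked above:

    # -z holds the workload until perform_tests is issued over the RPC socket later in the trace
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock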
00:28:23.834 [2024-07-13 11:39:58.579980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146366 ] 00:28:24.093 [2024-07-13 11:39:58.736479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.352 [2024-07-13 11:39:58.948286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.611 [2024-07-13 11:39:59.134611] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:24.869 11:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:24.870 11:39:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:28:24.870 11:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:24.870 11:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:25.129 BaseBdev1_malloc 00:28:25.129 11:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:25.387 [2024-07-13 11:39:59.944140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:25.387 [2024-07-13 11:39:59.944260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.387 [2024-07-13 11:39:59.944301] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:28:25.387 [2024-07-13 11:39:59.944323] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.387 [2024-07-13 11:39:59.946701] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.387 [2024-07-13 11:39:59.946743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:25.387 BaseBdev1 00:28:25.387 11:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:25.387 11:39:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:25.646 BaseBdev2_malloc 00:28:25.646 11:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:25.904 [2024-07-13 11:40:00.426734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:25.904 [2024-07-13 11:40:00.426825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.904 [2024-07-13 11:40:00.426877] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:28:25.904 [2024-07-13 11:40:00.426902] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.904 [2024-07-13 11:40:00.429059] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.904 [2024-07-13 11:40:00.429103] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:25.904 BaseBdev2 00:28:25.904 11:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:25.904 spare_malloc 00:28:25.904 11:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:26.163 spare_delay 00:28:26.163 11:40:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:26.421 [2024-07-13 11:40:01.015490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:26.421 [2024-07-13 11:40:01.015569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:26.421 [2024-07-13 11:40:01.015604] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:26.421 [2024-07-13 11:40:01.015631] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:26.421 [2024-07-13 11:40:01.017810] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:26.421 [2024-07-13 11:40:01.017858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:26.421 spare 00:28:26.421 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:26.679 [2024-07-13 11:40:01.211568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:26.679 [2024-07-13 11:40:01.213487] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:26.679 [2024-07-13 11:40:01.213582] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:28:26.679 [2024-07-13 11:40:01.213595] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:26.679 [2024-07-13 11:40:01.213715] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:28:26.679 [2024-07-13 11:40:01.214046] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:28:26.679 [2024-07-13 11:40:01.214067] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:28:26.679 [2024-07-13 11:40:01.214199] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.679 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.938 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.938 "name": "raid_bdev1", 00:28:26.938 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:26.938 "strip_size_kb": 0, 00:28:26.938 "state": "online", 00:28:26.938 "raid_level": "raid1", 00:28:26.938 "superblock": false, 00:28:26.938 "num_base_bdevs": 2, 00:28:26.938 "num_base_bdevs_discovered": 2, 00:28:26.938 "num_base_bdevs_operational": 2, 00:28:26.938 "base_bdevs_list": [ 00:28:26.938 { 00:28:26.938 "name": "BaseBdev1", 00:28:26.938 "uuid": "7ccc32dc-12b9-5017-86e1-da339f3cc781", 00:28:26.938 "is_configured": true, 00:28:26.938 "data_offset": 0, 00:28:26.938 "data_size": 65536 00:28:26.938 }, 00:28:26.938 { 00:28:26.938 "name": "BaseBdev2", 00:28:26.938 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:26.938 "is_configured": true, 00:28:26.938 "data_offset": 0, 00:28:26.938 "data_size": 65536 00:28:26.938 } 00:28:26.938 ] 00:28:26.938 }' 00:28:26.938 11:40:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.938 11:40:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:27.503 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:27.503 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:27.761 [2024-07-13 11:40:02.287982] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:27.761 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:28:27.761 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.761 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:27.761 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:28:27.761 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:28:27.761 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:27.761 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:28.019 [2024-07-13 11:40:02.603045] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:28.019 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:28.019 Zero copy mechanism will not be used. 00:28:28.019 Running I/O for 60 seconds... 
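For reference, the bdev stack verified above is assembled by the rpc.py calls traced on the preceding lines; condensed into one sketch (32 MiB malloc bdevs with 512-byte blocks, delay parameters as in the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # two passthru base bdevs on top of malloc bdevs
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
    # the future rebuild target sits behind a delay bdev so rebuild progress stays observable
    $rpc bdev_malloc_create 32 512 -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    # RAID1 without a superblock, matching num_blocks 65536 / data_offset 0 in the JSON above
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1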
00:28:28.019 [2024-07-13 11:40:02.676659] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:28.019 [2024-07-13 11:40:02.682527] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.019 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.277 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:28.277 "name": "raid_bdev1", 00:28:28.277 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:28.277 "strip_size_kb": 0, 00:28:28.277 "state": "online", 00:28:28.277 "raid_level": "raid1", 00:28:28.277 "superblock": false, 00:28:28.277 "num_base_bdevs": 2, 00:28:28.277 "num_base_bdevs_discovered": 1, 00:28:28.277 "num_base_bdevs_operational": 1, 00:28:28.277 "base_bdevs_list": [ 00:28:28.277 { 00:28:28.277 "name": null, 00:28:28.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:28.277 "is_configured": false, 00:28:28.277 "data_offset": 0, 00:28:28.277 "data_size": 65536 00:28:28.277 }, 00:28:28.277 { 00:28:28.277 "name": "BaseBdev2", 00:28:28.277 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:28.277 "is_configured": true, 00:28:28.277 "data_offset": 0, 00:28:28.277 "data_size": 65536 00:28:28.277 } 00:28:28.277 ] 00:28:28.277 }' 00:28:28.277 11:40:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:28.277 11:40:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:29.210 11:40:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:29.210 [2024-07-13 11:40:03.852426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:29.210 11:40:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:29.210 [2024-07-13 11:40:03.904731] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:29.210 [2024-07-13 11:40:03.906666] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:29.469 [2024-07-13 11:40:04.032024] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:29.469 [2024-07-13 11:40:04.032407] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:29.728 [2024-07-13 11:40:04.252321] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:29.728 [2024-07-13 11:40:04.252530] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:29.987 [2024-07-13 11:40:04.582115] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:29.987 [2024-07-13 11:40:04.696276] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:30.246 11:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:30.246 11:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:30.246 11:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:30.246 11:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:30.246 11:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:30.246 11:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.246 11:40:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.505 [2024-07-13 11:40:05.043606] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:30.505 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:30.505 "name": "raid_bdev1", 00:28:30.505 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:30.505 "strip_size_kb": 0, 00:28:30.505 "state": "online", 00:28:30.505 "raid_level": "raid1", 00:28:30.505 "superblock": false, 00:28:30.505 "num_base_bdevs": 2, 00:28:30.505 "num_base_bdevs_discovered": 2, 00:28:30.505 "num_base_bdevs_operational": 2, 00:28:30.505 "process": { 00:28:30.505 "type": "rebuild", 00:28:30.505 "target": "spare", 00:28:30.505 "progress": { 00:28:30.505 "blocks": 14336, 00:28:30.505 "percent": 21 00:28:30.505 } 00:28:30.505 }, 00:28:30.505 "base_bdevs_list": [ 00:28:30.505 { 00:28:30.505 "name": "spare", 00:28:30.505 "uuid": "3765556d-7fe5-5a12-88c5-4907c0468b2c", 00:28:30.505 "is_configured": true, 00:28:30.505 "data_offset": 0, 00:28:30.505 "data_size": 65536 00:28:30.505 }, 00:28:30.505 { 00:28:30.505 "name": "BaseBdev2", 00:28:30.505 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:30.505 "is_configured": true, 00:28:30.505 "data_offset": 0, 00:28:30.505 "data_size": 65536 00:28:30.505 } 00:28:30.505 ] 00:28:30.505 }' 00:28:30.505 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:30.505 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:30.505 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:30.505 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == 
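For reference, the hot-removal and spare attach exercised above run against the live bdevperf workload; a sketch with the ordering simplified, using the same paths as the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # kick off the 60 s randrw workload that -z was holding back
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    # drop BaseBdev1 under I/O: raid_bdev1 stays online, degraded to 1/2 base bdevs
    $rpc bdev_raid_remove_base_bdev BaseBdev1
    # attach the delayed spare; bdev_raid starts the rebuild reported in the progress JSON below
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare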
\s\p\a\r\e ]] 00:28:30.505 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:30.764 [2024-07-13 11:40:05.263221] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:30.764 [2024-07-13 11:40:05.473833] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:31.022 [2024-07-13 11:40:05.611424] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:31.022 [2024-07-13 11:40:05.613124] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.022 [2024-07-13 11:40:05.613156] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:31.022 [2024-07-13 11:40:05.613166] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:31.022 [2024-07-13 11:40:05.646163] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.022 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.281 11:40:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:31.281 "name": "raid_bdev1", 00:28:31.281 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:31.281 "strip_size_kb": 0, 00:28:31.281 "state": "online", 00:28:31.281 "raid_level": "raid1", 00:28:31.281 "superblock": false, 00:28:31.281 "num_base_bdevs": 2, 00:28:31.281 "num_base_bdevs_discovered": 1, 00:28:31.281 "num_base_bdevs_operational": 1, 00:28:31.281 "base_bdevs_list": [ 00:28:31.281 { 00:28:31.281 "name": null, 00:28:31.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.281 "is_configured": false, 00:28:31.281 "data_offset": 0, 00:28:31.281 "data_size": 65536 00:28:31.281 }, 00:28:31.281 { 00:28:31.281 "name": "BaseBdev2", 00:28:31.281 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:31.281 "is_configured": true, 00:28:31.281 "data_offset": 0, 00:28:31.281 "data_size": 65536 00:28:31.281 } 00:28:31.281 ] 00:28:31.281 }' 00:28:31.281 11:40:05 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:31.281 11:40:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:31.848 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:31.848 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:31.848 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:31.848 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:31.848 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:31.848 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.848 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.105 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:32.105 "name": "raid_bdev1", 00:28:32.105 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:32.105 "strip_size_kb": 0, 00:28:32.105 "state": "online", 00:28:32.105 "raid_level": "raid1", 00:28:32.105 "superblock": false, 00:28:32.105 "num_base_bdevs": 2, 00:28:32.105 "num_base_bdevs_discovered": 1, 00:28:32.105 "num_base_bdevs_operational": 1, 00:28:32.105 "base_bdevs_list": [ 00:28:32.105 { 00:28:32.105 "name": null, 00:28:32.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:32.105 "is_configured": false, 00:28:32.105 "data_offset": 0, 00:28:32.105 "data_size": 65536 00:28:32.105 }, 00:28:32.105 { 00:28:32.105 "name": "BaseBdev2", 00:28:32.105 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:32.105 "is_configured": true, 00:28:32.106 "data_offset": 0, 00:28:32.106 "data_size": 65536 00:28:32.106 } 00:28:32.106 ] 00:28:32.106 }' 00:28:32.106 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:32.106 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:32.106 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:32.362 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:32.362 11:40:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:32.619 [2024-07-13 11:40:07.127633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:32.620 [2024-07-13 11:40:07.171709] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:32.620 [2024-07-13 11:40:07.173653] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:32.620 11:40:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:32.620 [2024-07-13 11:40:07.287134] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:32.620 [2024-07-13 11:40:07.287503] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:32.877 [2024-07-13 11:40:07.507415] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:28:32.877 [2024-07-13 11:40:07.507630] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:33.135 [2024-07-13 11:40:07.754520] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:33.135 [2024-07-13 11:40:07.754907] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:33.450 [2024-07-13 11:40:07.969233] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:33.450 [2024-07-13 11:40:07.969446] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:33.730 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:33.730 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:33.730 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:33.730 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:33.730 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:33.730 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:33.730 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.730 [2024-07-13 11:40:08.326713] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:33.730 [2024-07-13 11:40:08.327159] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:34.002 "name": "raid_bdev1", 00:28:34.002 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:34.002 "strip_size_kb": 0, 00:28:34.002 "state": "online", 00:28:34.002 "raid_level": "raid1", 00:28:34.002 "superblock": false, 00:28:34.002 "num_base_bdevs": 2, 00:28:34.002 "num_base_bdevs_discovered": 2, 00:28:34.002 "num_base_bdevs_operational": 2, 00:28:34.002 "process": { 00:28:34.002 "type": "rebuild", 00:28:34.002 "target": "spare", 00:28:34.002 "progress": { 00:28:34.002 "blocks": 14336, 00:28:34.002 "percent": 21 00:28:34.002 } 00:28:34.002 }, 00:28:34.002 "base_bdevs_list": [ 00:28:34.002 { 00:28:34.002 "name": "spare", 00:28:34.002 "uuid": "3765556d-7fe5-5a12-88c5-4907c0468b2c", 00:28:34.002 "is_configured": true, 00:28:34.002 "data_offset": 0, 00:28:34.002 "data_size": 65536 00:28:34.002 }, 00:28:34.002 { 00:28:34.002 "name": "BaseBdev2", 00:28:34.002 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:34.002 "is_configured": true, 00:28:34.002 "data_offset": 0, 00:28:34.002 "data_size": 65536 00:28:34.002 } 00:28:34.002 ] 00:28:34.002 }' 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:34.002 11:40:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=859 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:34.002 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:34.003 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:34.003 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:34.003 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:34.261 [2024-07-13 11:40:08.774274] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:34.261 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:34.261 "name": "raid_bdev1", 00:28:34.261 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:34.261 "strip_size_kb": 0, 00:28:34.261 "state": "online", 00:28:34.261 "raid_level": "raid1", 00:28:34.261 "superblock": false, 00:28:34.261 "num_base_bdevs": 2, 00:28:34.261 "num_base_bdevs_discovered": 2, 00:28:34.261 "num_base_bdevs_operational": 2, 00:28:34.261 "process": { 00:28:34.261 "type": "rebuild", 00:28:34.261 "target": "spare", 00:28:34.261 "progress": { 00:28:34.261 "blocks": 18432, 00:28:34.261 "percent": 28 00:28:34.261 } 00:28:34.261 }, 00:28:34.261 "base_bdevs_list": [ 00:28:34.261 { 00:28:34.261 "name": "spare", 00:28:34.261 "uuid": "3765556d-7fe5-5a12-88c5-4907c0468b2c", 00:28:34.261 "is_configured": true, 00:28:34.261 "data_offset": 0, 00:28:34.261 "data_size": 65536 00:28:34.261 }, 00:28:34.261 { 00:28:34.261 "name": "BaseBdev2", 00:28:34.261 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:34.261 "is_configured": true, 00:28:34.261 "data_offset": 0, 00:28:34.261 "data_size": 65536 00:28:34.261 } 00:28:34.261 ] 00:28:34.261 }' 00:28:34.261 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:34.261 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:34.261 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:34.261 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:34.261 11:40:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:34.826 [2024-07-13 11:40:09.526188] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:28:35.085 [2024-07-13 11:40:09.740286] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:28:35.343 11:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:35.343 11:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:35.343 11:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:35.343 11:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:35.343 11:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:35.343 11:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:35.343 11:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.343 11:40:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.343 [2024-07-13 11:40:10.062348] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:28:35.601 11:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:35.601 "name": "raid_bdev1", 00:28:35.601 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:35.601 "strip_size_kb": 0, 00:28:35.601 "state": "online", 00:28:35.601 "raid_level": "raid1", 00:28:35.601 "superblock": false, 00:28:35.601 "num_base_bdevs": 2, 00:28:35.601 "num_base_bdevs_discovered": 2, 00:28:35.601 "num_base_bdevs_operational": 2, 00:28:35.601 "process": { 00:28:35.601 "type": "rebuild", 00:28:35.601 "target": "spare", 00:28:35.601 "progress": { 00:28:35.601 "blocks": 38912, 00:28:35.601 "percent": 59 00:28:35.601 } 00:28:35.601 }, 00:28:35.601 "base_bdevs_list": [ 00:28:35.601 { 00:28:35.601 "name": "spare", 00:28:35.601 "uuid": "3765556d-7fe5-5a12-88c5-4907c0468b2c", 00:28:35.601 "is_configured": true, 00:28:35.601 "data_offset": 0, 00:28:35.601 "data_size": 65536 00:28:35.601 }, 00:28:35.601 { 00:28:35.601 "name": "BaseBdev2", 00:28:35.601 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:35.601 "is_configured": true, 00:28:35.601 "data_offset": 0, 00:28:35.601 "data_size": 65536 00:28:35.601 } 00:28:35.601 ] 00:28:35.601 }' 00:28:35.601 11:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:35.601 11:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:35.601 11:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:35.601 11:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:35.601 11:40:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:35.860 [2024-07-13 11:40:10.384482] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:28:36.119 [2024-07-13 11:40:10.825150] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:28:36.377 [2024-07-13 11:40:11.045910] bdev_raid.c: 839:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:28:36.636 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:36.636 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:36.636 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:36.636 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:36.636 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:36.636 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:36.636 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.636 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:36.894 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:36.894 "name": "raid_bdev1", 00:28:36.894 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:36.895 "strip_size_kb": 0, 00:28:36.895 "state": "online", 00:28:36.895 "raid_level": "raid1", 00:28:36.895 "superblock": false, 00:28:36.895 "num_base_bdevs": 2, 00:28:36.895 "num_base_bdevs_discovered": 2, 00:28:36.895 "num_base_bdevs_operational": 2, 00:28:36.895 "process": { 00:28:36.895 "type": "rebuild", 00:28:36.895 "target": "spare", 00:28:36.895 "progress": { 00:28:36.895 "blocks": 61440, 00:28:36.895 "percent": 93 00:28:36.895 } 00:28:36.895 }, 00:28:36.895 "base_bdevs_list": [ 00:28:36.895 { 00:28:36.895 "name": "spare", 00:28:36.895 "uuid": "3765556d-7fe5-5a12-88c5-4907c0468b2c", 00:28:36.895 "is_configured": true, 00:28:36.895 "data_offset": 0, 00:28:36.895 "data_size": 65536 00:28:36.895 }, 00:28:36.895 { 00:28:36.895 "name": "BaseBdev2", 00:28:36.895 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:36.895 "is_configured": true, 00:28:36.895 "data_offset": 0, 00:28:36.895 "data_size": 65536 00:28:36.895 } 00:28:36.895 ] 00:28:36.895 }' 00:28:36.895 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:36.895 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:36.895 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:36.895 [2024-07-13 11:40:11.585243] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:36.895 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:36.895 11:40:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:37.154 [2024-07-13 11:40:11.690976] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:37.154 [2024-07-13 11:40:11.692701] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.090 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:38.090 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:38.090 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.090 11:40:12 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:38.090 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:38.090 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.090 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.090 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.347 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.347 "name": "raid_bdev1", 00:28:38.347 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:38.347 "strip_size_kb": 0, 00:28:38.347 "state": "online", 00:28:38.347 "raid_level": "raid1", 00:28:38.347 "superblock": false, 00:28:38.347 "num_base_bdevs": 2, 00:28:38.347 "num_base_bdevs_discovered": 2, 00:28:38.347 "num_base_bdevs_operational": 2, 00:28:38.347 "base_bdevs_list": [ 00:28:38.347 { 00:28:38.347 "name": "spare", 00:28:38.348 "uuid": "3765556d-7fe5-5a12-88c5-4907c0468b2c", 00:28:38.348 "is_configured": true, 00:28:38.348 "data_offset": 0, 00:28:38.348 "data_size": 65536 00:28:38.348 }, 00:28:38.348 { 00:28:38.348 "name": "BaseBdev2", 00:28:38.348 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:38.348 "is_configured": true, 00:28:38.348 "data_offset": 0, 00:28:38.348 "data_size": 65536 00:28:38.348 } 00:28:38.348 ] 00:28:38.348 }' 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.348 11:40:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:38.605 "name": "raid_bdev1", 00:28:38.605 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:38.605 "strip_size_kb": 0, 00:28:38.605 "state": "online", 00:28:38.605 "raid_level": "raid1", 00:28:38.605 "superblock": false, 00:28:38.605 "num_base_bdevs": 2, 00:28:38.605 "num_base_bdevs_discovered": 2, 00:28:38.605 "num_base_bdevs_operational": 2, 00:28:38.605 "base_bdevs_list": [ 00:28:38.605 { 00:28:38.605 "name": "spare", 00:28:38.605 "uuid": 
"3765556d-7fe5-5a12-88c5-4907c0468b2c", 00:28:38.605 "is_configured": true, 00:28:38.605 "data_offset": 0, 00:28:38.605 "data_size": 65536 00:28:38.605 }, 00:28:38.605 { 00:28:38.605 "name": "BaseBdev2", 00:28:38.605 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:38.605 "is_configured": true, 00:28:38.605 "data_offset": 0, 00:28:38.605 "data_size": 65536 00:28:38.605 } 00:28:38.605 ] 00:28:38.605 }' 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.605 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.863 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.863 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.863 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.863 "name": "raid_bdev1", 00:28:38.863 "uuid": "3455fb56-3179-45a7-b818-ca16c48702a5", 00:28:38.863 "strip_size_kb": 0, 00:28:38.863 "state": "online", 00:28:38.863 "raid_level": "raid1", 00:28:38.863 "superblock": false, 00:28:38.863 "num_base_bdevs": 2, 00:28:38.863 "num_base_bdevs_discovered": 2, 00:28:38.863 "num_base_bdevs_operational": 2, 00:28:38.863 "base_bdevs_list": [ 00:28:38.863 { 00:28:38.863 "name": "spare", 00:28:38.863 "uuid": "3765556d-7fe5-5a12-88c5-4907c0468b2c", 00:28:38.863 "is_configured": true, 00:28:38.863 "data_offset": 0, 00:28:38.863 "data_size": 65536 00:28:38.863 }, 00:28:38.863 { 00:28:38.863 "name": "BaseBdev2", 00:28:38.863 "uuid": "8086e7f8-5d61-5876-8e17-3fa0e320c639", 00:28:38.863 "is_configured": true, 00:28:38.863 "data_offset": 0, 00:28:38.863 "data_size": 65536 00:28:38.863 } 00:28:38.863 ] 00:28:38.863 }' 00:28:38.863 11:40:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.863 11:40:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:39.797 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:39.797 [2024-07-13 11:40:14.521201] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:39.797 [2024-07-13 11:40:14.521262] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:40.054 00:28:40.054 Latency(us) 00:28:40.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.054 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:40.054 raid_bdev1 : 11.99 120.62 361.87 0.00 0.00 11561.48 303.48 112483.61 00:28:40.054 =================================================================================================================== 00:28:40.054 Total : 120.62 361.87 0.00 0.00 11561.48 303.48 112483.61 00:28:40.054 [2024-07-13 11:40:14.607974] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:40.054 [2024-07-13 11:40:14.608015] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:40.054 [2024-07-13 11:40:14.608105] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:40.055 [2024-07-13 11:40:14.608119] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:28:40.055 0 00:28:40.055 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.055 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:28:40.312 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:40.312 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:40.312 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:40.312 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:40.312 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:40.312 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:40.312 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:40.312 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:40.312 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:40.313 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:40.313 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:40.313 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.313 11:40:14 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:40.570 /dev/nbd0 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:40.570 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:40.571 1+0 records in 00:28:40.571 1+0 records out 00:28:40.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376429 s, 10.9 MB/s 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:40.571 /dev/nbd1 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:40.571 1+0 records in 00:28:40.571 1+0 records out 00:28:40.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000835227 s, 4.9 MB/s 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.571 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:40.828 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:40.828 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:40.828 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:40.828 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:40.828 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:40.828 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:40.828 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:41.085 11:40:15 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:41.085 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:41.343 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:41.343 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:41.343 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:41.343 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:41.343 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:41.343 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:41.343 11:40:15 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:41.343 11:40:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:41.343 11:40:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:41.343 11:40:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:41.601 11:40:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 146366 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 146366 ']' 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 146366 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146366 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146366' 00:28:41.602 killing process with pid 146366 00:28:41.602 Received shutdown signal, test time was about 13.511338 seconds 00:28:41.602 
00:28:41.602 Latency(us) 00:28:41.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.602 =================================================================================================================== 00:28:41.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 146366 00:28:41.602 11:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 146366 00:28:41.602 [2024-07-13 11:40:16.116684] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:41.602 [2024-07-13 11:40:16.270597] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:42.986 ************************************ 00:28:42.986 END TEST raid_rebuild_test_io 00:28:42.986 ************************************ 00:28:42.986 11:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:28:42.986 00:28:42.986 real 0m18.814s 00:28:42.986 user 0m28.939s 00:28:42.986 sys 0m1.816s 00:28:42.986 11:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:42.986 11:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:42.986 11:40:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:42.986 11:40:17 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:28:42.986 11:40:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:42.986 11:40:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.986 11:40:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:42.986 ************************************ 00:28:42.986 START TEST raid_rebuild_test_sb_io 00:28:42.986 ************************************ 00:28:42.986 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:28:42.986 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:42.986 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:42.986 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:42.986 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:42.987 11:40:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=146872 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 146872 /var/tmp/spdk-raid.sock 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 146872 ']' 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:42.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:42.987 11:40:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:42.987 [2024-07-13 11:40:17.458815] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:28:42.987 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:42.987 Zero copy mechanism will not be used. 
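A condensed sketch of the harness startup traced above, using only the binary path, socket, flags, and helper names that appear in the trace (illustrative restatement, not additional recorded output):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc_sock=/var/tmp/spdk-raid.sock
    # same invocation as bdev_raid.sh@595 above, run in the background
    "$bdevperf" -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # waitforlisten (helper from common/autotest_common.sh) blocks until the RPC socket is up
    waitforlisten "$raid_pid" "$rpc_sock"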
00:28:42.987 [2024-07-13 11:40:17.459035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146872 ] 00:28:42.987 [2024-07-13 11:40:17.626294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.245 [2024-07-13 11:40:17.809245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.245 [2024-07-13 11:40:17.994441] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:43.812 11:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:43.812 11:40:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:28:43.812 11:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:43.812 11:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:44.070 BaseBdev1_malloc 00:28:44.070 11:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:44.329 [2024-07-13 11:40:18.858105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:44.329 [2024-07-13 11:40:18.858350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:44.329 [2024-07-13 11:40:18.858502] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:28:44.329 [2024-07-13 11:40:18.858529] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:44.329 [2024-07-13 11:40:18.863647] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:44.329 [2024-07-13 11:40:18.863705] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:44.329 BaseBdev1 00:28:44.329 11:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:44.329 11:40:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:44.588 BaseBdev2_malloc 00:28:44.588 11:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:44.846 [2024-07-13 11:40:19.414705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:44.846 [2024-07-13 11:40:19.414798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:44.846 [2024-07-13 11:40:19.414836] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:28:44.846 [2024-07-13 11:40:19.414869] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:44.846 [2024-07-13 11:40:19.417031] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:44.846 [2024-07-13 11:40:19.417077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:44.846 BaseBdev2 00:28:44.846 11:40:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:45.104 spare_malloc 00:28:45.105 11:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:45.363 spare_delay 00:28:45.363 11:40:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:45.622 [2024-07-13 11:40:20.163429] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:45.622 [2024-07-13 11:40:20.163515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:45.622 [2024-07-13 11:40:20.163548] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:45.622 [2024-07-13 11:40:20.163574] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:45.622 [2024-07-13 11:40:20.165807] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:45.622 [2024-07-13 11:40:20.165863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:45.622 spare 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:45.622 [2024-07-13 11:40:20.351513] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:45.622 [2024-07-13 11:40:20.353377] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:45.622 [2024-07-13 11:40:20.353569] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:28:45.622 [2024-07-13 11:40:20.353583] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:45.622 [2024-07-13 11:40:20.353706] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:28:45.622 [2024-07-13 11:40:20.354047] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:28:45.622 [2024-07-13 11:40:20.354069] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:28:45.622 [2024-07-13 11:40:20.354204] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.622 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.881 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:45.881 "name": "raid_bdev1", 00:28:45.881 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:45.881 "strip_size_kb": 0, 00:28:45.881 "state": "online", 00:28:45.881 "raid_level": "raid1", 00:28:45.881 "superblock": true, 00:28:45.881 "num_base_bdevs": 2, 00:28:45.881 "num_base_bdevs_discovered": 2, 00:28:45.881 "num_base_bdevs_operational": 2, 00:28:45.881 "base_bdevs_list": [ 00:28:45.881 { 00:28:45.881 "name": "BaseBdev1", 00:28:45.881 "uuid": "0f5c90a8-6f82-532e-9dba-2a5ce925c357", 00:28:45.881 "is_configured": true, 00:28:45.881 "data_offset": 2048, 00:28:45.881 "data_size": 63488 00:28:45.881 }, 00:28:45.881 { 00:28:45.881 "name": "BaseBdev2", 00:28:45.881 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:45.881 "is_configured": true, 00:28:45.881 "data_offset": 2048, 00:28:45.881 "data_size": 63488 00:28:45.881 } 00:28:45.881 ] 00:28:45.881 }' 00:28:45.881 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:45.881 11:40:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:46.448 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:46.448 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:46.706 [2024-07-13 11:40:21.456009] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:46.965 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:28:46.965 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.965 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:46.965 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:28:46.965 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:28:46.965 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:46.965 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:47.224 [2024-07-13 11:40:21.759015] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:47.224 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:47.224 Zero copy mechanism will not be used. 00:28:47.224 Running I/O for 60 seconds... 
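The repeated state checks in this test all follow the same RPC-plus-jq pattern over the dedicated socket; roughly (paths, bdev names, and jq filters as they appear in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # fetch the raid bdev description once, then pull individual fields out of it
    info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    echo "$info" | jq -r '.process.type // "none"'    # "rebuild" while a rebuild is in progress, else "none"
    echo "$info" | jq -r '.process.target // "none"'  # "spare" while rebuilding onto the spare, else "none"
    # data_offset check used by the superblock variant (2048 here, 0 in the non-superblock test)
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[].base_bdevs_list[0].data_offset'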
00:28:47.224 [2024-07-13 11:40:21.898077] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:47.224 [2024-07-13 11:40:21.909781] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.224 11:40:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.483 11:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:47.483 "name": "raid_bdev1", 00:28:47.483 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:47.483 "strip_size_kb": 0, 00:28:47.483 "state": "online", 00:28:47.483 "raid_level": "raid1", 00:28:47.483 "superblock": true, 00:28:47.483 "num_base_bdevs": 2, 00:28:47.483 "num_base_bdevs_discovered": 1, 00:28:47.483 "num_base_bdevs_operational": 1, 00:28:47.483 "base_bdevs_list": [ 00:28:47.483 { 00:28:47.483 "name": null, 00:28:47.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:47.483 "is_configured": false, 00:28:47.483 "data_offset": 2048, 00:28:47.483 "data_size": 63488 00:28:47.483 }, 00:28:47.483 { 00:28:47.483 "name": "BaseBdev2", 00:28:47.483 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:47.483 "is_configured": true, 00:28:47.483 "data_offset": 2048, 00:28:47.483 "data_size": 63488 00:28:47.483 } 00:28:47.483 ] 00:28:47.483 }' 00:28:47.483 11:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:47.483 11:40:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.420 11:40:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:48.420 [2024-07-13 11:40:23.085323] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:48.420 11:40:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:48.420 [2024-07-13 11:40:23.143634] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:48.420 [2024-07-13 11:40:23.145586] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:48.677 
[2024-07-13 11:40:23.253787] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:48.677 [2024-07-13 11:40:23.254178] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:48.935 [2024-07-13 11:40:23.468618] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:48.935 [2024-07-13 11:40:23.468790] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:49.193 [2024-07-13 11:40:23.804883] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:49.452 [2024-07-13 11:40:24.024268] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:49.452 [2024-07-13 11:40:24.024461] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:49.452 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:49.452 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:49.452 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:49.452 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:49.452 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:49.452 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.452 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.710 [2024-07-13 11:40:24.283012] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:49.710 [2024-07-13 11:40:24.283282] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:49.710 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:49.710 "name": "raid_bdev1", 00:28:49.710 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:49.710 "strip_size_kb": 0, 00:28:49.710 "state": "online", 00:28:49.710 "raid_level": "raid1", 00:28:49.710 "superblock": true, 00:28:49.710 "num_base_bdevs": 2, 00:28:49.710 "num_base_bdevs_discovered": 2, 00:28:49.710 "num_base_bdevs_operational": 2, 00:28:49.710 "process": { 00:28:49.710 "type": "rebuild", 00:28:49.710 "target": "spare", 00:28:49.710 "progress": { 00:28:49.710 "blocks": 14336, 00:28:49.710 "percent": 22 00:28:49.710 } 00:28:49.710 }, 00:28:49.710 "base_bdevs_list": [ 00:28:49.710 { 00:28:49.710 "name": "spare", 00:28:49.710 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:28:49.710 "is_configured": true, 00:28:49.710 "data_offset": 2048, 00:28:49.710 "data_size": 63488 00:28:49.710 }, 00:28:49.710 { 00:28:49.710 "name": "BaseBdev2", 00:28:49.710 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:49.710 "is_configured": true, 00:28:49.710 "data_offset": 2048, 00:28:49.710 "data_size": 63488 00:28:49.710 } 00:28:49.710 ] 00:28:49.710 }' 00:28:49.710 11:40:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:49.710 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:49.710 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:49.710 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:49.710 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:49.969 [2024-07-13 11:40:24.487053] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:49.969 [2024-07-13 11:40:24.674330] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.227 [2024-07-13 11:40:24.795857] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:50.227 [2024-07-13 11:40:24.809416] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:50.227 [2024-07-13 11:40:24.809449] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.227 [2024-07-13 11:40:24.809459] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:50.227 [2024-07-13 11:40:24.847434] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.227 11:40:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.485 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:50.485 "name": "raid_bdev1", 00:28:50.485 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:50.485 "strip_size_kb": 0, 00:28:50.485 "state": "online", 00:28:50.485 "raid_level": "raid1", 00:28:50.485 "superblock": true, 00:28:50.485 "num_base_bdevs": 2, 00:28:50.485 "num_base_bdevs_discovered": 1, 00:28:50.485 "num_base_bdevs_operational": 1, 00:28:50.485 "base_bdevs_list": [ 00:28:50.485 { 00:28:50.485 "name": null, 
00:28:50.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.485 "is_configured": false, 00:28:50.485 "data_offset": 2048, 00:28:50.485 "data_size": 63488 00:28:50.485 }, 00:28:50.485 { 00:28:50.485 "name": "BaseBdev2", 00:28:50.485 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:50.485 "is_configured": true, 00:28:50.485 "data_offset": 2048, 00:28:50.485 "data_size": 63488 00:28:50.485 } 00:28:50.485 ] 00:28:50.485 }' 00:28:50.485 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:50.486 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:51.051 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:51.051 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:51.051 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:51.051 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:51.051 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:51.051 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.051 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.309 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:51.309 "name": "raid_bdev1", 00:28:51.309 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:51.309 "strip_size_kb": 0, 00:28:51.309 "state": "online", 00:28:51.309 "raid_level": "raid1", 00:28:51.309 "superblock": true, 00:28:51.309 "num_base_bdevs": 2, 00:28:51.309 "num_base_bdevs_discovered": 1, 00:28:51.309 "num_base_bdevs_operational": 1, 00:28:51.309 "base_bdevs_list": [ 00:28:51.309 { 00:28:51.309 "name": null, 00:28:51.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.309 "is_configured": false, 00:28:51.309 "data_offset": 2048, 00:28:51.309 "data_size": 63488 00:28:51.309 }, 00:28:51.309 { 00:28:51.309 "name": "BaseBdev2", 00:28:51.309 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:51.309 "is_configured": true, 00:28:51.309 "data_offset": 2048, 00:28:51.309 "data_size": 63488 00:28:51.309 } 00:28:51.309 ] 00:28:51.309 }' 00:28:51.309 11:40:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:51.309 11:40:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:51.309 11:40:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:51.566 11:40:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:51.566 11:40:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:51.824 [2024-07-13 11:40:26.334124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:51.824 11:40:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:51.824 [2024-07-13 11:40:26.407259] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:51.824 [2024-07-13 11:40:26.408930] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:51.824 [2024-07-13 11:40:26.535148] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:51.824 [2024-07-13 11:40:26.535402] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:52.082 [2024-07-13 11:40:26.743355] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:52.082 [2024-07-13 11:40:26.743528] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:52.340 [2024-07-13 11:40:26.983856] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:52.597 [2024-07-13 11:40:27.104625] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:52.597 [2024-07-13 11:40:27.326746] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:52.855 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.855 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:52.855 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:52.855 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:52.856 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:52.856 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.856 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.856 [2024-07-13 11:40:27.435479] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:52.856 [2024-07-13 11:40:27.435706] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.114 "name": "raid_bdev1", 00:28:53.114 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:53.114 "strip_size_kb": 0, 00:28:53.114 "state": "online", 00:28:53.114 "raid_level": "raid1", 00:28:53.114 "superblock": true, 00:28:53.114 "num_base_bdevs": 2, 00:28:53.114 "num_base_bdevs_discovered": 2, 00:28:53.114 "num_base_bdevs_operational": 2, 00:28:53.114 "process": { 00:28:53.114 "type": "rebuild", 00:28:53.114 "target": "spare", 00:28:53.114 "progress": { 00:28:53.114 "blocks": 18432, 00:28:53.114 "percent": 29 00:28:53.114 } 00:28:53.114 }, 00:28:53.114 "base_bdevs_list": [ 00:28:53.114 { 00:28:53.114 "name": "spare", 00:28:53.114 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:28:53.114 "is_configured": true, 00:28:53.114 "data_offset": 2048, 00:28:53.114 "data_size": 63488 00:28:53.114 }, 00:28:53.114 { 00:28:53.114 "name": "BaseBdev2", 00:28:53.114 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:53.114 "is_configured": true, 00:28:53.114 "data_offset": 2048, 00:28:53.114 
"data_size": 63488 00:28:53.114 } 00:28:53.114 ] 00:28:53.114 }' 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.114 [2024-07-13 11:40:27.669012] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:53.114 [2024-07-13 11:40:27.669359] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:53.114 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=878 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.114 11:40:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.372 [2024-07-13 11:40:27.882400] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:53.372 [2024-07-13 11:40:27.882543] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:53.372 11:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.372 "name": "raid_bdev1", 00:28:53.372 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:53.372 "strip_size_kb": 0, 00:28:53.372 "state": "online", 00:28:53.372 "raid_level": "raid1", 00:28:53.372 "superblock": true, 00:28:53.372 "num_base_bdevs": 2, 00:28:53.372 "num_base_bdevs_discovered": 2, 00:28:53.372 "num_base_bdevs_operational": 2, 00:28:53.372 "process": { 00:28:53.372 "type": "rebuild", 00:28:53.372 "target": "spare", 00:28:53.372 "progress": { 00:28:53.372 "blocks": 22528, 
00:28:53.372 "percent": 35 00:28:53.372 } 00:28:53.372 }, 00:28:53.372 "base_bdevs_list": [ 00:28:53.372 { 00:28:53.372 "name": "spare", 00:28:53.372 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:28:53.372 "is_configured": true, 00:28:53.372 "data_offset": 2048, 00:28:53.372 "data_size": 63488 00:28:53.372 }, 00:28:53.372 { 00:28:53.372 "name": "BaseBdev2", 00:28:53.372 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:53.372 "is_configured": true, 00:28:53.372 "data_offset": 2048, 00:28:53.372 "data_size": 63488 00:28:53.372 } 00:28:53.372 ] 00:28:53.372 }' 00:28:53.372 11:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.372 11:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.372 11:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.372 11:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.372 11:40:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:53.630 [2024-07-13 11:40:28.212726] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:54.565 [2024-07-13 11:40:28.986416] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:54.565 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:54.565 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:54.565 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:54.565 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:54.565 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:54.565 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:54.565 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.565 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.565 [2024-07-13 11:40:29.215187] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:28:54.565 [2024-07-13 11:40:29.215606] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:28:54.823 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:54.823 "name": "raid_bdev1", 00:28:54.823 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:54.823 "strip_size_kb": 0, 00:28:54.823 "state": "online", 00:28:54.823 "raid_level": "raid1", 00:28:54.823 "superblock": true, 00:28:54.823 "num_base_bdevs": 2, 00:28:54.823 "num_base_bdevs_discovered": 2, 00:28:54.823 "num_base_bdevs_operational": 2, 00:28:54.823 "process": { 00:28:54.823 "type": "rebuild", 00:28:54.823 "target": "spare", 00:28:54.823 "progress": { 00:28:54.823 "blocks": 45056, 00:28:54.823 "percent": 70 00:28:54.823 } 00:28:54.823 }, 00:28:54.823 "base_bdevs_list": [ 00:28:54.823 { 00:28:54.823 "name": "spare", 00:28:54.823 
"uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:28:54.823 "is_configured": true, 00:28:54.823 "data_offset": 2048, 00:28:54.823 "data_size": 63488 00:28:54.823 }, 00:28:54.823 { 00:28:54.823 "name": "BaseBdev2", 00:28:54.823 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:54.823 "is_configured": true, 00:28:54.823 "data_offset": 2048, 00:28:54.823 "data_size": 63488 00:28:54.823 } 00:28:54.823 ] 00:28:54.823 }' 00:28:54.823 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:54.823 [2024-07-13 11:40:29.418333] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:28:54.823 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:54.823 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:54.823 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:54.823 11:40:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:55.081 [2024-07-13 11:40:29.821991] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:28:56.016 [2024-07-13 11:40:30.427567] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.016 [2024-07-13 11:40:30.507897] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:56.016 [2024-07-13 11:40:30.515766] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:56.016 "name": "raid_bdev1", 00:28:56.016 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:56.016 "strip_size_kb": 0, 00:28:56.016 "state": "online", 00:28:56.016 "raid_level": "raid1", 00:28:56.016 "superblock": true, 00:28:56.016 "num_base_bdevs": 2, 00:28:56.016 "num_base_bdevs_discovered": 2, 00:28:56.016 "num_base_bdevs_operational": 2, 00:28:56.016 "base_bdevs_list": [ 00:28:56.016 { 00:28:56.016 "name": "spare", 00:28:56.016 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:28:56.016 "is_configured": true, 00:28:56.016 "data_offset": 2048, 00:28:56.016 "data_size": 63488 00:28:56.016 }, 00:28:56.016 { 00:28:56.016 "name": "BaseBdev2", 00:28:56.016 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:56.016 "is_configured": 
true, 00:28:56.016 "data_offset": 2048, 00:28:56.016 "data_size": 63488 00:28:56.016 } 00:28:56.016 ] 00:28:56.016 }' 00:28:56.016 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.274 11:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:56.531 "name": "raid_bdev1", 00:28:56.531 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:56.531 "strip_size_kb": 0, 00:28:56.531 "state": "online", 00:28:56.531 "raid_level": "raid1", 00:28:56.531 "superblock": true, 00:28:56.531 "num_base_bdevs": 2, 00:28:56.531 "num_base_bdevs_discovered": 2, 00:28:56.531 "num_base_bdevs_operational": 2, 00:28:56.531 "base_bdevs_list": [ 00:28:56.531 { 00:28:56.531 "name": "spare", 00:28:56.531 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:28:56.531 "is_configured": true, 00:28:56.531 "data_offset": 2048, 00:28:56.531 "data_size": 63488 00:28:56.531 }, 00:28:56.531 { 00:28:56.531 "name": "BaseBdev2", 00:28:56.531 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:56.531 "is_configured": true, 00:28:56.531 "data_offset": 2048, 00:28:56.531 "data_size": 63488 00:28:56.531 } 00:28:56.531 ] 00:28:56.531 }' 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.531 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.789 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:56.789 "name": "raid_bdev1", 00:28:56.789 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:28:56.789 "strip_size_kb": 0, 00:28:56.789 "state": "online", 00:28:56.789 "raid_level": "raid1", 00:28:56.789 "superblock": true, 00:28:56.789 "num_base_bdevs": 2, 00:28:56.789 "num_base_bdevs_discovered": 2, 00:28:56.789 "num_base_bdevs_operational": 2, 00:28:56.789 "base_bdevs_list": [ 00:28:56.789 { 00:28:56.789 "name": "spare", 00:28:56.789 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:28:56.789 "is_configured": true, 00:28:56.789 "data_offset": 2048, 00:28:56.789 "data_size": 63488 00:28:56.789 }, 00:28:56.789 { 00:28:56.789 "name": "BaseBdev2", 00:28:56.789 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:28:56.789 "is_configured": true, 00:28:56.789 "data_offset": 2048, 00:28:56.789 "data_size": 63488 00:28:56.789 } 00:28:56.789 ] 00:28:56.789 }' 00:28:56.790 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:56.790 11:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.722 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:57.722 [2024-07-13 11:40:32.296944] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:57.722 [2024-07-13 11:40:32.297004] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:57.722 00:28:57.722 Latency(us) 00:28:57.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.722 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:57.722 raid_bdev1 : 10.60 127.98 383.93 0.00 0.00 10062.76 305.34 113436.86 00:28:57.722 =================================================================================================================== 00:28:57.722 Total : 127.98 383.93 0.00 0.00 10062.76 305.34 113436.86 00:28:57.722 [2024-07-13 11:40:32.371678] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:57.722 [2024-07-13 11:40:32.371718] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:57.722 [2024-07-13 11:40:32.371798] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:57.722 [2024-07-13 11:40:32.371810] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:28:57.722 0 
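The "/home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected" message recorded earlier in this trace comes from bash, not from the RAID code: the xtrace immediately before it shows the test expanding to '[' = false ']', i.e. the left-hand operand of a single-bracket test was empty, so the rebuild run continued anyway. A minimal bash sketch of that failure mode and the usual guards follows; the flag name io_flag is hypothetical and only stands in for whatever variable the script tests at that line.

    #!/usr/bin/env bash
    # io_flag is a hypothetical stand-in for the unset/empty variable behind the
    # "line 665: [: =: unary operator expected" message in the trace above.
    unset io_flag

    # Unquoted and empty, the operand disappears after word splitting, so bash
    # runs '[ = false ]' and prints "[: =: unary operator expected".
    [ $io_flag = false ] && echo "never reached"

    # Guard 1: quote the expansion and supply a default.
    [ "${io_flag:-false}" = false ] && echo "empty treated as false"

    # Guard 2: use [[ ]], which does not word-split its operands.
    [[ $io_flag == false ]] || echo "empty is simply not equal to false"

Either guard keeps the comparison well-formed when the variable is unset or empty, which is what the expansion logged at bdev_raid.sh line 665 shows happening.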
00:28:57.722 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.722 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:57.981 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:58.239 /dev/nbd0 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:58.239 1+0 records in 00:28:58.239 1+0 records out 00:28:58.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022505 s, 18.2 MB/s 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:58.239 11:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:58.496 /dev/nbd1 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:58.496 1+0 records in 00:28:58.496 1+0 records out 00:28:58.496 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479124 s, 8.5 MB/s 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:58.496 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:28:58.496 11:40:33 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:58.497 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:58.497 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:28:58.497 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:58.497 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:58.497 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:58.754 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:58.754 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:58.754 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:58.754 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:58.754 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:58.754 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:58.755 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:59.013 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:59.271 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:59.272 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:59.272 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:59.272 11:40:33 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:59.272 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:59.272 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:59.272 11:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:28:59.530 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:28:59.530 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:59.530 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:59.530 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:59.530 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:59.530 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:59.530 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:59.530 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:59.789 [2024-07-13 11:40:34.449434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:59.789 [2024-07-13 11:40:34.449518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:59.789 [2024-07-13 11:40:34.449572] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:28:59.789 [2024-07-13 11:40:34.449604] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:59.789 [2024-07-13 11:40:34.451915] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:59.789 [2024-07-13 11:40:34.451972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:59.789 [2024-07-13 11:40:34.452079] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:59.789 [2024-07-13 11:40:34.452156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:59.789 [2024-07-13 11:40:34.452328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:59.789 spare 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:59.789 11:40:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.789 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.048 [2024-07-13 11:40:34.552435] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:29:00.048 [2024-07-13 11:40:34.552457] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:00.048 [2024-07-13 11:40:34.552577] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:29:00.048 [2024-07-13 11:40:34.552946] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:29:00.048 [2024-07-13 11:40:34.552968] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:29:00.048 [2024-07-13 11:40:34.553093] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:00.048 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:00.048 "name": "raid_bdev1", 00:29:00.048 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:00.048 "strip_size_kb": 0, 00:29:00.048 "state": "online", 00:29:00.048 "raid_level": "raid1", 00:29:00.048 "superblock": true, 00:29:00.048 "num_base_bdevs": 2, 00:29:00.048 "num_base_bdevs_discovered": 2, 00:29:00.048 "num_base_bdevs_operational": 2, 00:29:00.048 "base_bdevs_list": [ 00:29:00.048 { 00:29:00.048 "name": "spare", 00:29:00.048 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:29:00.048 "is_configured": true, 00:29:00.048 "data_offset": 2048, 00:29:00.048 "data_size": 63488 00:29:00.048 }, 00:29:00.048 { 00:29:00.048 "name": "BaseBdev2", 00:29:00.048 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:00.048 "is_configured": true, 00:29:00.048 "data_offset": 2048, 00:29:00.048 "data_size": 63488 00:29:00.048 } 00:29:00.048 ] 00:29:00.048 }' 00:29:00.048 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:00.048 11:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.615 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:00.615 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:00.615 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:00.615 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:00.615 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:00.615 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.615 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.875 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:00.875 "name": "raid_bdev1", 00:29:00.875 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:00.875 "strip_size_kb": 0, 00:29:00.875 "state": "online", 00:29:00.875 "raid_level": "raid1", 00:29:00.875 "superblock": true, 
00:29:00.875 "num_base_bdevs": 2, 00:29:00.875 "num_base_bdevs_discovered": 2, 00:29:00.875 "num_base_bdevs_operational": 2, 00:29:00.875 "base_bdevs_list": [ 00:29:00.875 { 00:29:00.875 "name": "spare", 00:29:00.875 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:29:00.875 "is_configured": true, 00:29:00.875 "data_offset": 2048, 00:29:00.875 "data_size": 63488 00:29:00.875 }, 00:29:00.875 { 00:29:00.875 "name": "BaseBdev2", 00:29:00.875 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:00.875 "is_configured": true, 00:29:00.875 "data_offset": 2048, 00:29:00.875 "data_size": 63488 00:29:00.875 } 00:29:00.875 ] 00:29:00.875 }' 00:29:00.875 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:00.875 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:00.875 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:00.875 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:00.875 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.875 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:01.134 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:01.134 11:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:01.392 [2024-07-13 11:40:36.009930] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:01.392 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.393 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.651 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:01.651 "name": "raid_bdev1", 00:29:01.651 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:01.651 "strip_size_kb": 0, 00:29:01.651 "state": "online", 00:29:01.651 "raid_level": "raid1", 
00:29:01.651 "superblock": true, 00:29:01.651 "num_base_bdevs": 2, 00:29:01.651 "num_base_bdevs_discovered": 1, 00:29:01.651 "num_base_bdevs_operational": 1, 00:29:01.651 "base_bdevs_list": [ 00:29:01.651 { 00:29:01.651 "name": null, 00:29:01.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.651 "is_configured": false, 00:29:01.651 "data_offset": 2048, 00:29:01.651 "data_size": 63488 00:29:01.651 }, 00:29:01.651 { 00:29:01.651 "name": "BaseBdev2", 00:29:01.651 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:01.651 "is_configured": true, 00:29:01.651 "data_offset": 2048, 00:29:01.651 "data_size": 63488 00:29:01.651 } 00:29:01.651 ] 00:29:01.651 }' 00:29:01.652 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:01.652 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:02.230 11:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:02.526 [2024-07-13 11:40:36.982247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:02.526 [2024-07-13 11:40:36.982380] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:02.526 [2024-07-13 11:40:36.982399] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:02.526 [2024-07-13 11:40:36.982457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:02.526 [2024-07-13 11:40:36.996354] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d2f0 00:29:02.526 [2024-07-13 11:40:36.998220] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:02.526 11:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:03.491 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:03.491 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:03.491 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:03.491 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:03.491 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:03.491 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.491 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.749 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:03.749 "name": "raid_bdev1", 00:29:03.749 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:03.749 "strip_size_kb": 0, 00:29:03.749 "state": "online", 00:29:03.749 "raid_level": "raid1", 00:29:03.749 "superblock": true, 00:29:03.749 "num_base_bdevs": 2, 00:29:03.749 "num_base_bdevs_discovered": 2, 00:29:03.749 "num_base_bdevs_operational": 2, 00:29:03.749 "process": { 00:29:03.749 "type": "rebuild", 00:29:03.749 "target": "spare", 00:29:03.749 "progress": { 00:29:03.749 "blocks": 24576, 00:29:03.749 "percent": 38 00:29:03.749 } 00:29:03.749 }, 
00:29:03.749 "base_bdevs_list": [ 00:29:03.749 { 00:29:03.749 "name": "spare", 00:29:03.749 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:29:03.749 "is_configured": true, 00:29:03.749 "data_offset": 2048, 00:29:03.749 "data_size": 63488 00:29:03.749 }, 00:29:03.749 { 00:29:03.749 "name": "BaseBdev2", 00:29:03.749 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:03.749 "is_configured": true, 00:29:03.749 "data_offset": 2048, 00:29:03.749 "data_size": 63488 00:29:03.749 } 00:29:03.749 ] 00:29:03.749 }' 00:29:03.749 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:03.749 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:03.749 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:03.749 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:03.749 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:04.008 [2024-07-13 11:40:38.540917] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:04.008 [2024-07-13 11:40:38.608317] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:04.008 [2024-07-13 11:40:38.608400] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:04.008 [2024-07-13 11:40:38.608419] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:04.008 [2024-07-13 11:40:38.608427] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.008 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.266 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:04.266 "name": "raid_bdev1", 00:29:04.266 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:04.266 "strip_size_kb": 0, 00:29:04.266 "state": "online", 00:29:04.266 "raid_level": "raid1", 00:29:04.266 
"superblock": true, 00:29:04.266 "num_base_bdevs": 2, 00:29:04.266 "num_base_bdevs_discovered": 1, 00:29:04.266 "num_base_bdevs_operational": 1, 00:29:04.266 "base_bdevs_list": [ 00:29:04.266 { 00:29:04.266 "name": null, 00:29:04.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.266 "is_configured": false, 00:29:04.266 "data_offset": 2048, 00:29:04.266 "data_size": 63488 00:29:04.266 }, 00:29:04.266 { 00:29:04.266 "name": "BaseBdev2", 00:29:04.266 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:04.266 "is_configured": true, 00:29:04.266 "data_offset": 2048, 00:29:04.266 "data_size": 63488 00:29:04.266 } 00:29:04.266 ] 00:29:04.266 }' 00:29:04.266 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:04.266 11:40:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:04.832 11:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:05.090 [2024-07-13 11:40:39.676728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:05.090 [2024-07-13 11:40:39.676797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:05.090 [2024-07-13 11:40:39.676836] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:05.090 [2024-07-13 11:40:39.676864] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:05.090 [2024-07-13 11:40:39.677383] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:05.090 [2024-07-13 11:40:39.677429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:05.090 [2024-07-13 11:40:39.677496] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:05.090 [2024-07-13 11:40:39.677512] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:05.090 [2024-07-13 11:40:39.677521] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:05.090 [2024-07-13 11:40:39.677563] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:05.090 [2024-07-13 11:40:39.687627] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d630 00:29:05.090 spare 00:29:05.090 [2024-07-13 11:40:39.689331] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:05.090 11:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:06.023 11:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:06.023 11:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:06.023 11:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:06.023 11:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:06.023 11:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:06.023 11:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.023 11:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.282 11:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:06.282 "name": "raid_bdev1", 00:29:06.282 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:06.282 "strip_size_kb": 0, 00:29:06.282 "state": "online", 00:29:06.282 "raid_level": "raid1", 00:29:06.282 "superblock": true, 00:29:06.282 "num_base_bdevs": 2, 00:29:06.282 "num_base_bdevs_discovered": 2, 00:29:06.282 "num_base_bdevs_operational": 2, 00:29:06.282 "process": { 00:29:06.282 "type": "rebuild", 00:29:06.282 "target": "spare", 00:29:06.282 "progress": { 00:29:06.282 "blocks": 24576, 00:29:06.282 "percent": 38 00:29:06.282 } 00:29:06.282 }, 00:29:06.282 "base_bdevs_list": [ 00:29:06.282 { 00:29:06.282 "name": "spare", 00:29:06.282 "uuid": "915f0573-f9c2-5e59-a45b-c587aad07974", 00:29:06.282 "is_configured": true, 00:29:06.282 "data_offset": 2048, 00:29:06.282 "data_size": 63488 00:29:06.282 }, 00:29:06.282 { 00:29:06.282 "name": "BaseBdev2", 00:29:06.282 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:06.282 "is_configured": true, 00:29:06.282 "data_offset": 2048, 00:29:06.282 "data_size": 63488 00:29:06.282 } 00:29:06.282 ] 00:29:06.282 }' 00:29:06.282 11:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:06.282 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:06.282 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:06.540 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:06.540 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:06.799 [2024-07-13 11:40:41.295892] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.799 [2024-07-13 11:40:41.298410] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:06.799 [2024-07-13 11:40:41.298486] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:06.799 [2024-07-13 11:40:41.298505] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:06.799 [2024-07-13 11:40:41.298514] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:06.799 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.058 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:07.058 "name": "raid_bdev1", 00:29:07.058 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:07.058 "strip_size_kb": 0, 00:29:07.058 "state": "online", 00:29:07.058 "raid_level": "raid1", 00:29:07.058 "superblock": true, 00:29:07.058 "num_base_bdevs": 2, 00:29:07.058 "num_base_bdevs_discovered": 1, 00:29:07.058 "num_base_bdevs_operational": 1, 00:29:07.058 "base_bdevs_list": [ 00:29:07.058 { 00:29:07.058 "name": null, 00:29:07.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.058 "is_configured": false, 00:29:07.058 "data_offset": 2048, 00:29:07.058 "data_size": 63488 00:29:07.058 }, 00:29:07.058 { 00:29:07.058 "name": "BaseBdev2", 00:29:07.058 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:07.058 "is_configured": true, 00:29:07.058 "data_offset": 2048, 00:29:07.058 "data_size": 63488 00:29:07.058 } 00:29:07.058 ] 00:29:07.058 }' 00:29:07.058 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:07.058 11:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:07.625 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:07.625 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:07.625 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:07.625 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:07.625 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:07.625 11:40:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.625 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.883 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:07.883 "name": "raid_bdev1", 00:29:07.883 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:07.883 "strip_size_kb": 0, 00:29:07.883 "state": "online", 00:29:07.883 "raid_level": "raid1", 00:29:07.883 "superblock": true, 00:29:07.883 "num_base_bdevs": 2, 00:29:07.883 "num_base_bdevs_discovered": 1, 00:29:07.883 "num_base_bdevs_operational": 1, 00:29:07.883 "base_bdevs_list": [ 00:29:07.883 { 00:29:07.883 "name": null, 00:29:07.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.883 "is_configured": false, 00:29:07.883 "data_offset": 2048, 00:29:07.883 "data_size": 63488 00:29:07.883 }, 00:29:07.883 { 00:29:07.883 "name": "BaseBdev2", 00:29:07.883 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:07.883 "is_configured": true, 00:29:07.883 "data_offset": 2048, 00:29:07.883 "data_size": 63488 00:29:07.883 } 00:29:07.883 ] 00:29:07.883 }' 00:29:07.883 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:07.883 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:07.883 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:07.883 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:07.883 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:08.142 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:08.142 [2024-07-13 11:40:42.887670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:08.142 [2024-07-13 11:40:42.887728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:08.142 [2024-07-13 11:40:42.887768] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:08.142 [2024-07-13 11:40:42.887789] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:08.142 [2024-07-13 11:40:42.888257] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:08.142 [2024-07-13 11:40:42.888295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:08.142 [2024-07-13 11:40:42.888404] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:08.142 [2024-07-13 11:40:42.888419] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:08.142 [2024-07-13 11:40:42.888427] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:08.142 BaseBdev1 00:29:08.399 11:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:09.333 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.334 11:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.592 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:09.592 "name": "raid_bdev1", 00:29:09.592 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:09.592 "strip_size_kb": 0, 00:29:09.592 "state": "online", 00:29:09.592 "raid_level": "raid1", 00:29:09.592 "superblock": true, 00:29:09.592 "num_base_bdevs": 2, 00:29:09.592 "num_base_bdevs_discovered": 1, 00:29:09.592 "num_base_bdevs_operational": 1, 00:29:09.592 "base_bdevs_list": [ 00:29:09.592 { 00:29:09.592 "name": null, 00:29:09.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.592 "is_configured": false, 00:29:09.592 "data_offset": 2048, 00:29:09.592 "data_size": 63488 00:29:09.592 }, 00:29:09.592 { 00:29:09.592 "name": "BaseBdev2", 00:29:09.592 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:09.592 "is_configured": true, 00:29:09.592 "data_offset": 2048, 00:29:09.592 "data_size": 63488 00:29:09.592 } 00:29:09.592 ] 00:29:09.592 }' 00:29:09.592 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:09.592 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:10.159 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:10.159 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:10.159 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:10.159 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:10.159 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:10.159 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:10.159 11:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:10.417 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:10.417 "name": "raid_bdev1", 00:29:10.417 "uuid": 
"1558c51e-18de-4b47-954a-f998540085a8", 00:29:10.417 "strip_size_kb": 0, 00:29:10.417 "state": "online", 00:29:10.417 "raid_level": "raid1", 00:29:10.417 "superblock": true, 00:29:10.417 "num_base_bdevs": 2, 00:29:10.417 "num_base_bdevs_discovered": 1, 00:29:10.417 "num_base_bdevs_operational": 1, 00:29:10.417 "base_bdevs_list": [ 00:29:10.417 { 00:29:10.417 "name": null, 00:29:10.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:10.417 "is_configured": false, 00:29:10.417 "data_offset": 2048, 00:29:10.417 "data_size": 63488 00:29:10.417 }, 00:29:10.417 { 00:29:10.417 "name": "BaseBdev2", 00:29:10.417 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:10.417 "is_configured": true, 00:29:10.417 "data_offset": 2048, 00:29:10.417 "data_size": 63488 00:29:10.417 } 00:29:10.417 ] 00:29:10.417 }' 00:29:10.417 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:10.417 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:10.417 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.676 [2024-07-13 11:40:45.364411] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:10.676 [2024-07-13 11:40:45.364510] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:10.676 [2024-07-13 11:40:45.364524] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this 
bdev's uuid 00:29:10.676 request: 00:29:10.676 { 00:29:10.676 "base_bdev": "BaseBdev1", 00:29:10.676 "raid_bdev": "raid_bdev1", 00:29:10.676 "method": "bdev_raid_add_base_bdev", 00:29:10.676 "req_id": 1 00:29:10.676 } 00:29:10.676 Got JSON-RPC error response 00:29:10.676 response: 00:29:10.676 { 00:29:10.676 "code": -22, 00:29:10.676 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:10.676 } 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:10.676 11:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:12.053 "name": "raid_bdev1", 00:29:12.053 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:12.053 "strip_size_kb": 0, 00:29:12.053 "state": "online", 00:29:12.053 "raid_level": "raid1", 00:29:12.053 "superblock": true, 00:29:12.053 "num_base_bdevs": 2, 00:29:12.053 "num_base_bdevs_discovered": 1, 00:29:12.053 "num_base_bdevs_operational": 1, 00:29:12.053 "base_bdevs_list": [ 00:29:12.053 { 00:29:12.053 "name": null, 00:29:12.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.053 "is_configured": false, 00:29:12.053 "data_offset": 2048, 00:29:12.053 "data_size": 63488 00:29:12.053 }, 00:29:12.053 { 00:29:12.053 "name": "BaseBdev2", 00:29:12.053 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:12.053 "is_configured": true, 00:29:12.053 "data_offset": 2048, 00:29:12.053 "data_size": 63488 00:29:12.053 } 00:29:12.053 ] 00:29:12.053 }' 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:12.053 11:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:29:12.620 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:12.620 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.620 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:12.620 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:12.620 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.620 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.620 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.877 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:12.877 "name": "raid_bdev1", 00:29:12.877 "uuid": "1558c51e-18de-4b47-954a-f998540085a8", 00:29:12.877 "strip_size_kb": 0, 00:29:12.877 "state": "online", 00:29:12.877 "raid_level": "raid1", 00:29:12.877 "superblock": true, 00:29:12.877 "num_base_bdevs": 2, 00:29:12.877 "num_base_bdevs_discovered": 1, 00:29:12.877 "num_base_bdevs_operational": 1, 00:29:12.877 "base_bdevs_list": [ 00:29:12.877 { 00:29:12.877 "name": null, 00:29:12.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.877 "is_configured": false, 00:29:12.877 "data_offset": 2048, 00:29:12.877 "data_size": 63488 00:29:12.877 }, 00:29:12.877 { 00:29:12.877 "name": "BaseBdev2", 00:29:12.877 "uuid": "8dd022e1-2e15-5111-8629-801e05c9e015", 00:29:12.877 "is_configured": true, 00:29:12.877 "data_offset": 2048, 00:29:12.877 "data_size": 63488 00:29:12.877 } 00:29:12.877 ] 00:29:12.877 }' 00:29:12.877 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:12.877 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:12.878 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 146872 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 146872 ']' 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 146872 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 146872 00:29:13.136 killing process with pid 146872 00:29:13.136 Received shutdown signal, test time was about 25.900252 seconds 00:29:13.136 00:29:13.136 Latency(us) 00:29:13.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.136 =================================================================================================================== 00:29:13.136 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:13.136 11:40:47 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 146872' 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 146872 00:29:13.136 11:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 146872 00:29:13.136 [2024-07-13 11:40:47.661627] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:13.136 [2024-07-13 11:40:47.661757] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:13.136 [2024-07-13 11:40:47.661809] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:13.136 [2024-07-13 11:40:47.661824] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:29:13.136 [2024-07-13 11:40:47.818898] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:14.512 ************************************ 00:29:14.512 END TEST raid_rebuild_test_sb_io 00:29:14.512 ************************************ 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:14.512 00:29:14.512 real 0m31.491s 00:29:14.512 user 0m51.049s 00:29:14.512 sys 0m2.810s 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:14.512 11:40:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:14.512 11:40:48 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:29:14.512 11:40:48 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:29:14.512 11:40:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:14.512 11:40:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:14.512 11:40:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:14.512 ************************************ 00:29:14.512 START TEST raid_rebuild_test 00:29:14.512 ************************************ 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=147786 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 147786 /var/tmp/spdk-raid.sock 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 147786 ']' 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.512 11:40:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:14.512 [2024-07-13 11:40:49.003918] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:14.512 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:14.512 Zero copy mechanism will not be used. 
00:29:14.512 [2024-07-13 11:40:49.004094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147786 ] 00:29:14.512 [2024-07-13 11:40:49.154381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.770 [2024-07-13 11:40:49.338375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.028 [2024-07-13 11:40:49.528979] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:15.285 11:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.285 11:40:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:29:15.285 11:40:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:15.285 11:40:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:15.542 BaseBdev1_malloc 00:29:15.543 11:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:15.800 [2024-07-13 11:40:50.413346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:15.800 [2024-07-13 11:40:50.413450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.800 [2024-07-13 11:40:50.413490] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:29:15.800 [2024-07-13 11:40:50.413511] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.800 [2024-07-13 11:40:50.415732] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.800 [2024-07-13 11:40:50.415778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:15.800 BaseBdev1 00:29:15.800 11:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:15.800 11:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:16.058 BaseBdev2_malloc 00:29:16.058 11:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:16.316 [2024-07-13 11:40:50.826475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:16.316 [2024-07-13 11:40:50.826569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.316 [2024-07-13 11:40:50.826605] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:29:16.316 [2024-07-13 11:40:50.826626] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.316 [2024-07-13 11:40:50.828794] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.316 [2024-07-13 11:40:50.828841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:16.316 BaseBdev2 00:29:16.316 11:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:29:16.316 11:40:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:16.316 BaseBdev3_malloc 00:29:16.316 11:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:16.574 [2024-07-13 11:40:51.231390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:16.574 [2024-07-13 11:40:51.231465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.574 [2024-07-13 11:40:51.231506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:29:16.574 [2024-07-13 11:40:51.231531] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.574 [2024-07-13 11:40:51.233681] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.574 [2024-07-13 11:40:51.233733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:16.574 BaseBdev3 00:29:16.574 11:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:16.574 11:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:16.831 BaseBdev4_malloc 00:29:16.831 11:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:17.089 [2024-07-13 11:40:51.615991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:17.089 [2024-07-13 11:40:51.616068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.089 [2024-07-13 11:40:51.616103] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:17.089 [2024-07-13 11:40:51.616134] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.089 [2024-07-13 11:40:51.618305] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.089 [2024-07-13 11:40:51.618356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:17.089 BaseBdev4 00:29:17.089 11:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:17.089 spare_malloc 00:29:17.347 11:40:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:17.347 spare_delay 00:29:17.347 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:17.606 [2024-07-13 11:40:52.192983] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:17.606 [2024-07-13 11:40:52.193059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.606 [2024-07-13 11:40:52.193088] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000a580 00:29:17.606 [2024-07-13 11:40:52.193119] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.606 [2024-07-13 11:40:52.195306] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.606 [2024-07-13 11:40:52.195357] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:17.606 spare 00:29:17.606 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:17.864 [2024-07-13 11:40:52.381068] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:17.864 [2024-07-13 11:40:52.382991] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:17.864 [2024-07-13 11:40:52.383070] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:17.864 [2024-07-13 11:40:52.383128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:17.864 [2024-07-13 11:40:52.383223] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:29:17.864 [2024-07-13 11:40:52.383235] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:17.864 [2024-07-13 11:40:52.383365] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:17.864 [2024-07-13 11:40:52.383704] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:29:17.864 [2024-07-13 11:40:52.383726] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:29:17.864 [2024-07-13 11:40:52.383859] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:17.864 "name": "raid_bdev1", 00:29:17.864 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:17.864 "strip_size_kb": 0, 00:29:17.864 "state": "online", 00:29:17.864 
"raid_level": "raid1", 00:29:17.864 "superblock": false, 00:29:17.864 "num_base_bdevs": 4, 00:29:17.864 "num_base_bdevs_discovered": 4, 00:29:17.864 "num_base_bdevs_operational": 4, 00:29:17.864 "base_bdevs_list": [ 00:29:17.864 { 00:29:17.864 "name": "BaseBdev1", 00:29:17.864 "uuid": "28796880-7eec-5cca-a4b6-8e45c92f39a9", 00:29:17.864 "is_configured": true, 00:29:17.864 "data_offset": 0, 00:29:17.864 "data_size": 65536 00:29:17.864 }, 00:29:17.864 { 00:29:17.864 "name": "BaseBdev2", 00:29:17.864 "uuid": "06b8f9e0-ce66-5b63-8c3c-6a19a12f8cb3", 00:29:17.864 "is_configured": true, 00:29:17.864 "data_offset": 0, 00:29:17.864 "data_size": 65536 00:29:17.864 }, 00:29:17.864 { 00:29:17.864 "name": "BaseBdev3", 00:29:17.864 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:17.864 "is_configured": true, 00:29:17.864 "data_offset": 0, 00:29:17.864 "data_size": 65536 00:29:17.864 }, 00:29:17.864 { 00:29:17.864 "name": "BaseBdev4", 00:29:17.864 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:17.864 "is_configured": true, 00:29:17.864 "data_offset": 0, 00:29:17.864 "data_size": 65536 00:29:17.864 } 00:29:17.864 ] 00:29:17.864 }' 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:17.864 11:40:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.797 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:18.797 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:18.797 [2024-07-13 11:40:53.441453] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:18.797 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:18.797 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.797 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:19.055 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:19.334 [2024-07-13 11:40:53.841334] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:19.334 /dev/nbd0 00:29:19.334 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:19.334 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:19.334 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:19.334 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:19.335 1+0 records in 00:29:19.335 1+0 records out 00:29:19.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322004 s, 12.7 MB/s 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:29:19.335 11:40:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:25.895 65536+0 records in 00:29:25.895 65536+0 records out 00:29:25.895 33554432 bytes (34 MB, 32 MiB) copied, 6.43396 s, 5.2 MB/s 00:29:25.895 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:25.895 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:25.895 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:25.895 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:25.895 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:25.895 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:25.895 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:25.895 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:25.896 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:25.896 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:25.896 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:25.896 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:25.896 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:25.896 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:25.896 [2024-07-13 11:41:00.606763] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:26.154 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:26.154 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:26.154 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:26.154 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:26.154 11:41:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:26.154 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:26.413 [2024-07-13 11:41:00.946493] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.413 11:41:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.413 11:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:26.413 "name": "raid_bdev1", 00:29:26.413 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:26.413 "strip_size_kb": 0, 00:29:26.413 "state": "online", 00:29:26.413 "raid_level": "raid1", 00:29:26.413 "superblock": false, 00:29:26.413 "num_base_bdevs": 4, 00:29:26.413 "num_base_bdevs_discovered": 3, 00:29:26.413 "num_base_bdevs_operational": 3, 00:29:26.413 "base_bdevs_list": [ 00:29:26.413 { 00:29:26.413 
"name": null, 00:29:26.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.413 "is_configured": false, 00:29:26.413 "data_offset": 0, 00:29:26.413 "data_size": 65536 00:29:26.413 }, 00:29:26.413 { 00:29:26.413 "name": "BaseBdev2", 00:29:26.413 "uuid": "06b8f9e0-ce66-5b63-8c3c-6a19a12f8cb3", 00:29:26.413 "is_configured": true, 00:29:26.413 "data_offset": 0, 00:29:26.413 "data_size": 65536 00:29:26.413 }, 00:29:26.413 { 00:29:26.413 "name": "BaseBdev3", 00:29:26.413 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:26.413 "is_configured": true, 00:29:26.413 "data_offset": 0, 00:29:26.413 "data_size": 65536 00:29:26.413 }, 00:29:26.413 { 00:29:26.413 "name": "BaseBdev4", 00:29:26.413 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:26.413 "is_configured": true, 00:29:26.413 "data_offset": 0, 00:29:26.413 "data_size": 65536 00:29:26.413 } 00:29:26.413 ] 00:29:26.413 }' 00:29:26.413 11:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:26.413 11:41:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.349 11:41:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:27.349 [2024-07-13 11:41:01.996594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:27.349 [2024-07-13 11:41:02.007200] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0bc50 00:29:27.349 [2024-07-13 11:41:02.009195] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:27.349 11:41:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:28.283 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:28.283 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:28.283 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:28.283 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:28.283 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:28.283 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.283 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.539 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:28.539 "name": "raid_bdev1", 00:29:28.539 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:28.539 "strip_size_kb": 0, 00:29:28.539 "state": "online", 00:29:28.539 "raid_level": "raid1", 00:29:28.539 "superblock": false, 00:29:28.539 "num_base_bdevs": 4, 00:29:28.539 "num_base_bdevs_discovered": 4, 00:29:28.539 "num_base_bdevs_operational": 4, 00:29:28.539 "process": { 00:29:28.539 "type": "rebuild", 00:29:28.539 "target": "spare", 00:29:28.539 "progress": { 00:29:28.539 "blocks": 24576, 00:29:28.539 "percent": 37 00:29:28.539 } 00:29:28.539 }, 00:29:28.539 "base_bdevs_list": [ 00:29:28.539 { 00:29:28.539 "name": "spare", 00:29:28.539 "uuid": "1dc2e687-d9a3-5142-b0e6-f8383ed996b6", 00:29:28.539 "is_configured": true, 00:29:28.539 "data_offset": 0, 00:29:28.539 "data_size": 65536 00:29:28.539 }, 00:29:28.539 { 00:29:28.539 "name": 
"BaseBdev2", 00:29:28.539 "uuid": "06b8f9e0-ce66-5b63-8c3c-6a19a12f8cb3", 00:29:28.539 "is_configured": true, 00:29:28.539 "data_offset": 0, 00:29:28.539 "data_size": 65536 00:29:28.539 }, 00:29:28.539 { 00:29:28.539 "name": "BaseBdev3", 00:29:28.539 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:28.539 "is_configured": true, 00:29:28.539 "data_offset": 0, 00:29:28.539 "data_size": 65536 00:29:28.539 }, 00:29:28.539 { 00:29:28.539 "name": "BaseBdev4", 00:29:28.539 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:28.539 "is_configured": true, 00:29:28.539 "data_offset": 0, 00:29:28.540 "data_size": 65536 00:29:28.540 } 00:29:28.540 ] 00:29:28.540 }' 00:29:28.540 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:28.797 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:28.797 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.797 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:28.797 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:29.055 [2024-07-13 11:41:03.623245] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:29.055 [2024-07-13 11:41:03.719774] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:29.055 [2024-07-13 11:41:03.719859] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:29.055 [2024-07-13 11:41:03.719881] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:29.055 [2024-07-13 11:41:03.719889] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.055 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:29.312 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:29.312 "name": "raid_bdev1", 00:29:29.312 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:29.312 "strip_size_kb": 0, 00:29:29.312 "state": 
"online", 00:29:29.312 "raid_level": "raid1", 00:29:29.312 "superblock": false, 00:29:29.312 "num_base_bdevs": 4, 00:29:29.312 "num_base_bdevs_discovered": 3, 00:29:29.312 "num_base_bdevs_operational": 3, 00:29:29.312 "base_bdevs_list": [ 00:29:29.312 { 00:29:29.312 "name": null, 00:29:29.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.312 "is_configured": false, 00:29:29.312 "data_offset": 0, 00:29:29.312 "data_size": 65536 00:29:29.312 }, 00:29:29.312 { 00:29:29.312 "name": "BaseBdev2", 00:29:29.312 "uuid": "06b8f9e0-ce66-5b63-8c3c-6a19a12f8cb3", 00:29:29.312 "is_configured": true, 00:29:29.312 "data_offset": 0, 00:29:29.312 "data_size": 65536 00:29:29.312 }, 00:29:29.312 { 00:29:29.312 "name": "BaseBdev3", 00:29:29.312 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:29.312 "is_configured": true, 00:29:29.312 "data_offset": 0, 00:29:29.312 "data_size": 65536 00:29:29.312 }, 00:29:29.312 { 00:29:29.312 "name": "BaseBdev4", 00:29:29.312 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:29.313 "is_configured": true, 00:29:29.313 "data_offset": 0, 00:29:29.313 "data_size": 65536 00:29:29.313 } 00:29:29.313 ] 00:29:29.313 }' 00:29:29.313 11:41:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:29.313 11:41:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:29.878 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:29.878 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:29.878 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:29.878 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:29.878 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:29.878 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.878 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:30.137 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:30.137 "name": "raid_bdev1", 00:29:30.137 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:30.137 "strip_size_kb": 0, 00:29:30.137 "state": "online", 00:29:30.137 "raid_level": "raid1", 00:29:30.137 "superblock": false, 00:29:30.137 "num_base_bdevs": 4, 00:29:30.137 "num_base_bdevs_discovered": 3, 00:29:30.137 "num_base_bdevs_operational": 3, 00:29:30.137 "base_bdevs_list": [ 00:29:30.137 { 00:29:30.137 "name": null, 00:29:30.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:30.137 "is_configured": false, 00:29:30.137 "data_offset": 0, 00:29:30.137 "data_size": 65536 00:29:30.137 }, 00:29:30.137 { 00:29:30.137 "name": "BaseBdev2", 00:29:30.137 "uuid": "06b8f9e0-ce66-5b63-8c3c-6a19a12f8cb3", 00:29:30.137 "is_configured": true, 00:29:30.137 "data_offset": 0, 00:29:30.137 "data_size": 65536 00:29:30.137 }, 00:29:30.137 { 00:29:30.137 "name": "BaseBdev3", 00:29:30.137 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:30.137 "is_configured": true, 00:29:30.137 "data_offset": 0, 00:29:30.137 "data_size": 65536 00:29:30.137 }, 00:29:30.137 { 00:29:30.137 "name": "BaseBdev4", 00:29:30.137 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:30.137 "is_configured": true, 00:29:30.137 "data_offset": 0, 00:29:30.137 "data_size": 
65536 00:29:30.137 } 00:29:30.137 ] 00:29:30.137 }' 00:29:30.137 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:30.137 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:30.137 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:30.395 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:30.396 11:41:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:30.396 [2024-07-13 11:41:05.129545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:30.396 [2024-07-13 11:41:05.138832] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0bdf0 00:29:30.396 [2024-07-13 11:41:05.140837] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:30.654 11:41:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:31.590 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:31.591 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:31.591 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:31.591 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:31.591 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:31.591 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.591 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:31.850 "name": "raid_bdev1", 00:29:31.850 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:31.850 "strip_size_kb": 0, 00:29:31.850 "state": "online", 00:29:31.850 "raid_level": "raid1", 00:29:31.850 "superblock": false, 00:29:31.850 "num_base_bdevs": 4, 00:29:31.850 "num_base_bdevs_discovered": 4, 00:29:31.850 "num_base_bdevs_operational": 4, 00:29:31.850 "process": { 00:29:31.850 "type": "rebuild", 00:29:31.850 "target": "spare", 00:29:31.850 "progress": { 00:29:31.850 "blocks": 24576, 00:29:31.850 "percent": 37 00:29:31.850 } 00:29:31.850 }, 00:29:31.850 "base_bdevs_list": [ 00:29:31.850 { 00:29:31.850 "name": "spare", 00:29:31.850 "uuid": "1dc2e687-d9a3-5142-b0e6-f8383ed996b6", 00:29:31.850 "is_configured": true, 00:29:31.850 "data_offset": 0, 00:29:31.850 "data_size": 65536 00:29:31.850 }, 00:29:31.850 { 00:29:31.850 "name": "BaseBdev2", 00:29:31.850 "uuid": "06b8f9e0-ce66-5b63-8c3c-6a19a12f8cb3", 00:29:31.850 "is_configured": true, 00:29:31.850 "data_offset": 0, 00:29:31.850 "data_size": 65536 00:29:31.850 }, 00:29:31.850 { 00:29:31.850 "name": "BaseBdev3", 00:29:31.850 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:31.850 "is_configured": true, 00:29:31.850 "data_offset": 0, 00:29:31.850 "data_size": 65536 00:29:31.850 }, 00:29:31.850 { 00:29:31.850 "name": "BaseBdev4", 00:29:31.850 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:31.850 "is_configured": true, 00:29:31.850 "data_offset": 0, 00:29:31.850 
"data_size": 65536 00:29:31.850 } 00:29:31.850 ] 00:29:31.850 }' 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:31.850 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:32.107 [2024-07-13 11:41:06.719422] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:32.107 [2024-07-13 11:41:06.749439] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0bdf0 00:29:32.107 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:32.107 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:32.107 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:32.107 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:32.107 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:32.107 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:32.107 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:32.107 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.107 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.364 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:32.364 "name": "raid_bdev1", 00:29:32.364 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:32.364 "strip_size_kb": 0, 00:29:32.364 "state": "online", 00:29:32.364 "raid_level": "raid1", 00:29:32.364 "superblock": false, 00:29:32.364 "num_base_bdevs": 4, 00:29:32.364 "num_base_bdevs_discovered": 3, 00:29:32.364 "num_base_bdevs_operational": 3, 00:29:32.364 "process": { 00:29:32.364 "type": "rebuild", 00:29:32.364 "target": "spare", 00:29:32.364 "progress": { 00:29:32.364 "blocks": 36864, 00:29:32.364 "percent": 56 00:29:32.364 } 00:29:32.364 }, 00:29:32.364 "base_bdevs_list": [ 00:29:32.364 { 00:29:32.364 "name": "spare", 00:29:32.364 "uuid": "1dc2e687-d9a3-5142-b0e6-f8383ed996b6", 00:29:32.364 "is_configured": true, 00:29:32.364 "data_offset": 0, 00:29:32.364 "data_size": 65536 00:29:32.364 }, 00:29:32.364 { 00:29:32.364 "name": null, 00:29:32.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.364 "is_configured": false, 00:29:32.364 "data_offset": 0, 00:29:32.364 "data_size": 65536 00:29:32.364 }, 
00:29:32.364 { 00:29:32.364 "name": "BaseBdev3", 00:29:32.364 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:32.364 "is_configured": true, 00:29:32.364 "data_offset": 0, 00:29:32.364 "data_size": 65536 00:29:32.364 }, 00:29:32.364 { 00:29:32.364 "name": "BaseBdev4", 00:29:32.364 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:32.364 "is_configured": true, 00:29:32.364 "data_offset": 0, 00:29:32.364 "data_size": 65536 00:29:32.364 } 00:29:32.364 ] 00:29:32.364 }' 00:29:32.364 11:41:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:32.364 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=918 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.365 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.928 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:32.928 "name": "raid_bdev1", 00:29:32.928 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:32.928 "strip_size_kb": 0, 00:29:32.928 "state": "online", 00:29:32.928 "raid_level": "raid1", 00:29:32.928 "superblock": false, 00:29:32.928 "num_base_bdevs": 4, 00:29:32.928 "num_base_bdevs_discovered": 3, 00:29:32.928 "num_base_bdevs_operational": 3, 00:29:32.928 "process": { 00:29:32.928 "type": "rebuild", 00:29:32.928 "target": "spare", 00:29:32.928 "progress": { 00:29:32.928 "blocks": 45056, 00:29:32.928 "percent": 68 00:29:32.928 } 00:29:32.928 }, 00:29:32.928 "base_bdevs_list": [ 00:29:32.928 { 00:29:32.928 "name": "spare", 00:29:32.928 "uuid": "1dc2e687-d9a3-5142-b0e6-f8383ed996b6", 00:29:32.928 "is_configured": true, 00:29:32.928 "data_offset": 0, 00:29:32.928 "data_size": 65536 00:29:32.928 }, 00:29:32.928 { 00:29:32.928 "name": null, 00:29:32.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.928 "is_configured": false, 00:29:32.928 "data_offset": 0, 00:29:32.928 "data_size": 65536 00:29:32.928 }, 00:29:32.928 { 00:29:32.928 "name": "BaseBdev3", 00:29:32.928 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:32.928 "is_configured": true, 00:29:32.928 "data_offset": 0, 00:29:32.928 "data_size": 65536 00:29:32.928 }, 00:29:32.928 { 00:29:32.928 "name": "BaseBdev4", 00:29:32.928 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:32.928 "is_configured": true, 00:29:32.928 "data_offset": 0, 00:29:32.928 "data_size": 65536 00:29:32.928 
} 00:29:32.928 ] 00:29:32.928 }' 00:29:32.928 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:32.928 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:32.928 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:32.928 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:32.928 11:41:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:33.862 [2024-07-13 11:41:08.358606] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:33.862 [2024-07-13 11:41:08.358682] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:33.862 [2024-07-13 11:41:08.358753] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:33.862 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:33.862 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:33.862 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:33.862 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:33.862 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:33.862 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:33.862 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:33.862 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.121 "name": "raid_bdev1", 00:29:34.121 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:34.121 "strip_size_kb": 0, 00:29:34.121 "state": "online", 00:29:34.121 "raid_level": "raid1", 00:29:34.121 "superblock": false, 00:29:34.121 "num_base_bdevs": 4, 00:29:34.121 "num_base_bdevs_discovered": 3, 00:29:34.121 "num_base_bdevs_operational": 3, 00:29:34.121 "base_bdevs_list": [ 00:29:34.121 { 00:29:34.121 "name": "spare", 00:29:34.121 "uuid": "1dc2e687-d9a3-5142-b0e6-f8383ed996b6", 00:29:34.121 "is_configured": true, 00:29:34.121 "data_offset": 0, 00:29:34.121 "data_size": 65536 00:29:34.121 }, 00:29:34.121 { 00:29:34.121 "name": null, 00:29:34.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:34.121 "is_configured": false, 00:29:34.121 "data_offset": 0, 00:29:34.121 "data_size": 65536 00:29:34.121 }, 00:29:34.121 { 00:29:34.121 "name": "BaseBdev3", 00:29:34.121 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:34.121 "is_configured": true, 00:29:34.121 "data_offset": 0, 00:29:34.121 "data_size": 65536 00:29:34.121 }, 00:29:34.121 { 00:29:34.121 "name": "BaseBdev4", 00:29:34.121 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:34.121 "is_configured": true, 00:29:34.121 "data_offset": 0, 00:29:34.121 "data_size": 65536 00:29:34.121 } 00:29:34.121 ] 00:29:34.121 }' 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:34.121 11:41:08 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.121 11:41:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.379 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:34.379 "name": "raid_bdev1", 00:29:34.379 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:34.379 "strip_size_kb": 0, 00:29:34.379 "state": "online", 00:29:34.379 "raid_level": "raid1", 00:29:34.379 "superblock": false, 00:29:34.379 "num_base_bdevs": 4, 00:29:34.379 "num_base_bdevs_discovered": 3, 00:29:34.379 "num_base_bdevs_operational": 3, 00:29:34.379 "base_bdevs_list": [ 00:29:34.379 { 00:29:34.379 "name": "spare", 00:29:34.379 "uuid": "1dc2e687-d9a3-5142-b0e6-f8383ed996b6", 00:29:34.379 "is_configured": true, 00:29:34.379 "data_offset": 0, 00:29:34.379 "data_size": 65536 00:29:34.379 }, 00:29:34.379 { 00:29:34.379 "name": null, 00:29:34.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:34.379 "is_configured": false, 00:29:34.379 "data_offset": 0, 00:29:34.379 "data_size": 65536 00:29:34.379 }, 00:29:34.379 { 00:29:34.379 "name": "BaseBdev3", 00:29:34.379 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:34.379 "is_configured": true, 00:29:34.379 "data_offset": 0, 00:29:34.379 "data_size": 65536 00:29:34.379 }, 00:29:34.379 { 00:29:34.379 "name": "BaseBdev4", 00:29:34.379 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:34.380 "is_configured": true, 00:29:34.380 "data_offset": 0, 00:29:34.380 "data_size": 65536 00:29:34.380 } 00:29:34.380 ] 00:29:34.380 }' 00:29:34.380 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:34.380 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:34.380 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:34.638 11:41:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:34.638 "name": "raid_bdev1", 00:29:34.638 "uuid": "44b267a9-2586-4b96-89ce-9bbf125298bb", 00:29:34.638 "strip_size_kb": 0, 00:29:34.638 "state": "online", 00:29:34.638 "raid_level": "raid1", 00:29:34.638 "superblock": false, 00:29:34.638 "num_base_bdevs": 4, 00:29:34.638 "num_base_bdevs_discovered": 3, 00:29:34.638 "num_base_bdevs_operational": 3, 00:29:34.638 "base_bdevs_list": [ 00:29:34.638 { 00:29:34.638 "name": "spare", 00:29:34.638 "uuid": "1dc2e687-d9a3-5142-b0e6-f8383ed996b6", 00:29:34.638 "is_configured": true, 00:29:34.638 "data_offset": 0, 00:29:34.638 "data_size": 65536 00:29:34.638 }, 00:29:34.638 { 00:29:34.638 "name": null, 00:29:34.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:34.638 "is_configured": false, 00:29:34.638 "data_offset": 0, 00:29:34.638 "data_size": 65536 00:29:34.638 }, 00:29:34.638 { 00:29:34.638 "name": "BaseBdev3", 00:29:34.638 "uuid": "d9bca3f7-d55d-59d3-8970-2daf71ee9075", 00:29:34.638 "is_configured": true, 00:29:34.638 "data_offset": 0, 00:29:34.638 "data_size": 65536 00:29:34.638 }, 00:29:34.638 { 00:29:34.638 "name": "BaseBdev4", 00:29:34.638 "uuid": "b794d55a-47ee-5dda-a9ad-c201e6f22535", 00:29:34.638 "is_configured": true, 00:29:34.638 "data_offset": 0, 00:29:34.638 "data_size": 65536 00:29:34.638 } 00:29:34.638 ] 00:29:34.638 }' 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:34.638 11:41:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:35.573 11:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:35.832 [2024-07-13 11:41:10.329008] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:35.832 [2024-07-13 11:41:10.329036] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:35.832 [2024-07-13 11:41:10.329106] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:35.832 [2024-07-13 11:41:10.329197] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:35.832 [2024-07-13 11:41:10.329210] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:29:35.832 11:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:35.832 11:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:29:36.090 11:41:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:36.090 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:36.349 /dev/nbd0 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:36.349 1+0 records in 00:29:36.349 1+0 records out 00:29:36.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217004 s, 18.9 MB/s 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:36.349 11:41:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:36.349 11:41:10 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:36.619 /dev/nbd1 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:36.619 1+0 records in 00:29:36.619 1+0 records out 00:29:36.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236928 s, 17.3 MB/s 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:36.619 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:36.907 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:36.907 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:36.907 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:36.907 11:41:11 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:36.907 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:36.907 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:36.907 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:37.169 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:37.169 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:37.169 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:37.169 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:37.169 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:37.169 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:37.169 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:37.428 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:37.428 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:37.428 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:37.428 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:37.428 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:37.428 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:37.428 11:41:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 147786 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 147786 ']' 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 147786 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147786 00:29:37.428 killing process with pid 147786 00:29:37.428 Received shutdown signal, test time was about 60.000000 seconds 00:29:37.428 00:29:37.428 Latency(us) 00:29:37.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.428 =================================================================================================================== 00:29:37.428 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147786' 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 147786 00:29:37.428 11:41:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 147786 00:29:37.428 [2024-07-13 11:41:12.066881] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:37.686 [2024-07-13 11:41:12.397397] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:39.064 ************************************ 00:29:39.064 END TEST raid_rebuild_test 00:29:39.064 ************************************ 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:29:39.064 00:29:39.064 real 0m24.472s 00:29:39.064 user 0m34.079s 00:29:39.064 sys 0m3.892s 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.064 11:41:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:39.064 11:41:13 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:29:39.064 11:41:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:39.064 11:41:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.064 11:41:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:39.064 ************************************ 00:29:39.064 START TEST raid_rebuild_test_sb 00:29:39.064 ************************************ 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:39.064 11:41:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=148407 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 148407 /var/tmp/spdk-raid.sock 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 148407 ']' 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:39.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:39.064 11:41:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:39.064 [2024-07-13 11:41:13.565035] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:29:39.064 [2024-07-13 11:41:13.566050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148407 ] 00:29:39.064 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:39.064 Zero copy mechanism will not be used. 
00:29:39.064 [2024-07-13 11:41:13.725700] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.323 [2024-07-13 11:41:13.909919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.582 [2024-07-13 11:41:14.096982] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:39.840 11:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:39.840 11:41:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:29:39.840 11:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:39.840 11:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:40.098 BaseBdev1_malloc 00:29:40.098 11:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:40.357 [2024-07-13 11:41:14.963543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:40.357 [2024-07-13 11:41:14.963903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.357 [2024-07-13 11:41:14.963975] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:29:40.357 [2024-07-13 11:41:14.964248] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.357 [2024-07-13 11:41:14.966880] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.357 [2024-07-13 11:41:14.967066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:40.357 BaseBdev1 00:29:40.357 11:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:40.357 11:41:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:40.615 BaseBdev2_malloc 00:29:40.615 11:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:40.873 [2024-07-13 11:41:15.438706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:40.873 [2024-07-13 11:41:15.438953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.873 [2024-07-13 11:41:15.439024] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:29:40.873 [2024-07-13 11:41:15.439142] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.874 [2024-07-13 11:41:15.441539] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.874 [2024-07-13 11:41:15.441703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:40.874 BaseBdev2 00:29:40.874 11:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:40.874 11:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:41.132 BaseBdev3_malloc 00:29:41.132 11:41:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:41.132 [2024-07-13 11:41:15.839729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:41.132 [2024-07-13 11:41:15.839947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.132 [2024-07-13 11:41:15.840014] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:29:41.132 [2024-07-13 11:41:15.840148] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.132 [2024-07-13 11:41:15.842379] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.132 [2024-07-13 11:41:15.842549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:41.132 BaseBdev3 00:29:41.132 11:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:41.132 11:41:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:41.390 BaseBdev4_malloc 00:29:41.390 11:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:41.649 [2024-07-13 11:41:16.256426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:41.649 [2024-07-13 11:41:16.256649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.649 [2024-07-13 11:41:16.256732] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:41.649 [2024-07-13 11:41:16.256857] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.649 [2024-07-13 11:41:16.259100] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.649 [2024-07-13 11:41:16.259270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:41.649 BaseBdev4 00:29:41.649 11:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:41.907 spare_malloc 00:29:41.907 11:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:42.165 spare_delay 00:29:42.165 11:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:42.165 [2024-07-13 11:41:16.913241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:42.165 [2024-07-13 11:41:16.913542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:42.165 [2024-07-13 11:41:16.913607] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:42.165 [2024-07-13 11:41:16.913927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:42.165 [2024-07-13 11:41:16.916416] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:42.165 [2024-07-13 
11:41:16.916609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:42.424 spare 00:29:42.424 11:41:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:42.424 [2024-07-13 11:41:17.121368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:42.424 [2024-07-13 11:41:17.123405] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:42.424 [2024-07-13 11:41:17.123602] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:42.424 [2024-07-13 11:41:17.123701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:42.424 [2024-07-13 11:41:17.124023] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:29:42.424 [2024-07-13 11:41:17.124164] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:42.424 [2024-07-13 11:41:17.124316] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:29:42.424 [2024-07-13 11:41:17.124795] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:29:42.424 [2024-07-13 11:41:17.124912] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:29:42.424 [2024-07-13 11:41:17.125135] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.424 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.683 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:42.683 "name": "raid_bdev1", 00:29:42.683 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:42.683 "strip_size_kb": 0, 00:29:42.683 "state": "online", 00:29:42.683 "raid_level": "raid1", 00:29:42.683 "superblock": true, 00:29:42.683 "num_base_bdevs": 4, 00:29:42.683 "num_base_bdevs_discovered": 4, 00:29:42.683 "num_base_bdevs_operational": 4, 00:29:42.683 "base_bdevs_list": [ 00:29:42.683 { 
00:29:42.683 "name": "BaseBdev1", 00:29:42.683 "uuid": "4b50c796-536a-5f0d-a1d1-e8fcc3c2ae99", 00:29:42.683 "is_configured": true, 00:29:42.683 "data_offset": 2048, 00:29:42.683 "data_size": 63488 00:29:42.683 }, 00:29:42.683 { 00:29:42.683 "name": "BaseBdev2", 00:29:42.683 "uuid": "f9f0019d-8dea-5936-98e2-43b09be66b86", 00:29:42.683 "is_configured": true, 00:29:42.683 "data_offset": 2048, 00:29:42.683 "data_size": 63488 00:29:42.683 }, 00:29:42.683 { 00:29:42.683 "name": "BaseBdev3", 00:29:42.683 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:42.683 "is_configured": true, 00:29:42.683 "data_offset": 2048, 00:29:42.683 "data_size": 63488 00:29:42.683 }, 00:29:42.683 { 00:29:42.683 "name": "BaseBdev4", 00:29:42.683 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:42.683 "is_configured": true, 00:29:42.683 "data_offset": 2048, 00:29:42.683 "data_size": 63488 00:29:42.683 } 00:29:42.683 ] 00:29:42.683 }' 00:29:42.683 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:42.683 11:41:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:43.249 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:43.249 11:41:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:43.507 [2024-07-13 11:41:18.161817] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:43.507 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:29:43.507 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.507 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:43.765 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:44.023 [2024-07-13 11:41:18.613701] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:44.023 /dev/nbd0 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:44.023 1+0 records in 00:29:44.023 1+0 records out 00:29:44.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004943 s, 8.3 MB/s 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:29:44.023 11:41:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:29:50.584 63488+0 records in 00:29:50.584 63488+0 records out 00:29:50.584 32505856 bytes (33 MB, 31 MiB) copied, 6.23846 s, 5.2 MB/s 00:29:50.584 11:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:50.584 11:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:50.584 11:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:50.584 11:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.584 11:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:50.584 11:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.584 11:41:24 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:29:50.584 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:50.584 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:50.584 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:50.584 [2024-07-13 11:41:25.182733] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.584 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.584 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.584 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:50.584 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:50.584 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.584 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:50.842 [2024-07-13 11:41:25.366416] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:50.842 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.101 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:51.101 "name": "raid_bdev1", 00:29:51.101 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:51.101 "strip_size_kb": 0, 00:29:51.101 "state": "online", 00:29:51.101 "raid_level": "raid1", 00:29:51.101 "superblock": true, 00:29:51.101 "num_base_bdevs": 4, 00:29:51.101 "num_base_bdevs_discovered": 3, 00:29:51.101 "num_base_bdevs_operational": 3, 00:29:51.101 "base_bdevs_list": [ 00:29:51.101 { 00:29:51.101 "name": null, 00:29:51.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.101 "is_configured": false, 00:29:51.101 "data_offset": 2048, 00:29:51.101 "data_size": 63488 00:29:51.101 }, 00:29:51.101 { 00:29:51.101 "name": "BaseBdev2", 00:29:51.101 "uuid": "f9f0019d-8dea-5936-98e2-43b09be66b86", 00:29:51.101 "is_configured": true, 00:29:51.101 "data_offset": 2048, 00:29:51.101 
"data_size": 63488 00:29:51.101 }, 00:29:51.101 { 00:29:51.101 "name": "BaseBdev3", 00:29:51.101 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:51.101 "is_configured": true, 00:29:51.101 "data_offset": 2048, 00:29:51.101 "data_size": 63488 00:29:51.101 }, 00:29:51.101 { 00:29:51.101 "name": "BaseBdev4", 00:29:51.101 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:51.101 "is_configured": true, 00:29:51.101 "data_offset": 2048, 00:29:51.101 "data_size": 63488 00:29:51.101 } 00:29:51.101 ] 00:29:51.101 }' 00:29:51.101 11:41:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:51.101 11:41:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.668 11:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:51.927 [2024-07-13 11:41:26.610651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:51.927 [2024-07-13 11:41:26.621385] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca53e0 00:29:51.927 [2024-07-13 11:41:26.623460] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:51.927 11:41:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:53.302 "name": "raid_bdev1", 00:29:53.302 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:53.302 "strip_size_kb": 0, 00:29:53.302 "state": "online", 00:29:53.302 "raid_level": "raid1", 00:29:53.302 "superblock": true, 00:29:53.302 "num_base_bdevs": 4, 00:29:53.302 "num_base_bdevs_discovered": 4, 00:29:53.302 "num_base_bdevs_operational": 4, 00:29:53.302 "process": { 00:29:53.302 "type": "rebuild", 00:29:53.302 "target": "spare", 00:29:53.302 "progress": { 00:29:53.302 "blocks": 24576, 00:29:53.302 "percent": 38 00:29:53.302 } 00:29:53.302 }, 00:29:53.302 "base_bdevs_list": [ 00:29:53.302 { 00:29:53.302 "name": "spare", 00:29:53.302 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:29:53.302 "is_configured": true, 00:29:53.302 "data_offset": 2048, 00:29:53.302 "data_size": 63488 00:29:53.302 }, 00:29:53.302 { 00:29:53.302 "name": "BaseBdev2", 00:29:53.302 "uuid": "f9f0019d-8dea-5936-98e2-43b09be66b86", 00:29:53.302 "is_configured": true, 00:29:53.302 "data_offset": 2048, 00:29:53.302 "data_size": 63488 00:29:53.302 }, 00:29:53.302 { 00:29:53.302 "name": "BaseBdev3", 00:29:53.302 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:53.302 "is_configured": true, 
00:29:53.302 "data_offset": 2048, 00:29:53.302 "data_size": 63488 00:29:53.302 }, 00:29:53.302 { 00:29:53.302 "name": "BaseBdev4", 00:29:53.302 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:53.302 "is_configured": true, 00:29:53.302 "data_offset": 2048, 00:29:53.302 "data_size": 63488 00:29:53.302 } 00:29:53.302 ] 00:29:53.302 }' 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:53.302 11:41:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:53.560 [2024-07-13 11:41:28.210256] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:53.560 [2024-07-13 11:41:28.233684] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:53.560 [2024-07-13 11:41:28.233890] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:53.560 [2024-07-13 11:41:28.233943] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:53.560 [2024-07-13 11:41:28.234073] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.560 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.561 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.818 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:53.818 "name": "raid_bdev1", 00:29:53.818 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:53.818 "strip_size_kb": 0, 00:29:53.818 "state": "online", 00:29:53.818 "raid_level": "raid1", 00:29:53.818 "superblock": true, 00:29:53.818 "num_base_bdevs": 4, 00:29:53.818 "num_base_bdevs_discovered": 3, 00:29:53.818 "num_base_bdevs_operational": 3, 00:29:53.818 "base_bdevs_list": [ 00:29:53.818 { 00:29:53.818 "name": null, 
00:29:53.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.818 "is_configured": false, 00:29:53.818 "data_offset": 2048, 00:29:53.818 "data_size": 63488 00:29:53.818 }, 00:29:53.818 { 00:29:53.818 "name": "BaseBdev2", 00:29:53.818 "uuid": "f9f0019d-8dea-5936-98e2-43b09be66b86", 00:29:53.819 "is_configured": true, 00:29:53.819 "data_offset": 2048, 00:29:53.819 "data_size": 63488 00:29:53.819 }, 00:29:53.819 { 00:29:53.819 "name": "BaseBdev3", 00:29:53.819 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:53.819 "is_configured": true, 00:29:53.819 "data_offset": 2048, 00:29:53.819 "data_size": 63488 00:29:53.819 }, 00:29:53.819 { 00:29:53.819 "name": "BaseBdev4", 00:29:53.819 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:53.819 "is_configured": true, 00:29:53.819 "data_offset": 2048, 00:29:53.819 "data_size": 63488 00:29:53.819 } 00:29:53.819 ] 00:29:53.819 }' 00:29:53.819 11:41:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:53.819 11:41:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:54.385 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:54.385 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:54.385 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:54.385 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:54.385 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:54.385 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.385 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.642 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:54.642 "name": "raid_bdev1", 00:29:54.642 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:54.642 "strip_size_kb": 0, 00:29:54.642 "state": "online", 00:29:54.642 "raid_level": "raid1", 00:29:54.642 "superblock": true, 00:29:54.642 "num_base_bdevs": 4, 00:29:54.643 "num_base_bdevs_discovered": 3, 00:29:54.643 "num_base_bdevs_operational": 3, 00:29:54.643 "base_bdevs_list": [ 00:29:54.643 { 00:29:54.643 "name": null, 00:29:54.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.643 "is_configured": false, 00:29:54.643 "data_offset": 2048, 00:29:54.643 "data_size": 63488 00:29:54.643 }, 00:29:54.643 { 00:29:54.643 "name": "BaseBdev2", 00:29:54.643 "uuid": "f9f0019d-8dea-5936-98e2-43b09be66b86", 00:29:54.643 "is_configured": true, 00:29:54.643 "data_offset": 2048, 00:29:54.643 "data_size": 63488 00:29:54.643 }, 00:29:54.643 { 00:29:54.643 "name": "BaseBdev3", 00:29:54.643 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:54.643 "is_configured": true, 00:29:54.643 "data_offset": 2048, 00:29:54.643 "data_size": 63488 00:29:54.643 }, 00:29:54.643 { 00:29:54.643 "name": "BaseBdev4", 00:29:54.643 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:54.643 "is_configured": true, 00:29:54.643 "data_offset": 2048, 00:29:54.643 "data_size": 63488 00:29:54.643 } 00:29:54.643 ] 00:29:54.643 }' 00:29:54.643 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:54.900 11:41:29 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:54.900 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:54.900 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:54.900 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:55.158 [2024-07-13 11:41:29.691863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:55.158 [2024-07-13 11:41:29.701296] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5580 00:29:55.158 [2024-07-13 11:41:29.703409] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:55.158 11:41:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:56.091 11:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:56.092 11:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:56.092 11:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:56.092 11:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:56.092 11:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:56.092 11:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.092 11:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.349 11:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:56.349 "name": "raid_bdev1", 00:29:56.349 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:56.349 "strip_size_kb": 0, 00:29:56.349 "state": "online", 00:29:56.349 "raid_level": "raid1", 00:29:56.349 "superblock": true, 00:29:56.349 "num_base_bdevs": 4, 00:29:56.349 "num_base_bdevs_discovered": 4, 00:29:56.349 "num_base_bdevs_operational": 4, 00:29:56.349 "process": { 00:29:56.349 "type": "rebuild", 00:29:56.349 "target": "spare", 00:29:56.349 "progress": { 00:29:56.349 "blocks": 24576, 00:29:56.349 "percent": 38 00:29:56.349 } 00:29:56.349 }, 00:29:56.349 "base_bdevs_list": [ 00:29:56.349 { 00:29:56.349 "name": "spare", 00:29:56.350 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:29:56.350 "is_configured": true, 00:29:56.350 "data_offset": 2048, 00:29:56.350 "data_size": 63488 00:29:56.350 }, 00:29:56.350 { 00:29:56.350 "name": "BaseBdev2", 00:29:56.350 "uuid": "f9f0019d-8dea-5936-98e2-43b09be66b86", 00:29:56.350 "is_configured": true, 00:29:56.350 "data_offset": 2048, 00:29:56.350 "data_size": 63488 00:29:56.350 }, 00:29:56.350 { 00:29:56.350 "name": "BaseBdev3", 00:29:56.350 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:56.350 "is_configured": true, 00:29:56.350 "data_offset": 2048, 00:29:56.350 "data_size": 63488 00:29:56.350 }, 00:29:56.350 { 00:29:56.350 "name": "BaseBdev4", 00:29:56.350 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:56.350 "is_configured": true, 00:29:56.350 "data_offset": 2048, 00:29:56.350 "data_size": 63488 00:29:56.350 } 00:29:56.350 ] 00:29:56.350 }' 00:29:56.350 11:41:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // 
"none"' 00:29:56.350 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:56.350 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:56.608 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:56.608 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:56.608 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:56.608 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:56.608 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:56.608 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:56.608 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:56.608 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:56.608 [2024-07-13 11:41:31.349769] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:56.865 [2024-07-13 11:41:31.512852] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca5580 00:29:56.865 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:56.865 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:56.865 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:56.865 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:56.865 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:56.865 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:56.865 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:56.865 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.865 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.125 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:57.125 "name": "raid_bdev1", 00:29:57.125 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:57.125 "strip_size_kb": 0, 00:29:57.125 "state": "online", 00:29:57.125 "raid_level": "raid1", 00:29:57.125 "superblock": true, 00:29:57.125 "num_base_bdevs": 4, 00:29:57.125 "num_base_bdevs_discovered": 3, 00:29:57.125 "num_base_bdevs_operational": 3, 00:29:57.125 "process": { 00:29:57.125 "type": "rebuild", 00:29:57.125 "target": "spare", 00:29:57.125 "progress": { 00:29:57.125 "blocks": 38912, 00:29:57.125 "percent": 61 00:29:57.125 } 00:29:57.125 }, 00:29:57.125 "base_bdevs_list": [ 00:29:57.125 { 00:29:57.125 "name": "spare", 00:29:57.125 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:29:57.125 "is_configured": true, 00:29:57.125 "data_offset": 2048, 00:29:57.125 "data_size": 63488 00:29:57.125 }, 00:29:57.125 { 00:29:57.125 "name": null, 00:29:57.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.125 "is_configured": 
false, 00:29:57.125 "data_offset": 2048, 00:29:57.125 "data_size": 63488 00:29:57.125 }, 00:29:57.125 { 00:29:57.125 "name": "BaseBdev3", 00:29:57.125 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:57.125 "is_configured": true, 00:29:57.125 "data_offset": 2048, 00:29:57.125 "data_size": 63488 00:29:57.125 }, 00:29:57.125 { 00:29:57.125 "name": "BaseBdev4", 00:29:57.125 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:57.125 "is_configured": true, 00:29:57.125 "data_offset": 2048, 00:29:57.125 "data_size": 63488 00:29:57.125 } 00:29:57.125 ] 00:29:57.125 }' 00:29:57.125 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:57.125 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:57.125 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=942 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.384 11:41:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.384 11:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:57.384 "name": "raid_bdev1", 00:29:57.384 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:57.384 "strip_size_kb": 0, 00:29:57.384 "state": "online", 00:29:57.384 "raid_level": "raid1", 00:29:57.384 "superblock": true, 00:29:57.384 "num_base_bdevs": 4, 00:29:57.384 "num_base_bdevs_discovered": 3, 00:29:57.384 "num_base_bdevs_operational": 3, 00:29:57.384 "process": { 00:29:57.384 "type": "rebuild", 00:29:57.384 "target": "spare", 00:29:57.384 "progress": { 00:29:57.384 "blocks": 45056, 00:29:57.384 "percent": 70 00:29:57.384 } 00:29:57.384 }, 00:29:57.384 "base_bdevs_list": [ 00:29:57.384 { 00:29:57.384 "name": "spare", 00:29:57.384 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:29:57.384 "is_configured": true, 00:29:57.384 "data_offset": 2048, 00:29:57.384 "data_size": 63488 00:29:57.384 }, 00:29:57.384 { 00:29:57.384 "name": null, 00:29:57.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.384 "is_configured": false, 00:29:57.384 "data_offset": 2048, 00:29:57.384 "data_size": 63488 00:29:57.384 }, 00:29:57.384 { 00:29:57.384 "name": "BaseBdev3", 00:29:57.384 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:57.384 "is_configured": true, 00:29:57.384 "data_offset": 2048, 00:29:57.384 "data_size": 63488 00:29:57.384 }, 00:29:57.384 { 00:29:57.384 "name": "BaseBdev4", 00:29:57.384 "uuid": 
"e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:57.384 "is_configured": true, 00:29:57.384 "data_offset": 2048, 00:29:57.384 "data_size": 63488 00:29:57.384 } 00:29:57.384 ] 00:29:57.384 }' 00:29:57.384 11:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:57.384 11:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:57.643 11:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:57.643 11:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:57.643 11:41:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:58.207 [2024-07-13 11:41:32.921079] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:58.207 [2024-07-13 11:41:32.921318] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:58.207 [2024-07-13 11:41:32.921560] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:58.463 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:58.464 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:58.464 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:58.464 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:58.464 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:58.464 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:58.464 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.464 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.721 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:58.721 "name": "raid_bdev1", 00:29:58.721 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:58.721 "strip_size_kb": 0, 00:29:58.721 "state": "online", 00:29:58.721 "raid_level": "raid1", 00:29:58.721 "superblock": true, 00:29:58.721 "num_base_bdevs": 4, 00:29:58.721 "num_base_bdevs_discovered": 3, 00:29:58.721 "num_base_bdevs_operational": 3, 00:29:58.721 "base_bdevs_list": [ 00:29:58.721 { 00:29:58.721 "name": "spare", 00:29:58.721 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:29:58.721 "is_configured": true, 00:29:58.721 "data_offset": 2048, 00:29:58.721 "data_size": 63488 00:29:58.721 }, 00:29:58.721 { 00:29:58.721 "name": null, 00:29:58.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.721 "is_configured": false, 00:29:58.721 "data_offset": 2048, 00:29:58.721 "data_size": 63488 00:29:58.721 }, 00:29:58.721 { 00:29:58.721 "name": "BaseBdev3", 00:29:58.721 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:58.721 "is_configured": true, 00:29:58.721 "data_offset": 2048, 00:29:58.721 "data_size": 63488 00:29:58.721 }, 00:29:58.721 { 00:29:58.721 "name": "BaseBdev4", 00:29:58.721 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:58.721 "is_configured": true, 00:29:58.721 "data_offset": 2048, 00:29:58.721 "data_size": 63488 00:29:58.721 } 00:29:58.721 ] 00:29:58.721 }' 00:29:58.721 11:41:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.978 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:59.236 "name": "raid_bdev1", 00:29:59.236 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:59.236 "strip_size_kb": 0, 00:29:59.236 "state": "online", 00:29:59.236 "raid_level": "raid1", 00:29:59.236 "superblock": true, 00:29:59.236 "num_base_bdevs": 4, 00:29:59.236 "num_base_bdevs_discovered": 3, 00:29:59.236 "num_base_bdevs_operational": 3, 00:29:59.236 "base_bdevs_list": [ 00:29:59.236 { 00:29:59.236 "name": "spare", 00:29:59.236 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:29:59.236 "is_configured": true, 00:29:59.236 "data_offset": 2048, 00:29:59.236 "data_size": 63488 00:29:59.236 }, 00:29:59.236 { 00:29:59.236 "name": null, 00:29:59.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.236 "is_configured": false, 00:29:59.236 "data_offset": 2048, 00:29:59.236 "data_size": 63488 00:29:59.236 }, 00:29:59.236 { 00:29:59.236 "name": "BaseBdev3", 00:29:59.236 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:59.236 "is_configured": true, 00:29:59.236 "data_offset": 2048, 00:29:59.236 "data_size": 63488 00:29:59.236 }, 00:29:59.236 { 00:29:59.236 "name": "BaseBdev4", 00:29:59.236 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:59.236 "is_configured": true, 00:29:59.236 "data_offset": 2048, 00:29:59.236 "data_size": 63488 00:29:59.236 } 00:29:59.236 ] 00:29:59.236 }' 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.236 11:41:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.494 11:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:59.494 "name": "raid_bdev1", 00:29:59.494 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:29:59.494 "strip_size_kb": 0, 00:29:59.494 "state": "online", 00:29:59.495 "raid_level": "raid1", 00:29:59.495 "superblock": true, 00:29:59.495 "num_base_bdevs": 4, 00:29:59.495 "num_base_bdevs_discovered": 3, 00:29:59.495 "num_base_bdevs_operational": 3, 00:29:59.495 "base_bdevs_list": [ 00:29:59.495 { 00:29:59.495 "name": "spare", 00:29:59.495 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:29:59.495 "is_configured": true, 00:29:59.495 "data_offset": 2048, 00:29:59.495 "data_size": 63488 00:29:59.495 }, 00:29:59.495 { 00:29:59.495 "name": null, 00:29:59.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.495 "is_configured": false, 00:29:59.495 "data_offset": 2048, 00:29:59.495 "data_size": 63488 00:29:59.495 }, 00:29:59.495 { 00:29:59.495 "name": "BaseBdev3", 00:29:59.495 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:29:59.495 "is_configured": true, 00:29:59.495 "data_offset": 2048, 00:29:59.495 "data_size": 63488 00:29:59.495 }, 00:29:59.495 { 00:29:59.495 "name": "BaseBdev4", 00:29:59.495 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:29:59.495 "is_configured": true, 00:29:59.495 "data_offset": 2048, 00:29:59.495 "data_size": 63488 00:29:59.495 } 00:29:59.495 ] 00:29:59.495 }' 00:29:59.495 11:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:59.495 11:41:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:00.061 11:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:00.320 [2024-07-13 11:41:34.892432] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:00.320 [2024-07-13 11:41:34.892579] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:00.320 [2024-07-13 11:41:34.892771] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:00.320 [2024-07-13 11:41:34.892963] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:00.320 [2024-07-13 11:41:34.893090] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name 
raid_bdev1, state offline 00:30:00.320 11:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.320 11:41:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:00.579 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:00.837 /dev/nbd0 00:30:00.837 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:00.838 1+0 records in 00:30:00.838 1+0 records out 00:30:00.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298321 s, 13.7 MB/s 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:00.838 11:41:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:00.838 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:01.097 /dev/nbd1 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:01.097 1+0 records in 00:30:01.097 1+0 records out 00:30:01.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537191 s, 7.6 MB/s 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:01.097 11:41:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.097 11:41:35 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:01.356 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:01.356 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:01.356 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:01.356 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:01.356 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:01.356 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:01.356 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:01.613 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:01.613 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:01.613 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:01.613 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:01.613 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:01.613 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:01.613 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:01.870 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:01.871 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:01.871 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:01.871 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:01.871 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:01.871 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:01.871 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:01.871 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:01.871 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:01.871 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:02.128 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:02.386 [2024-07-13 11:41:36.892196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:02.386 [2024-07-13 11:41:36.892422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:02.386 [2024-07-13 11:41:36.892506] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:30:02.386 [2024-07-13 11:41:36.892802] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:02.386 [2024-07-13 11:41:36.895366] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:02.386 [2024-07-13 
11:41:36.895542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:02.386 [2024-07-13 11:41:36.895736] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:02.386 [2024-07-13 11:41:36.895896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:02.386 [2024-07-13 11:41:36.896160] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:02.386 [2024-07-13 11:41:36.896383] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:02.386 spare 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:02.386 11:41:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.386 [2024-07-13 11:41:36.996616] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:30:02.386 [2024-07-13 11:41:36.996749] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:02.386 [2024-07-13 11:41:36.996904] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5f20 00:30:02.386 [2024-07-13 11:41:36.997564] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:30:02.386 [2024-07-13 11:41:36.997683] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:30:02.386 [2024-07-13 11:41:36.997908] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:02.644 11:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:02.644 "name": "raid_bdev1", 00:30:02.644 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:02.644 "strip_size_kb": 0, 00:30:02.644 "state": "online", 00:30:02.644 "raid_level": "raid1", 00:30:02.644 "superblock": true, 00:30:02.644 "num_base_bdevs": 4, 00:30:02.644 "num_base_bdevs_discovered": 3, 00:30:02.644 "num_base_bdevs_operational": 3, 00:30:02.644 "base_bdevs_list": [ 00:30:02.644 { 00:30:02.644 "name": "spare", 00:30:02.644 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:30:02.644 "is_configured": true, 00:30:02.644 "data_offset": 2048, 00:30:02.644 "data_size": 63488 00:30:02.644 }, 00:30:02.644 { 00:30:02.644 "name": 
null, 00:30:02.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.644 "is_configured": false, 00:30:02.644 "data_offset": 2048, 00:30:02.644 "data_size": 63488 00:30:02.644 }, 00:30:02.644 { 00:30:02.644 "name": "BaseBdev3", 00:30:02.644 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:02.644 "is_configured": true, 00:30:02.644 "data_offset": 2048, 00:30:02.644 "data_size": 63488 00:30:02.644 }, 00:30:02.644 { 00:30:02.644 "name": "BaseBdev4", 00:30:02.644 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:02.644 "is_configured": true, 00:30:02.644 "data_offset": 2048, 00:30:02.644 "data_size": 63488 00:30:02.644 } 00:30:02.644 ] 00:30:02.644 }' 00:30:02.644 11:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:02.644 11:41:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:03.211 11:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:03.211 11:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:03.211 11:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:03.211 11:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:03.211 11:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:03.211 11:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.211 11:41:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.470 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:03.470 "name": "raid_bdev1", 00:30:03.470 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:03.470 "strip_size_kb": 0, 00:30:03.470 "state": "online", 00:30:03.470 "raid_level": "raid1", 00:30:03.470 "superblock": true, 00:30:03.470 "num_base_bdevs": 4, 00:30:03.470 "num_base_bdevs_discovered": 3, 00:30:03.470 "num_base_bdevs_operational": 3, 00:30:03.470 "base_bdevs_list": [ 00:30:03.470 { 00:30:03.470 "name": "spare", 00:30:03.470 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:30:03.470 "is_configured": true, 00:30:03.470 "data_offset": 2048, 00:30:03.470 "data_size": 63488 00:30:03.470 }, 00:30:03.470 { 00:30:03.470 "name": null, 00:30:03.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.470 "is_configured": false, 00:30:03.470 "data_offset": 2048, 00:30:03.470 "data_size": 63488 00:30:03.470 }, 00:30:03.470 { 00:30:03.470 "name": "BaseBdev3", 00:30:03.470 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:03.470 "is_configured": true, 00:30:03.470 "data_offset": 2048, 00:30:03.470 "data_size": 63488 00:30:03.470 }, 00:30:03.470 { 00:30:03.470 "name": "BaseBdev4", 00:30:03.470 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:03.470 "is_configured": true, 00:30:03.470 "data_offset": 2048, 00:30:03.470 "data_size": 63488 00:30:03.470 } 00:30:03.470 ] 00:30:03.470 }' 00:30:03.470 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:03.470 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:03.470 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:03.470 11:41:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:03.470 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.470 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:03.729 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:03.729 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:03.988 [2024-07-13 11:41:38.572840] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.988 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.246 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:04.246 "name": "raid_bdev1", 00:30:04.246 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:04.246 "strip_size_kb": 0, 00:30:04.246 "state": "online", 00:30:04.246 "raid_level": "raid1", 00:30:04.246 "superblock": true, 00:30:04.246 "num_base_bdevs": 4, 00:30:04.246 "num_base_bdevs_discovered": 2, 00:30:04.246 "num_base_bdevs_operational": 2, 00:30:04.246 "base_bdevs_list": [ 00:30:04.246 { 00:30:04.246 "name": null, 00:30:04.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.246 "is_configured": false, 00:30:04.246 "data_offset": 2048, 00:30:04.246 "data_size": 63488 00:30:04.246 }, 00:30:04.246 { 00:30:04.246 "name": null, 00:30:04.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.246 "is_configured": false, 00:30:04.246 "data_offset": 2048, 00:30:04.246 "data_size": 63488 00:30:04.246 }, 00:30:04.246 { 00:30:04.246 "name": "BaseBdev3", 00:30:04.246 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:04.246 "is_configured": true, 00:30:04.246 "data_offset": 2048, 00:30:04.246 "data_size": 63488 00:30:04.246 }, 00:30:04.246 { 00:30:04.246 "name": "BaseBdev4", 00:30:04.246 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:04.246 "is_configured": true, 00:30:04.246 "data_offset": 2048, 00:30:04.246 "data_size": 63488 00:30:04.246 } 00:30:04.246 
] 00:30:04.246 }' 00:30:04.246 11:41:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:04.246 11:41:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.821 11:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:05.080 [2024-07-13 11:41:39.717087] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:05.080 [2024-07-13 11:41:39.717333] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:05.080 [2024-07-13 11:41:39.717448] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:05.080 [2024-07-13 11:41:39.717535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:05.080 [2024-07-13 11:41:39.727628] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc60c0 00:30:05.080 [2024-07-13 11:41:39.729666] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:05.080 11:41:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:30:06.014 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:06.014 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:06.014 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:06.014 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:06.014 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:06.014 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.014 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.272 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:06.272 "name": "raid_bdev1", 00:30:06.272 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:06.272 "strip_size_kb": 0, 00:30:06.272 "state": "online", 00:30:06.272 "raid_level": "raid1", 00:30:06.272 "superblock": true, 00:30:06.272 "num_base_bdevs": 4, 00:30:06.272 "num_base_bdevs_discovered": 3, 00:30:06.272 "num_base_bdevs_operational": 3, 00:30:06.272 "process": { 00:30:06.272 "type": "rebuild", 00:30:06.272 "target": "spare", 00:30:06.272 "progress": { 00:30:06.272 "blocks": 22528, 00:30:06.272 "percent": 35 00:30:06.272 } 00:30:06.272 }, 00:30:06.272 "base_bdevs_list": [ 00:30:06.272 { 00:30:06.272 "name": "spare", 00:30:06.272 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:30:06.272 "is_configured": true, 00:30:06.272 "data_offset": 2048, 00:30:06.272 "data_size": 63488 00:30:06.272 }, 00:30:06.272 { 00:30:06.272 "name": null, 00:30:06.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.272 "is_configured": false, 00:30:06.272 "data_offset": 2048, 00:30:06.272 "data_size": 63488 00:30:06.272 }, 00:30:06.272 { 00:30:06.272 "name": "BaseBdev3", 00:30:06.272 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:06.272 "is_configured": true, 00:30:06.272 "data_offset": 2048, 00:30:06.272 "data_size": 63488 
00:30:06.272 }, 00:30:06.272 { 00:30:06.273 "name": "BaseBdev4", 00:30:06.273 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:06.273 "is_configured": true, 00:30:06.273 "data_offset": 2048, 00:30:06.273 "data_size": 63488 00:30:06.273 } 00:30:06.273 ] 00:30:06.273 }' 00:30:06.273 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:06.273 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:06.273 11:41:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:06.273 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:06.273 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:06.530 [2024-07-13 11:41:41.272317] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:06.789 [2024-07-13 11:41:41.339747] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:06.789 [2024-07-13 11:41:41.339935] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:06.789 [2024-07-13 11:41:41.339986] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:06.789 [2024-07-13 11:41:41.340123] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.789 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.047 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:07.047 "name": "raid_bdev1", 00:30:07.047 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:07.047 "strip_size_kb": 0, 00:30:07.047 "state": "online", 00:30:07.047 "raid_level": "raid1", 00:30:07.047 "superblock": true, 00:30:07.047 "num_base_bdevs": 4, 00:30:07.047 "num_base_bdevs_discovered": 2, 00:30:07.047 "num_base_bdevs_operational": 2, 00:30:07.047 "base_bdevs_list": [ 00:30:07.047 { 00:30:07.047 "name": null, 00:30:07.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.047 
"is_configured": false, 00:30:07.047 "data_offset": 2048, 00:30:07.047 "data_size": 63488 00:30:07.047 }, 00:30:07.047 { 00:30:07.047 "name": null, 00:30:07.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.047 "is_configured": false, 00:30:07.047 "data_offset": 2048, 00:30:07.047 "data_size": 63488 00:30:07.047 }, 00:30:07.047 { 00:30:07.047 "name": "BaseBdev3", 00:30:07.047 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:07.047 "is_configured": true, 00:30:07.047 "data_offset": 2048, 00:30:07.047 "data_size": 63488 00:30:07.047 }, 00:30:07.047 { 00:30:07.047 "name": "BaseBdev4", 00:30:07.047 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:07.047 "is_configured": true, 00:30:07.047 "data_offset": 2048, 00:30:07.047 "data_size": 63488 00:30:07.047 } 00:30:07.047 ] 00:30:07.047 }' 00:30:07.047 11:41:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:07.047 11:41:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.613 11:41:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:07.872 [2024-07-13 11:41:42.457955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:07.872 [2024-07-13 11:41:42.458141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:07.872 [2024-07-13 11:41:42.458212] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:30:07.872 [2024-07-13 11:41:42.458335] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:07.872 [2024-07-13 11:41:42.458907] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:07.872 [2024-07-13 11:41:42.459060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:07.872 [2024-07-13 11:41:42.459272] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:07.872 [2024-07-13 11:41:42.459385] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:07.872 [2024-07-13 11:41:42.459474] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:07.872 [2024-07-13 11:41:42.459548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:07.872 [2024-07-13 11:41:42.468888] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc6400 00:30:07.872 spare 00:30:07.872 [2024-07-13 11:41:42.471016] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:07.872 11:41:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:08.806 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:08.806 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:08.806 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:08.806 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:08.806 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:08.806 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.806 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.064 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.064 "name": "raid_bdev1", 00:30:09.064 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:09.064 "strip_size_kb": 0, 00:30:09.064 "state": "online", 00:30:09.064 "raid_level": "raid1", 00:30:09.064 "superblock": true, 00:30:09.064 "num_base_bdevs": 4, 00:30:09.064 "num_base_bdevs_discovered": 3, 00:30:09.064 "num_base_bdevs_operational": 3, 00:30:09.064 "process": { 00:30:09.064 "type": "rebuild", 00:30:09.064 "target": "spare", 00:30:09.064 "progress": { 00:30:09.064 "blocks": 24576, 00:30:09.064 "percent": 38 00:30:09.064 } 00:30:09.064 }, 00:30:09.064 "base_bdevs_list": [ 00:30:09.064 { 00:30:09.064 "name": "spare", 00:30:09.064 "uuid": "fa1befb6-428c-5c4d-b325-c127ed2db840", 00:30:09.064 "is_configured": true, 00:30:09.064 "data_offset": 2048, 00:30:09.064 "data_size": 63488 00:30:09.064 }, 00:30:09.064 { 00:30:09.064 "name": null, 00:30:09.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.064 "is_configured": false, 00:30:09.064 "data_offset": 2048, 00:30:09.064 "data_size": 63488 00:30:09.064 }, 00:30:09.064 { 00:30:09.064 "name": "BaseBdev3", 00:30:09.064 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:09.064 "is_configured": true, 00:30:09.064 "data_offset": 2048, 00:30:09.064 "data_size": 63488 00:30:09.064 }, 00:30:09.064 { 00:30:09.064 "name": "BaseBdev4", 00:30:09.064 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:09.064 "is_configured": true, 00:30:09.064 "data_offset": 2048, 00:30:09.064 "data_size": 63488 00:30:09.064 } 00:30:09.064 ] 00:30:09.064 }' 00:30:09.064 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:09.064 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:09.064 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:09.322 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:09.322 11:41:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:09.322 [2024-07-13 11:41:44.053300] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:09.590 [2024-07-13 11:41:44.079811] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:09.590 [2024-07-13 11:41:44.080000] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:09.590 [2024-07-13 11:41:44.080051] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:09.591 [2024-07-13 11:41:44.080171] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.591 "name": "raid_bdev1", 00:30:09.591 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:09.591 "strip_size_kb": 0, 00:30:09.591 "state": "online", 00:30:09.591 "raid_level": "raid1", 00:30:09.591 "superblock": true, 00:30:09.591 "num_base_bdevs": 4, 00:30:09.591 "num_base_bdevs_discovered": 2, 00:30:09.591 "num_base_bdevs_operational": 2, 00:30:09.591 "base_bdevs_list": [ 00:30:09.591 { 00:30:09.591 "name": null, 00:30:09.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.591 "is_configured": false, 00:30:09.591 "data_offset": 2048, 00:30:09.591 "data_size": 63488 00:30:09.591 }, 00:30:09.591 { 00:30:09.591 "name": null, 00:30:09.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.591 "is_configured": false, 00:30:09.591 "data_offset": 2048, 00:30:09.591 "data_size": 63488 00:30:09.591 }, 00:30:09.591 { 00:30:09.591 "name": "BaseBdev3", 00:30:09.591 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:09.591 "is_configured": true, 00:30:09.591 "data_offset": 2048, 00:30:09.591 "data_size": 63488 00:30:09.591 }, 00:30:09.591 { 00:30:09.591 "name": "BaseBdev4", 00:30:09.591 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:09.591 "is_configured": true, 00:30:09.591 "data_offset": 2048, 00:30:09.591 "data_size": 63488 00:30:09.591 } 00:30:09.591 ] 00:30:09.591 }' 
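The verify helpers above poll raid bdev state by piping bdev_raid_get_bdevs through jq. A minimal sketch of that query pattern, issued by hand against the same RPC socket (the socket path, bdev name and jq filters are taken verbatim from the log; the standalone snippet itself is illustrative and is not part of the test script):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-raid.sock
    # fetch the entry for raid_bdev1 out of the full bdev_raid_get_bdevs dump
    info=$($RPC -s $SOCK bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    echo "$info" | jq -r '.process.type // "none"'    # "rebuild" while a rebuild is in progress, otherwise "none"
    echo "$info" | jq -r '.process.target // "none"'  # base bdev being rebuilt, e.g. "spare"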
00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.591 11:41:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.589 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:10.589 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:10.589 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:10.589 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:10.589 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:10.589 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.589 11:41:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.589 11:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:10.589 "name": "raid_bdev1", 00:30:10.589 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:10.589 "strip_size_kb": 0, 00:30:10.589 "state": "online", 00:30:10.589 "raid_level": "raid1", 00:30:10.589 "superblock": true, 00:30:10.589 "num_base_bdevs": 4, 00:30:10.589 "num_base_bdevs_discovered": 2, 00:30:10.589 "num_base_bdevs_operational": 2, 00:30:10.589 "base_bdevs_list": [ 00:30:10.589 { 00:30:10.589 "name": null, 00:30:10.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.589 "is_configured": false, 00:30:10.589 "data_offset": 2048, 00:30:10.589 "data_size": 63488 00:30:10.589 }, 00:30:10.589 { 00:30:10.589 "name": null, 00:30:10.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.589 "is_configured": false, 00:30:10.589 "data_offset": 2048, 00:30:10.589 "data_size": 63488 00:30:10.589 }, 00:30:10.589 { 00:30:10.589 "name": "BaseBdev3", 00:30:10.589 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:10.589 "is_configured": true, 00:30:10.589 "data_offset": 2048, 00:30:10.589 "data_size": 63488 00:30:10.589 }, 00:30:10.589 { 00:30:10.589 "name": "BaseBdev4", 00:30:10.589 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:10.589 "is_configured": true, 00:30:10.589 "data_offset": 2048, 00:30:10.589 "data_size": 63488 00:30:10.589 } 00:30:10.589 ] 00:30:10.589 }' 00:30:10.589 11:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:10.589 11:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:10.589 11:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:10.847 11:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:10.847 11:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:10.847 11:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:11.105 [2024-07-13 11:41:45.782334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:11.105 [2024-07-13 11:41:45.782540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:30:11.105 [2024-07-13 11:41:45.782620] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:30:11.105 [2024-07-13 11:41:45.782926] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:11.105 [2024-07-13 11:41:45.783498] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:11.105 [2024-07-13 11:41:45.783669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:11.105 [2024-07-13 11:41:45.783904] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:11.105 [2024-07-13 11:41:45.784019] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:11.105 [2024-07-13 11:41:45.784116] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:11.105 BaseBdev1 00:30:11.105 11:41:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.480 11:41:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.480 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:12.480 "name": "raid_bdev1", 00:30:12.480 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:12.480 "strip_size_kb": 0, 00:30:12.480 "state": "online", 00:30:12.480 "raid_level": "raid1", 00:30:12.480 "superblock": true, 00:30:12.480 "num_base_bdevs": 4, 00:30:12.480 "num_base_bdevs_discovered": 2, 00:30:12.480 "num_base_bdevs_operational": 2, 00:30:12.480 "base_bdevs_list": [ 00:30:12.480 { 00:30:12.480 "name": null, 00:30:12.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.480 "is_configured": false, 00:30:12.480 "data_offset": 2048, 00:30:12.480 "data_size": 63488 00:30:12.480 }, 00:30:12.480 { 00:30:12.480 "name": null, 00:30:12.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.480 "is_configured": false, 00:30:12.480 "data_offset": 2048, 00:30:12.480 "data_size": 63488 00:30:12.480 }, 00:30:12.480 { 00:30:12.480 "name": "BaseBdev3", 00:30:12.480 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:12.480 "is_configured": 
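In the step above the test deletes the passthru bdev BaseBdev1 and recreates it on top of BaseBdev1_malloc; because the raid superblock found on it carries sequence number 1 (older than raid_bdev1's 6) and raid_bdev1's superblock no longer lists its uuid, the examine path logs it but does not re-add it to the array. A hedged sketch of that delete/recreate pair, using only RPCs that appear in the log (not the test script itself):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_passthru_delete BaseBdev1                          # drop the old passthru vbdev
    $RPC bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1   # recreate it on the same malloc base
    # the examine callback runs automatically; the stale superblock keeps it out of raid_bdev1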
true, 00:30:12.480 "data_offset": 2048, 00:30:12.480 "data_size": 63488 00:30:12.480 }, 00:30:12.480 { 00:30:12.480 "name": "BaseBdev4", 00:30:12.480 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:12.480 "is_configured": true, 00:30:12.480 "data_offset": 2048, 00:30:12.480 "data_size": 63488 00:30:12.480 } 00:30:12.480 ] 00:30:12.480 }' 00:30:12.480 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:12.480 11:41:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.047 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:13.047 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:13.047 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:13.047 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:13.047 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:13.047 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.047 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:13.306 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:13.306 "name": "raid_bdev1", 00:30:13.306 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:13.306 "strip_size_kb": 0, 00:30:13.306 "state": "online", 00:30:13.306 "raid_level": "raid1", 00:30:13.306 "superblock": true, 00:30:13.306 "num_base_bdevs": 4, 00:30:13.306 "num_base_bdevs_discovered": 2, 00:30:13.306 "num_base_bdevs_operational": 2, 00:30:13.306 "base_bdevs_list": [ 00:30:13.306 { 00:30:13.306 "name": null, 00:30:13.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.306 "is_configured": false, 00:30:13.306 "data_offset": 2048, 00:30:13.306 "data_size": 63488 00:30:13.306 }, 00:30:13.306 { 00:30:13.306 "name": null, 00:30:13.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:13.306 "is_configured": false, 00:30:13.306 "data_offset": 2048, 00:30:13.306 "data_size": 63488 00:30:13.306 }, 00:30:13.306 { 00:30:13.306 "name": "BaseBdev3", 00:30:13.306 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:13.306 "is_configured": true, 00:30:13.306 "data_offset": 2048, 00:30:13.306 "data_size": 63488 00:30:13.306 }, 00:30:13.306 { 00:30:13.306 "name": "BaseBdev4", 00:30:13.306 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:13.306 "is_configured": true, 00:30:13.306 "data_offset": 2048, 00:30:13.306 "data_size": 63488 00:30:13.306 } 00:30:13.306 ] 00:30:13.306 }' 00:30:13.306 11:41:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:13.306 11:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:13.306 11:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 
-- # local es=0 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:13.564 [2024-07-13 11:41:48.268677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:13.564 [2024-07-13 11:41:48.268911] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:13.564 [2024-07-13 11:41:48.269024] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:13.564 request: 00:30:13.564 { 00:30:13.564 "base_bdev": "BaseBdev1", 00:30:13.564 "raid_bdev": "raid_bdev1", 00:30:13.564 "method": "bdev_raid_add_base_bdev", 00:30:13.564 "req_id": 1 00:30:13.564 } 00:30:13.564 Got JSON-RPC error response 00:30:13.564 response: 00:30:13.564 { 00:30:13.564 "code": -22, 00:30:13.564 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:13.564 } 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:13.564 11:41:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:14.940 "name": "raid_bdev1", 00:30:14.940 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:14.940 "strip_size_kb": 0, 00:30:14.940 "state": "online", 00:30:14.940 "raid_level": "raid1", 00:30:14.940 "superblock": true, 00:30:14.940 "num_base_bdevs": 4, 00:30:14.940 "num_base_bdevs_discovered": 2, 00:30:14.940 "num_base_bdevs_operational": 2, 00:30:14.940 "base_bdevs_list": [ 00:30:14.940 { 00:30:14.940 "name": null, 00:30:14.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.940 "is_configured": false, 00:30:14.940 "data_offset": 2048, 00:30:14.940 "data_size": 63488 00:30:14.940 }, 00:30:14.940 { 00:30:14.940 "name": null, 00:30:14.940 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:14.940 "is_configured": false, 00:30:14.940 "data_offset": 2048, 00:30:14.940 "data_size": 63488 00:30:14.940 }, 00:30:14.940 { 00:30:14.940 "name": "BaseBdev3", 00:30:14.940 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:14.940 "is_configured": true, 00:30:14.940 "data_offset": 2048, 00:30:14.940 "data_size": 63488 00:30:14.940 }, 00:30:14.940 { 00:30:14.940 "name": "BaseBdev4", 00:30:14.940 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:14.940 "is_configured": true, 00:30:14.940 "data_offset": 2048, 00:30:14.940 "data_size": 63488 00:30:14.940 } 00:30:14.940 ] 00:30:14.940 }' 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:14.940 11:41:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.507 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:15.507 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:15.507 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:15.507 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:15.507 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:15.507 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.507 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:15.766 "name": "raid_bdev1", 00:30:15.766 "uuid": "a1cc7c76-68f2-42c6-af57-f7aeb7e39a3e", 00:30:15.766 "strip_size_kb": 0, 00:30:15.766 "state": "online", 00:30:15.766 "raid_level": "raid1", 00:30:15.766 "superblock": 
true, 00:30:15.766 "num_base_bdevs": 4, 00:30:15.766 "num_base_bdevs_discovered": 2, 00:30:15.766 "num_base_bdevs_operational": 2, 00:30:15.766 "base_bdevs_list": [ 00:30:15.766 { 00:30:15.766 "name": null, 00:30:15.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.766 "is_configured": false, 00:30:15.766 "data_offset": 2048, 00:30:15.766 "data_size": 63488 00:30:15.766 }, 00:30:15.766 { 00:30:15.766 "name": null, 00:30:15.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.766 "is_configured": false, 00:30:15.766 "data_offset": 2048, 00:30:15.766 "data_size": 63488 00:30:15.766 }, 00:30:15.766 { 00:30:15.766 "name": "BaseBdev3", 00:30:15.766 "uuid": "80b1ffd4-0b51-5e8e-b3e4-0b6ed3557499", 00:30:15.766 "is_configured": true, 00:30:15.766 "data_offset": 2048, 00:30:15.766 "data_size": 63488 00:30:15.766 }, 00:30:15.766 { 00:30:15.766 "name": "BaseBdev4", 00:30:15.766 "uuid": "e22a8ca9-94e4-5f66-ae93-bb453156db60", 00:30:15.766 "is_configured": true, 00:30:15.766 "data_offset": 2048, 00:30:15.766 "data_size": 63488 00:30:15.766 } 00:30:15.766 ] 00:30:15.766 }' 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 148407 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 148407 ']' 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 148407 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:15.766 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 148407 00:30:16.025 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:16.025 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:16.025 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 148407' 00:30:16.025 killing process with pid 148407 00:30:16.025 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 148407 00:30:16.025 Received shutdown signal, test time was about 60.000000 seconds 00:30:16.025 00:30:16.025 Latency(us) 00:30:16.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.025 =================================================================================================================== 00:30:16.025 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:16.025 11:41:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 148407 00:30:16.025 [2024-07-13 11:41:50.522466] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:16.025 [2024-07-13 11:41:50.522584] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:16.025 [2024-07-13 11:41:50.522674] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:30:16.025 [2024-07-13 11:41:50.522766] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:30:16.285 [2024-07-13 11:41:50.854386] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:17.222 ************************************ 00:30:17.222 END TEST raid_rebuild_test_sb 00:30:17.222 ************************************ 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:30:17.222 00:30:17.222 real 0m38.393s 00:30:17.222 user 0m58.231s 00:30:17.222 sys 0m4.664s 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.222 11:41:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:17.222 11:41:51 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:30:17.222 11:41:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:17.222 11:41:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:17.222 11:41:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:17.222 ************************************ 00:30:17.222 START TEST raid_rebuild_test_io 00:30:17.222 ************************************ 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:17.222 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 
00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=149430 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 149430 /var/tmp/spdk-raid.sock 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 149430 ']' 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:17.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:17.223 11:41:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:17.482 [2024-07-13 11:41:52.010596] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:30:17.482 [2024-07-13 11:41:52.011009] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149430 ] 00:30:17.482 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:17.482 Zero copy mechanism will not be used. 
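The raid_rebuild_test_io run that starts here drives background I/O from bdevperf while base bdevs are removed and rebuilt. The invocation recorded above, reformatted for readability (flags are verbatim from the log; the gloss of what they select, 50/50 random read/write with 3 MiB I/Os at queue depth 2 for 60 seconds plus bdev_raid debug logging, follows the standard bdevperf options):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid

The 3 MiB I/O size (-o 3M = 3145728 bytes) is what triggers the repeated "greater than zero copy threshold (65536)" notices in the output.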
00:30:17.482 [2024-07-13 11:41:52.162100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.741 [2024-07-13 11:41:52.346153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.999 [2024-07-13 11:41:52.532018] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:18.269 11:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.269 11:41:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:30:18.269 11:41:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:18.269 11:41:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:18.527 BaseBdev1_malloc 00:30:18.527 11:41:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:18.785 [2024-07-13 11:41:53.394875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:18.785 [2024-07-13 11:41:53.395249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.785 [2024-07-13 11:41:53.395319] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:30:18.785 [2024-07-13 11:41:53.395590] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.785 [2024-07-13 11:41:53.397850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.785 [2024-07-13 11:41:53.398019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:18.785 BaseBdev1 00:30:18.785 11:41:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:18.785 11:41:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:19.042 BaseBdev2_malloc 00:30:19.042 11:41:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:19.300 [2024-07-13 11:41:53.859537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:19.301 [2024-07-13 11:41:53.859772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:19.301 [2024-07-13 11:41:53.859918] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:30:19.301 [2024-07-13 11:41:53.860030] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:19.301 [2024-07-13 11:41:53.862301] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:19.301 [2024-07-13 11:41:53.862465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:19.301 BaseBdev2 00:30:19.301 11:41:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:19.301 11:41:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:19.558 BaseBdev3_malloc 00:30:19.559 11:41:54 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:19.817 [2024-07-13 11:41:54.328586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:19.817 [2024-07-13 11:41:54.328808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:19.817 [2024-07-13 11:41:54.328880] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:30:19.817 [2024-07-13 11:41:54.329158] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:19.817 [2024-07-13 11:41:54.331535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:19.817 [2024-07-13 11:41:54.331700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:19.817 BaseBdev3 00:30:19.817 11:41:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:19.817 11:41:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:19.817 BaseBdev4_malloc 00:30:19.817 11:41:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:20.076 [2024-07-13 11:41:54.741341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:20.076 [2024-07-13 11:41:54.741567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:20.076 [2024-07-13 11:41:54.741635] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:20.076 [2024-07-13 11:41:54.741757] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:20.076 [2024-07-13 11:41:54.744012] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:20.076 [2024-07-13 11:41:54.744181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:20.076 BaseBdev4 00:30:20.076 11:41:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:20.334 spare_malloc 00:30:20.334 11:41:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:20.593 spare_delay 00:30:20.593 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:20.593 [2024-07-13 11:41:55.342060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:20.593 [2024-07-13 11:41:55.342319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:20.593 [2024-07-13 11:41:55.342386] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:30:20.593 [2024-07-13 11:41:55.342712] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:20.593 [2024-07-13 11:41:55.345410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:20.593 [2024-07-13 
11:41:55.345605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:20.852 spare 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:20.852 [2024-07-13 11:41:55.534296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:20.852 [2024-07-13 11:41:55.536312] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:20.852 [2024-07-13 11:41:55.536514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:20.852 [2024-07-13 11:41:55.536612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:20.852 [2024-07-13 11:41:55.536825] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:30:20.852 [2024-07-13 11:41:55.536941] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:20.852 [2024-07-13 11:41:55.537112] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:20.852 [2024-07-13 11:41:55.537606] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:30:20.852 [2024-07-13 11:41:55.537725] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:30:20.852 [2024-07-13 11:41:55.537977] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.852 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.111 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:21.111 "name": "raid_bdev1", 00:30:21.111 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:21.111 "strip_size_kb": 0, 00:30:21.111 "state": "online", 00:30:21.111 "raid_level": "raid1", 00:30:21.111 "superblock": false, 00:30:21.111 "num_base_bdevs": 4, 00:30:21.111 "num_base_bdevs_discovered": 4, 00:30:21.111 "num_base_bdevs_operational": 4, 00:30:21.111 "base_bdevs_list": [ 00:30:21.111 { 
00:30:21.111 "name": "BaseBdev1", 00:30:21.111 "uuid": "88d15d46-c185-5c44-af33-3c4849d49f68", 00:30:21.111 "is_configured": true, 00:30:21.111 "data_offset": 0, 00:30:21.111 "data_size": 65536 00:30:21.111 }, 00:30:21.111 { 00:30:21.111 "name": "BaseBdev2", 00:30:21.111 "uuid": "996cc497-3d91-571a-ba75-d49aeeaaf8a6", 00:30:21.111 "is_configured": true, 00:30:21.111 "data_offset": 0, 00:30:21.111 "data_size": 65536 00:30:21.111 }, 00:30:21.111 { 00:30:21.111 "name": "BaseBdev3", 00:30:21.111 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:21.111 "is_configured": true, 00:30:21.111 "data_offset": 0, 00:30:21.111 "data_size": 65536 00:30:21.111 }, 00:30:21.111 { 00:30:21.111 "name": "BaseBdev4", 00:30:21.111 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:21.111 "is_configured": true, 00:30:21.111 "data_offset": 0, 00:30:21.111 "data_size": 65536 00:30:21.111 } 00:30:21.111 ] 00:30:21.111 }' 00:30:21.111 11:41:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:21.111 11:41:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:21.678 11:41:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:21.678 11:41:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:21.937 [2024-07-13 11:41:56.618699] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:21.937 11:41:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:30:21.937 11:41:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.937 11:41:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:22.196 11:41:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:30:22.196 11:41:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:30:22.196 11:41:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:22.196 11:41:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:22.455 [2024-07-13 11:41:56.981804] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:22.455 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:22.455 Zero copy mechanism will not be used. 00:30:22.455 Running I/O for 60 seconds... 
00:30:22.455 [2024-07-13 11:41:57.113110] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:22.455 [2024-07-13 11:41:57.119371] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.455 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.714 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:22.714 "name": "raid_bdev1", 00:30:22.714 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:22.714 "strip_size_kb": 0, 00:30:22.714 "state": "online", 00:30:22.714 "raid_level": "raid1", 00:30:22.714 "superblock": false, 00:30:22.714 "num_base_bdevs": 4, 00:30:22.714 "num_base_bdevs_discovered": 3, 00:30:22.714 "num_base_bdevs_operational": 3, 00:30:22.714 "base_bdevs_list": [ 00:30:22.714 { 00:30:22.714 "name": null, 00:30:22.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.714 "is_configured": false, 00:30:22.714 "data_offset": 0, 00:30:22.714 "data_size": 65536 00:30:22.714 }, 00:30:22.714 { 00:30:22.714 "name": "BaseBdev2", 00:30:22.714 "uuid": "996cc497-3d91-571a-ba75-d49aeeaaf8a6", 00:30:22.714 "is_configured": true, 00:30:22.714 "data_offset": 0, 00:30:22.714 "data_size": 65536 00:30:22.714 }, 00:30:22.714 { 00:30:22.714 "name": "BaseBdev3", 00:30:22.714 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:22.714 "is_configured": true, 00:30:22.714 "data_offset": 0, 00:30:22.714 "data_size": 65536 00:30:22.714 }, 00:30:22.714 { 00:30:22.714 "name": "BaseBdev4", 00:30:22.714 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:22.714 "is_configured": true, 00:30:22.714 "data_offset": 0, 00:30:22.714 "data_size": 65536 00:30:22.714 } 00:30:22.714 ] 00:30:22.714 }' 00:30:22.714 11:41:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:22.714 11:41:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:23.650 11:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:23.650 [2024-07-13 11:41:58.209744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:30:23.650 11:41:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:23.650 [2024-07-13 11:41:58.255843] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:30:23.650 [2024-07-13 11:41:58.258194] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:23.650 [2024-07-13 11:41:58.389767] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:23.650 [2024-07-13 11:41:58.391537] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:23.908 [2024-07-13 11:41:58.616357] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:23.908 [2024-07-13 11:41:58.617465] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:24.474 [2024-07-13 11:41:58.963785] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:24.474 [2024-07-13 11:41:58.964595] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:24.474 [2024-07-13 11:41:59.082844] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:24.733 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:24.733 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:24.733 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:24.733 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:24.733 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:24.733 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.733 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.733 [2024-07-13 11:41:59.427389] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:24.733 [2024-07-13 11:41:59.428173] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:24.991 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:24.991 "name": "raid_bdev1", 00:30:24.991 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:24.991 "strip_size_kb": 0, 00:30:24.991 "state": "online", 00:30:24.991 "raid_level": "raid1", 00:30:24.991 "superblock": false, 00:30:24.991 "num_base_bdevs": 4, 00:30:24.991 "num_base_bdevs_discovered": 4, 00:30:24.991 "num_base_bdevs_operational": 4, 00:30:24.991 "process": { 00:30:24.991 "type": "rebuild", 00:30:24.991 "target": "spare", 00:30:24.991 "progress": { 00:30:24.991 "blocks": 14336, 00:30:24.991 "percent": 21 00:30:24.991 } 00:30:24.991 }, 00:30:24.991 "base_bdevs_list": [ 00:30:24.991 { 00:30:24.991 "name": "spare", 00:30:24.991 "uuid": "554a3f11-d38d-5188-abab-f6eae001a921", 00:30:24.991 "is_configured": true, 00:30:24.991 
"data_offset": 0, 00:30:24.991 "data_size": 65536 00:30:24.991 }, 00:30:24.991 { 00:30:24.991 "name": "BaseBdev2", 00:30:24.991 "uuid": "996cc497-3d91-571a-ba75-d49aeeaaf8a6", 00:30:24.991 "is_configured": true, 00:30:24.991 "data_offset": 0, 00:30:24.991 "data_size": 65536 00:30:24.991 }, 00:30:24.991 { 00:30:24.991 "name": "BaseBdev3", 00:30:24.991 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:24.991 "is_configured": true, 00:30:24.991 "data_offset": 0, 00:30:24.991 "data_size": 65536 00:30:24.991 }, 00:30:24.991 { 00:30:24.991 "name": "BaseBdev4", 00:30:24.991 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:24.991 "is_configured": true, 00:30:24.991 "data_offset": 0, 00:30:24.991 "data_size": 65536 00:30:24.991 } 00:30:24.991 ] 00:30:24.991 }' 00:30:24.991 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:24.991 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:24.991 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:24.991 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:24.991 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:25.249 [2024-07-13 11:41:59.801659] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:25.249 [2024-07-13 11:41:59.933543] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:25.249 [2024-07-13 11:41:59.949885] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:25.249 [2024-07-13 11:41:59.950067] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:25.249 [2024-07-13 11:41:59.950109] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:25.249 [2024-07-13 11:41:59.977407] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.249 11:41:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:30:25.509 11:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:25.509 "name": "raid_bdev1", 00:30:25.509 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:25.509 "strip_size_kb": 0, 00:30:25.509 "state": "online", 00:30:25.509 "raid_level": "raid1", 00:30:25.509 "superblock": false, 00:30:25.509 "num_base_bdevs": 4, 00:30:25.509 "num_base_bdevs_discovered": 3, 00:30:25.509 "num_base_bdevs_operational": 3, 00:30:25.509 "base_bdevs_list": [ 00:30:25.509 { 00:30:25.509 "name": null, 00:30:25.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:25.509 "is_configured": false, 00:30:25.509 "data_offset": 0, 00:30:25.509 "data_size": 65536 00:30:25.509 }, 00:30:25.509 { 00:30:25.509 "name": "BaseBdev2", 00:30:25.509 "uuid": "996cc497-3d91-571a-ba75-d49aeeaaf8a6", 00:30:25.509 "is_configured": true, 00:30:25.509 "data_offset": 0, 00:30:25.509 "data_size": 65536 00:30:25.509 }, 00:30:25.509 { 00:30:25.509 "name": "BaseBdev3", 00:30:25.509 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:25.509 "is_configured": true, 00:30:25.509 "data_offset": 0, 00:30:25.509 "data_size": 65536 00:30:25.509 }, 00:30:25.509 { 00:30:25.509 "name": "BaseBdev4", 00:30:25.509 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:25.509 "is_configured": true, 00:30:25.509 "data_offset": 0, 00:30:25.509 "data_size": 65536 00:30:25.509 } 00:30:25.509 ] 00:30:25.509 }' 00:30:25.509 11:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:25.509 11:42:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:26.443 11:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:26.443 11:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:26.443 11:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:26.443 11:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:26.443 11:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:26.443 11:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.443 11:42:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.701 11:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:26.701 "name": "raid_bdev1", 00:30:26.701 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:26.701 "strip_size_kb": 0, 00:30:26.701 "state": "online", 00:30:26.701 "raid_level": "raid1", 00:30:26.701 "superblock": false, 00:30:26.701 "num_base_bdevs": 4, 00:30:26.701 "num_base_bdevs_discovered": 3, 00:30:26.701 "num_base_bdevs_operational": 3, 00:30:26.701 "base_bdevs_list": [ 00:30:26.701 { 00:30:26.701 "name": null, 00:30:26.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.701 "is_configured": false, 00:30:26.701 "data_offset": 0, 00:30:26.701 "data_size": 65536 00:30:26.701 }, 00:30:26.701 { 00:30:26.701 "name": "BaseBdev2", 00:30:26.701 "uuid": "996cc497-3d91-571a-ba75-d49aeeaaf8a6", 00:30:26.701 "is_configured": true, 00:30:26.701 "data_offset": 0, 00:30:26.701 "data_size": 65536 00:30:26.701 }, 00:30:26.701 { 00:30:26.701 "name": "BaseBdev3", 00:30:26.701 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 
00:30:26.702 "is_configured": true, 00:30:26.702 "data_offset": 0, 00:30:26.702 "data_size": 65536 00:30:26.702 }, 00:30:26.702 { 00:30:26.702 "name": "BaseBdev4", 00:30:26.702 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:26.702 "is_configured": true, 00:30:26.702 "data_offset": 0, 00:30:26.702 "data_size": 65536 00:30:26.702 } 00:30:26.702 ] 00:30:26.702 }' 00:30:26.702 11:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:26.702 11:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:26.702 11:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:26.702 11:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:26.702 11:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:26.960 [2024-07-13 11:42:01.560238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:26.960 11:42:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:26.960 [2024-07-13 11:42:01.605123] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:26.960 [2024-07-13 11:42:01.607549] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:27.218 [2024-07-13 11:42:01.738767] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:27.218 [2024-07-13 11:42:01.739661] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:27.218 [2024-07-13 11:42:01.958983] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:27.218 [2024-07-13 11:42:01.959721] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:27.476 [2024-07-13 11:42:02.207965] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:27.476 [2024-07-13 11:42:02.209645] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:27.733 [2024-07-13 11:42:02.429991] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:27.991 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:27.991 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:27.991 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:27.991 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:27.991 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:27.991 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.991 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.249 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- 
# raid_bdev_info='{ 00:30:28.249 "name": "raid_bdev1", 00:30:28.249 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:28.249 "strip_size_kb": 0, 00:30:28.249 "state": "online", 00:30:28.249 "raid_level": "raid1", 00:30:28.249 "superblock": false, 00:30:28.249 "num_base_bdevs": 4, 00:30:28.249 "num_base_bdevs_discovered": 4, 00:30:28.249 "num_base_bdevs_operational": 4, 00:30:28.249 "process": { 00:30:28.249 "type": "rebuild", 00:30:28.249 "target": "spare", 00:30:28.249 "progress": { 00:30:28.249 "blocks": 16384, 00:30:28.249 "percent": 25 00:30:28.249 } 00:30:28.249 }, 00:30:28.249 "base_bdevs_list": [ 00:30:28.249 { 00:30:28.249 "name": "spare", 00:30:28.249 "uuid": "554a3f11-d38d-5188-abab-f6eae001a921", 00:30:28.249 "is_configured": true, 00:30:28.249 "data_offset": 0, 00:30:28.249 "data_size": 65536 00:30:28.249 }, 00:30:28.249 { 00:30:28.249 "name": "BaseBdev2", 00:30:28.249 "uuid": "996cc497-3d91-571a-ba75-d49aeeaaf8a6", 00:30:28.249 "is_configured": true, 00:30:28.249 "data_offset": 0, 00:30:28.249 "data_size": 65536 00:30:28.249 }, 00:30:28.249 { 00:30:28.249 "name": "BaseBdev3", 00:30:28.249 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:28.249 "is_configured": true, 00:30:28.249 "data_offset": 0, 00:30:28.249 "data_size": 65536 00:30:28.249 }, 00:30:28.249 { 00:30:28.249 "name": "BaseBdev4", 00:30:28.249 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:28.249 "is_configured": true, 00:30:28.249 "data_offset": 0, 00:30:28.249 "data_size": 65536 00:30:28.249 } 00:30:28.249 ] 00:30:28.249 }' 00:30:28.249 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:28.250 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:28.250 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:28.250 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:28.250 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:30:28.250 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:28.250 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:28.250 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:28.250 11:42:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:28.507 [2024-07-13 11:42:03.119591] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:28.507 [2024-07-13 11:42:03.139290] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:28.765 [2024-07-13 11:42:03.348749] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:30:28.765 [2024-07-13 11:42:03.348968] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:30:28.765 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:28.765 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:28.765 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:28.765 11:42:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:28.765 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:28.765 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:28.765 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:28.765 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.765 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.023 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:29.023 "name": "raid_bdev1", 00:30:29.023 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:29.023 "strip_size_kb": 0, 00:30:29.023 "state": "online", 00:30:29.023 "raid_level": "raid1", 00:30:29.023 "superblock": false, 00:30:29.023 "num_base_bdevs": 4, 00:30:29.023 "num_base_bdevs_discovered": 3, 00:30:29.023 "num_base_bdevs_operational": 3, 00:30:29.023 "process": { 00:30:29.023 "type": "rebuild", 00:30:29.023 "target": "spare", 00:30:29.023 "progress": { 00:30:29.023 "blocks": 28672, 00:30:29.023 "percent": 43 00:30:29.023 } 00:30:29.023 }, 00:30:29.023 "base_bdevs_list": [ 00:30:29.023 { 00:30:29.023 "name": "spare", 00:30:29.023 "uuid": "554a3f11-d38d-5188-abab-f6eae001a921", 00:30:29.023 "is_configured": true, 00:30:29.023 "data_offset": 0, 00:30:29.023 "data_size": 65536 00:30:29.023 }, 00:30:29.023 { 00:30:29.023 "name": null, 00:30:29.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.023 "is_configured": false, 00:30:29.023 "data_offset": 0, 00:30:29.023 "data_size": 65536 00:30:29.023 }, 00:30:29.023 { 00:30:29.023 "name": "BaseBdev3", 00:30:29.023 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:29.023 "is_configured": true, 00:30:29.023 "data_offset": 0, 00:30:29.023 "data_size": 65536 00:30:29.023 }, 00:30:29.023 { 00:30:29.023 "name": "BaseBdev4", 00:30:29.023 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:29.023 "is_configured": true, 00:30:29.023 "data_offset": 0, 00:30:29.023 "data_size": 65536 00:30:29.023 } 00:30:29.023 ] 00:30:29.023 }' 00:30:29.023 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:29.023 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:29.023 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=974 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.024 11:42:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.281 [2024-07-13 11:42:03.828323] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:30:29.281 11:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:29.281 "name": "raid_bdev1", 00:30:29.281 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:29.281 "strip_size_kb": 0, 00:30:29.281 "state": "online", 00:30:29.281 "raid_level": "raid1", 00:30:29.281 "superblock": false, 00:30:29.281 "num_base_bdevs": 4, 00:30:29.281 "num_base_bdevs_discovered": 3, 00:30:29.281 "num_base_bdevs_operational": 3, 00:30:29.281 "process": { 00:30:29.281 "type": "rebuild", 00:30:29.281 "target": "spare", 00:30:29.281 "progress": { 00:30:29.281 "blocks": 34816, 00:30:29.281 "percent": 53 00:30:29.281 } 00:30:29.281 }, 00:30:29.281 "base_bdevs_list": [ 00:30:29.281 { 00:30:29.281 "name": "spare", 00:30:29.281 "uuid": "554a3f11-d38d-5188-abab-f6eae001a921", 00:30:29.281 "is_configured": true, 00:30:29.281 "data_offset": 0, 00:30:29.281 "data_size": 65536 00:30:29.281 }, 00:30:29.281 { 00:30:29.281 "name": null, 00:30:29.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.281 "is_configured": false, 00:30:29.281 "data_offset": 0, 00:30:29.281 "data_size": 65536 00:30:29.281 }, 00:30:29.281 { 00:30:29.281 "name": "BaseBdev3", 00:30:29.281 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:29.281 "is_configured": true, 00:30:29.281 "data_offset": 0, 00:30:29.281 "data_size": 65536 00:30:29.281 }, 00:30:29.281 { 00:30:29.281 "name": "BaseBdev4", 00:30:29.281 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:29.281 "is_configured": true, 00:30:29.281 "data_offset": 0, 00:30:29.281 "data_size": 65536 00:30:29.281 } 00:30:29.281 ] 00:30:29.281 }' 00:30:29.281 11:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:29.539 11:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:29.539 11:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:29.539 11:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:29.539 11:42:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:29.539 [2024-07-13 11:42:04.151827] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:30:30.105 [2024-07-13 11:42:04.590608] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:30:30.105 [2024-07-13 11:42:04.822386] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:30:30.363 [2024-07-13 11:42:04.946088] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:30:30.622 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:30.622 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
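The verify_raid_bdev_process helper expanded throughout this trace boils down to a single bdev_raid_get_bdevs RPC plus two jq filters over the selected raid bdev. A minimal standalone sketch of that check, reusing the rpc.py path and /var/tmp/spdk-raid.sock socket shown in this run; the check_rebuild wrapper name and its arguments are illustrative, not part of bdev_raid.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    check_rebuild() {
        # Pull the JSON for one raid bdev out of the full bdev_raid_get_bdevs dump.
        local name=$1 expected_type=$2 expected_target=$3 info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
               jq -r --arg n "$name" '.[] | select(.name == $n)')
        # Same filters the trace applies: fall back to "none" when no process is active.
        [[ $(jq -r '.process.type   // "none"' <<< "$info") == "$expected_type" ]] &&
        [[ $(jq -r '.process.target // "none"' <<< "$info") == "$expected_target" ]]
    }
    check_rebuild raid_bdev1 rebuild spare   # expect an active rebuild onto "spare"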
00:30:30.622 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:30.622 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:30.622 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:30.622 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:30.622 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.622 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.881 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:30.881 "name": "raid_bdev1", 00:30:30.881 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:30.881 "strip_size_kb": 0, 00:30:30.881 "state": "online", 00:30:30.881 "raid_level": "raid1", 00:30:30.881 "superblock": false, 00:30:30.881 "num_base_bdevs": 4, 00:30:30.881 "num_base_bdevs_discovered": 3, 00:30:30.881 "num_base_bdevs_operational": 3, 00:30:30.881 "process": { 00:30:30.881 "type": "rebuild", 00:30:30.881 "target": "spare", 00:30:30.881 "progress": { 00:30:30.881 "blocks": 59392, 00:30:30.881 "percent": 90 00:30:30.881 } 00:30:30.881 }, 00:30:30.881 "base_bdevs_list": [ 00:30:30.881 { 00:30:30.881 "name": "spare", 00:30:30.881 "uuid": "554a3f11-d38d-5188-abab-f6eae001a921", 00:30:30.881 "is_configured": true, 00:30:30.881 "data_offset": 0, 00:30:30.881 "data_size": 65536 00:30:30.881 }, 00:30:30.881 { 00:30:30.881 "name": null, 00:30:30.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.881 "is_configured": false, 00:30:30.881 "data_offset": 0, 00:30:30.881 "data_size": 65536 00:30:30.881 }, 00:30:30.881 { 00:30:30.881 "name": "BaseBdev3", 00:30:30.881 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:30.881 "is_configured": true, 00:30:30.881 "data_offset": 0, 00:30:30.881 "data_size": 65536 00:30:30.881 }, 00:30:30.881 { 00:30:30.881 "name": "BaseBdev4", 00:30:30.881 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:30.881 "is_configured": true, 00:30:30.881 "data_offset": 0, 00:30:30.881 "data_size": 65536 00:30:30.881 } 00:30:30.881 ] 00:30:30.881 }' 00:30:30.881 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:30.881 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:30.881 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:30.881 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:30.881 11:42:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:30.881 [2024-07-13 11:42:05.610804] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:31.139 [2024-07-13 11:42:05.710792] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:31.139 [2024-07-13 11:42:05.712540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:32.073 11:42:06 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.073 "name": "raid_bdev1", 00:30:32.073 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:32.073 "strip_size_kb": 0, 00:30:32.073 "state": "online", 00:30:32.073 "raid_level": "raid1", 00:30:32.073 "superblock": false, 00:30:32.073 "num_base_bdevs": 4, 00:30:32.073 "num_base_bdevs_discovered": 3, 00:30:32.073 "num_base_bdevs_operational": 3, 00:30:32.073 "base_bdevs_list": [ 00:30:32.073 { 00:30:32.073 "name": "spare", 00:30:32.073 "uuid": "554a3f11-d38d-5188-abab-f6eae001a921", 00:30:32.073 "is_configured": true, 00:30:32.073 "data_offset": 0, 00:30:32.073 "data_size": 65536 00:30:32.073 }, 00:30:32.073 { 00:30:32.073 "name": null, 00:30:32.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.073 "is_configured": false, 00:30:32.073 "data_offset": 0, 00:30:32.073 "data_size": 65536 00:30:32.073 }, 00:30:32.073 { 00:30:32.073 "name": "BaseBdev3", 00:30:32.073 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:32.073 "is_configured": true, 00:30:32.073 "data_offset": 0, 00:30:32.073 "data_size": 65536 00:30:32.073 }, 00:30:32.073 { 00:30:32.073 "name": "BaseBdev4", 00:30:32.073 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:32.073 "is_configured": true, 00:30:32.073 "data_offset": 0, 00:30:32.073 "data_size": 65536 00:30:32.073 } 00:30:32.073 ] 00:30:32.073 }' 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:32.073 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:32.331 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:32.331 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:30:32.331 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:32.331 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:32.331 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:32.331 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:32.331 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:32.331 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.331 11:42:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.589 11:42:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:32.589 "name": "raid_bdev1", 00:30:32.589 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:32.589 "strip_size_kb": 0, 00:30:32.589 "state": "online", 00:30:32.589 "raid_level": "raid1", 00:30:32.589 "superblock": false, 00:30:32.589 "num_base_bdevs": 4, 00:30:32.589 "num_base_bdevs_discovered": 3, 00:30:32.589 "num_base_bdevs_operational": 3, 00:30:32.589 "base_bdevs_list": [ 00:30:32.589 { 00:30:32.589 "name": "spare", 00:30:32.589 "uuid": "554a3f11-d38d-5188-abab-f6eae001a921", 00:30:32.589 "is_configured": true, 00:30:32.589 "data_offset": 0, 00:30:32.589 "data_size": 65536 00:30:32.589 }, 00:30:32.589 { 00:30:32.589 "name": null, 00:30:32.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.589 "is_configured": false, 00:30:32.589 "data_offset": 0, 00:30:32.589 "data_size": 65536 00:30:32.589 }, 00:30:32.589 { 00:30:32.589 "name": "BaseBdev3", 00:30:32.589 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:32.589 "is_configured": true, 00:30:32.589 "data_offset": 0, 00:30:32.589 "data_size": 65536 00:30:32.589 }, 00:30:32.589 { 00:30:32.589 "name": "BaseBdev4", 00:30:32.589 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:32.589 "is_configured": true, 00:30:32.589 "data_offset": 0, 00:30:32.589 "data_size": 65536 00:30:32.589 } 00:30:32.589 ] 00:30:32.589 }' 00:30:32.589 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:32.589 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:32.589 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:32.589 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:32.589 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:32.589 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:32.589 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:32.589 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:32.590 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:32.590 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:32.590 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:32.590 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:32.590 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:32.590 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:32.590 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:32.590 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:32.848 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:32.848 "name": "raid_bdev1", 00:30:32.848 "uuid": "1d7b46db-a232-4770-9c70-3d2b84b643bf", 00:30:32.848 "strip_size_kb": 0, 00:30:32.848 "state": "online", 00:30:32.848 "raid_level": "raid1", 00:30:32.848 
"superblock": false, 00:30:32.848 "num_base_bdevs": 4, 00:30:32.848 "num_base_bdevs_discovered": 3, 00:30:32.848 "num_base_bdevs_operational": 3, 00:30:32.848 "base_bdevs_list": [ 00:30:32.848 { 00:30:32.848 "name": "spare", 00:30:32.848 "uuid": "554a3f11-d38d-5188-abab-f6eae001a921", 00:30:32.848 "is_configured": true, 00:30:32.848 "data_offset": 0, 00:30:32.848 "data_size": 65536 00:30:32.848 }, 00:30:32.848 { 00:30:32.848 "name": null, 00:30:32.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:32.848 "is_configured": false, 00:30:32.848 "data_offset": 0, 00:30:32.848 "data_size": 65536 00:30:32.848 }, 00:30:32.848 { 00:30:32.848 "name": "BaseBdev3", 00:30:32.848 "uuid": "cf0c01a5-12bb-570d-b2bb-56496eabb0fd", 00:30:32.848 "is_configured": true, 00:30:32.848 "data_offset": 0, 00:30:32.848 "data_size": 65536 00:30:32.848 }, 00:30:32.848 { 00:30:32.848 "name": "BaseBdev4", 00:30:32.848 "uuid": "7e44a545-dbee-56bf-8e11-11fbf77d567a", 00:30:32.848 "is_configured": true, 00:30:32.848 "data_offset": 0, 00:30:32.848 "data_size": 65536 00:30:32.848 } 00:30:32.848 ] 00:30:32.848 }' 00:30:32.848 11:42:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:32.848 11:42:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.416 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:33.673 [2024-07-13 11:42:08.395219] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:33.673 [2024-07-13 11:42:08.395531] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:33.931 00:30:33.931 Latency(us) 00:30:33.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:33.931 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:33.931 raid_bdev1 : 11.45 111.30 333.89 0.00 0.00 13127.11 309.06 113913.48 00:30:33.931 =================================================================================================================== 00:30:33.931 Total : 111.30 333.89 0.00 0.00 13127.11 309.06 113913.48 00:30:33.931 [2024-07-13 11:42:08.446197] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:33.931 [2024-07-13 11:42:08.446350] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:33.931 0 00:30:33.931 [2024-07-13 11:42:08.446489] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:33.931 [2024-07-13 11:42:08.446504] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:30:33.931 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:33.931 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:34.189 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:34.446 /dev/nbd0 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:34.446 1+0 records in 00:30:34.446 1+0 records out 00:30:34.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435745 s, 9.4 MB/s 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:34.446 11:42:08 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:34.446 11:42:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:34.446 /dev/nbd1 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:34.704 1+0 records in 00:30:34.704 1+0 records out 00:30:34.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428449 s, 9.6 MB/s 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:34.704 11:42:09 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:34.704 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:34.961 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:35.219 /dev/nbd1 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd1 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:35.219 1+0 records in 00:30:35.219 1+0 records out 00:30:35.219 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535048 s, 7.7 MB/s 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:35.219 11:42:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.476 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:35.476 
11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:35.733 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 149430 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 149430 ']' 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 149430 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149430 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # 
'[' reactor_0 = sudo ']' 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149430' 00:30:35.991 killing process with pid 149430 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 149430 00:30:35.991 Received shutdown signal, test time was about 13.727551 seconds 00:30:35.991 00:30:35.991 Latency(us) 00:30:35.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.991 =================================================================================================================== 00:30:35.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:35.991 11:42:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 149430 00:30:35.991 [2024-07-13 11:42:10.711908] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:36.558 [2024-07-13 11:42:11.003945] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:37.494 ************************************ 00:30:37.494 END TEST raid_rebuild_test_io 00:30:37.494 ************************************ 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:30:37.494 00:30:37.494 real 0m20.127s 00:30:37.494 user 0m31.599s 00:30:37.494 sys 0m2.199s 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:37.494 11:42:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:37.494 11:42:12 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:30:37.494 11:42:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:37.494 11:42:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:37.494 11:42:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:37.494 ************************************ 00:30:37.494 START TEST raid_rebuild_test_sb_io 00:30:37.494 ************************************ 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:37.494 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.494 11:42:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:37.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=149990 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 149990 /var/tmp/spdk-raid.sock 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 149990 ']' 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:37.495 11:42:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:37.495 [2024-07-13 11:42:12.192955] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
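Because bdevperf here only provides the I/O engine, the harness starts it in the background on a private RPC socket and must wait for that socket to answer before issuing any bdev_raid_* configuration calls. A hedged sketch of that launch-and-wait step, reusing the binary path, flags, and socket visible in the trace above; the polling loop is a simplified stand-in for the waitforlisten helper from autotest_common.sh:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Same command line as the traced invocation; all bdev setup happens over RPC afterwards.
    "$bdevperf" -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Poll until the RPC server responds (simplified stand-in for waitforlisten).
    until "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid" 2>/dev/null || { echo "bdevperf died during startup" >&2; exit 1; }
        sleep 0.5
    done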
00:30:37.495 [2024-07-13 11:42:12.193378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149990 ] 00:30:37.495 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:37.495 Zero copy mechanism will not be used. 00:30:37.754 [2024-07-13 11:42:12.348370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.012 [2024-07-13 11:42:12.532849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.012 [2024-07-13 11:42:12.717573] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:38.614 11:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:38.614 11:42:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:30:38.614 11:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:38.614 11:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:38.898 BaseBdev1_malloc 00:30:38.898 11:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:38.898 [2024-07-13 11:42:13.542210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:38.898 [2024-07-13 11:42:13.542548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:38.898 [2024-07-13 11:42:13.542696] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:30:38.898 [2024-07-13 11:42:13.542815] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:38.898 [2024-07-13 11:42:13.545073] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:38.898 [2024-07-13 11:42:13.545238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:38.898 BaseBdev1 00:30:38.898 11:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:38.898 11:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:39.156 BaseBdev2_malloc 00:30:39.156 11:42:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:39.413 [2024-07-13 11:42:14.011571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:39.413 [2024-07-13 11:42:14.011831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:39.413 [2024-07-13 11:42:14.011901] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:30:39.413 [2024-07-13 11:42:14.012180] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:39.413 [2024-07-13 11:42:14.014552] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:39.413 [2024-07-13 11:42:14.014713] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:39.413 BaseBdev2 00:30:39.414 11:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:39.414 11:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:39.671 BaseBdev3_malloc 00:30:39.671 11:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:39.929 [2024-07-13 11:42:14.484298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:39.929 [2024-07-13 11:42:14.484518] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:39.929 [2024-07-13 11:42:14.484583] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:30:39.929 [2024-07-13 11:42:14.484848] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:39.929 [2024-07-13 11:42:14.487090] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:39.929 [2024-07-13 11:42:14.487264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:39.929 BaseBdev3 00:30:39.929 11:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:39.929 11:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:40.187 BaseBdev4_malloc 00:30:40.187 11:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:40.444 [2024-07-13 11:42:14.945800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:40.444 [2024-07-13 11:42:14.946042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:40.444 [2024-07-13 11:42:14.946124] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:40.444 [2024-07-13 11:42:14.946414] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:40.444 [2024-07-13 11:42:14.948637] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:40.444 [2024-07-13 11:42:14.948805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:40.444 BaseBdev4 00:30:40.444 11:42:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:40.444 spare_malloc 00:30:40.444 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:40.702 spare_delay 00:30:40.702 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:40.960 [2024-07-13 11:42:15.630503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:30:40.960 [2024-07-13 11:42:15.630723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:40.960 [2024-07-13 11:42:15.630786] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:30:40.960 [2024-07-13 11:42:15.631052] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:40.960 [2024-07-13 11:42:15.633311] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:40.960 [2024-07-13 11:42:15.633480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:40.960 spare 00:30:40.960 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:41.218 [2024-07-13 11:42:15.874606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:41.218 [2024-07-13 11:42:15.876664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:41.218 [2024-07-13 11:42:15.876865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:41.218 [2024-07-13 11:42:15.876959] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:41.218 [2024-07-13 11:42:15.877277] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:30:41.218 [2024-07-13 11:42:15.877320] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:41.218 [2024-07-13 11:42:15.877541] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:30:41.218 [2024-07-13 11:42:15.878017] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:30:41.218 [2024-07-13 11:42:15.878143] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:30:41.218 [2024-07-13 11:42:15.878353] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.218 11:42:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:30:41.477 11:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:41.477 "name": "raid_bdev1", 00:30:41.477 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:41.477 "strip_size_kb": 0, 00:30:41.477 "state": "online", 00:30:41.477 "raid_level": "raid1", 00:30:41.477 "superblock": true, 00:30:41.477 "num_base_bdevs": 4, 00:30:41.477 "num_base_bdevs_discovered": 4, 00:30:41.477 "num_base_bdevs_operational": 4, 00:30:41.477 "base_bdevs_list": [ 00:30:41.477 { 00:30:41.477 "name": "BaseBdev1", 00:30:41.477 "uuid": "3c8d6ef2-fac4-524f-bde2-d01eb90c1bc1", 00:30:41.477 "is_configured": true, 00:30:41.477 "data_offset": 2048, 00:30:41.477 "data_size": 63488 00:30:41.477 }, 00:30:41.477 { 00:30:41.477 "name": "BaseBdev2", 00:30:41.477 "uuid": "84a45b06-4491-57ff-89b3-4ab975149211", 00:30:41.477 "is_configured": true, 00:30:41.477 "data_offset": 2048, 00:30:41.477 "data_size": 63488 00:30:41.477 }, 00:30:41.477 { 00:30:41.477 "name": "BaseBdev3", 00:30:41.477 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:41.477 "is_configured": true, 00:30:41.477 "data_offset": 2048, 00:30:41.477 "data_size": 63488 00:30:41.477 }, 00:30:41.477 { 00:30:41.477 "name": "BaseBdev4", 00:30:41.477 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:41.477 "is_configured": true, 00:30:41.477 "data_offset": 2048, 00:30:41.477 "data_size": 63488 00:30:41.477 } 00:30:41.477 ] 00:30:41.477 }' 00:30:41.477 11:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:41.477 11:42:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:42.044 11:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:42.044 11:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:42.303 [2024-07-13 11:42:16.919019] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:42.303 11:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:30:42.303 11:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.303 11:42:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:42.562 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:30:42.562 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:30:42.562 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:42.562 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:42.562 [2024-07-13 11:42:17.238434] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:42.562 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:42.562 Zero copy mechanism will not be used. 00:30:42.562 Running I/O for 60 seconds... 
00:30:42.562 [2024-07-13 11:42:17.306748] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:42.820 [2024-07-13 11:42:17.319638] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.820 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.079 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:43.079 "name": "raid_bdev1", 00:30:43.079 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:43.079 "strip_size_kb": 0, 00:30:43.079 "state": "online", 00:30:43.079 "raid_level": "raid1", 00:30:43.079 "superblock": true, 00:30:43.079 "num_base_bdevs": 4, 00:30:43.079 "num_base_bdevs_discovered": 3, 00:30:43.079 "num_base_bdevs_operational": 3, 00:30:43.079 "base_bdevs_list": [ 00:30:43.079 { 00:30:43.079 "name": null, 00:30:43.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.079 "is_configured": false, 00:30:43.079 "data_offset": 2048, 00:30:43.079 "data_size": 63488 00:30:43.079 }, 00:30:43.079 { 00:30:43.079 "name": "BaseBdev2", 00:30:43.079 "uuid": "84a45b06-4491-57ff-89b3-4ab975149211", 00:30:43.079 "is_configured": true, 00:30:43.079 "data_offset": 2048, 00:30:43.079 "data_size": 63488 00:30:43.079 }, 00:30:43.079 { 00:30:43.079 "name": "BaseBdev3", 00:30:43.079 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:43.079 "is_configured": true, 00:30:43.079 "data_offset": 2048, 00:30:43.079 "data_size": 63488 00:30:43.079 }, 00:30:43.079 { 00:30:43.079 "name": "BaseBdev4", 00:30:43.079 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:43.079 "is_configured": true, 00:30:43.079 "data_offset": 2048, 00:30:43.079 "data_size": 63488 00:30:43.079 } 00:30:43.079 ] 00:30:43.079 }' 00:30:43.079 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:43.079 11:42:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:43.646 11:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:43.904 [2024-07-13 
11:42:18.559262] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:43.904 11:42:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:43.904 [2024-07-13 11:42:18.603732] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:30:43.904 [2024-07-13 11:42:18.606003] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:44.162 [2024-07-13 11:42:18.723777] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:44.162 [2024-07-13 11:42:18.724437] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:44.421 [2024-07-13 11:42:18.936770] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:44.421 [2024-07-13 11:42:18.937410] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:44.679 [2024-07-13 11:42:19.174026] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:44.937 [2024-07-13 11:42:19.519404] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:44.937 [2024-07-13 11:42:19.519988] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:44.937 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:44.937 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:44.937 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:44.937 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:44.937 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:44.937 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:44.937 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.937 [2024-07-13 11:42:19.652315] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:45.195 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.195 "name": "raid_bdev1", 00:30:45.195 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:45.195 "strip_size_kb": 0, 00:30:45.195 "state": "online", 00:30:45.195 "raid_level": "raid1", 00:30:45.195 "superblock": true, 00:30:45.195 "num_base_bdevs": 4, 00:30:45.195 "num_base_bdevs_discovered": 4, 00:30:45.195 "num_base_bdevs_operational": 4, 00:30:45.195 "process": { 00:30:45.195 "type": "rebuild", 00:30:45.195 "target": "spare", 00:30:45.195 "progress": { 00:30:45.195 "blocks": 16384, 00:30:45.195 "percent": 25 00:30:45.195 } 00:30:45.195 }, 00:30:45.195 "base_bdevs_list": [ 00:30:45.195 { 00:30:45.195 "name": "spare", 00:30:45.195 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:45.195 "is_configured": true, 00:30:45.195 "data_offset": 2048, 00:30:45.195 "data_size": 63488 00:30:45.195 }, 
00:30:45.195 { 00:30:45.195 "name": "BaseBdev2", 00:30:45.195 "uuid": "84a45b06-4491-57ff-89b3-4ab975149211", 00:30:45.195 "is_configured": true, 00:30:45.195 "data_offset": 2048, 00:30:45.195 "data_size": 63488 00:30:45.195 }, 00:30:45.195 { 00:30:45.195 "name": "BaseBdev3", 00:30:45.195 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:45.195 "is_configured": true, 00:30:45.195 "data_offset": 2048, 00:30:45.195 "data_size": 63488 00:30:45.195 }, 00:30:45.195 { 00:30:45.195 "name": "BaseBdev4", 00:30:45.195 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:45.195 "is_configured": true, 00:30:45.195 "data_offset": 2048, 00:30:45.195 "data_size": 63488 00:30:45.195 } 00:30:45.195 ] 00:30:45.195 }' 00:30:45.195 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:45.195 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:45.195 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:45.454 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:45.454 11:42:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:45.454 [2024-07-13 11:42:20.176581] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:45.712 [2024-07-13 11:42:20.252523] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:45.712 [2024-07-13 11:42:20.263562] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:45.712 [2024-07-13 11:42:20.263706] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:45.712 [2024-07-13 11:42:20.263745] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:45.712 [2024-07-13 11:42:20.296696] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:30:45.712 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:45.712 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:45.712 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:45.712 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:45.712 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:45.712 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:45.712 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:45.712 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:45.712 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:45.713 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:45.713 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.713 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:30:45.971 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:45.971 "name": "raid_bdev1", 00:30:45.971 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:45.971 "strip_size_kb": 0, 00:30:45.971 "state": "online", 00:30:45.971 "raid_level": "raid1", 00:30:45.971 "superblock": true, 00:30:45.971 "num_base_bdevs": 4, 00:30:45.971 "num_base_bdevs_discovered": 3, 00:30:45.971 "num_base_bdevs_operational": 3, 00:30:45.971 "base_bdevs_list": [ 00:30:45.971 { 00:30:45.971 "name": null, 00:30:45.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.971 "is_configured": false, 00:30:45.971 "data_offset": 2048, 00:30:45.971 "data_size": 63488 00:30:45.971 }, 00:30:45.971 { 00:30:45.971 "name": "BaseBdev2", 00:30:45.971 "uuid": "84a45b06-4491-57ff-89b3-4ab975149211", 00:30:45.971 "is_configured": true, 00:30:45.971 "data_offset": 2048, 00:30:45.971 "data_size": 63488 00:30:45.971 }, 00:30:45.971 { 00:30:45.971 "name": "BaseBdev3", 00:30:45.971 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:45.971 "is_configured": true, 00:30:45.971 "data_offset": 2048, 00:30:45.971 "data_size": 63488 00:30:45.971 }, 00:30:45.971 { 00:30:45.971 "name": "BaseBdev4", 00:30:45.971 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:45.971 "is_configured": true, 00:30:45.971 "data_offset": 2048, 00:30:45.971 "data_size": 63488 00:30:45.971 } 00:30:45.971 ] 00:30:45.971 }' 00:30:45.971 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:45.971 11:42:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:46.538 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:46.538 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:46.538 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:46.538 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:46.538 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:46.538 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.538 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.797 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:46.797 "name": "raid_bdev1", 00:30:46.797 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:46.797 "strip_size_kb": 0, 00:30:46.797 "state": "online", 00:30:46.797 "raid_level": "raid1", 00:30:46.797 "superblock": true, 00:30:46.797 "num_base_bdevs": 4, 00:30:46.797 "num_base_bdevs_discovered": 3, 00:30:46.797 "num_base_bdevs_operational": 3, 00:30:46.797 "base_bdevs_list": [ 00:30:46.797 { 00:30:46.797 "name": null, 00:30:46.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.797 "is_configured": false, 00:30:46.797 "data_offset": 2048, 00:30:46.797 "data_size": 63488 00:30:46.797 }, 00:30:46.797 { 00:30:46.797 "name": "BaseBdev2", 00:30:46.797 "uuid": "84a45b06-4491-57ff-89b3-4ab975149211", 00:30:46.797 "is_configured": true, 00:30:46.797 "data_offset": 2048, 00:30:46.797 "data_size": 63488 00:30:46.797 }, 00:30:46.797 { 00:30:46.797 "name": "BaseBdev3", 00:30:46.797 
"uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:46.797 "is_configured": true, 00:30:46.797 "data_offset": 2048, 00:30:46.797 "data_size": 63488 00:30:46.797 }, 00:30:46.797 { 00:30:46.797 "name": "BaseBdev4", 00:30:46.797 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:46.797 "is_configured": true, 00:30:46.797 "data_offset": 2048, 00:30:46.797 "data_size": 63488 00:30:46.797 } 00:30:46.797 ] 00:30:46.797 }' 00:30:46.797 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:47.056 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:47.056 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:47.056 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:47.056 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:47.315 [2024-07-13 11:42:21.820411] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:47.315 11:42:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:47.315 [2024-07-13 11:42:21.884221] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:47.315 [2024-07-13 11:42:21.886515] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:47.315 [2024-07-13 11:42:21.989146] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:47.315 [2024-07-13 11:42:21.989811] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:47.573 [2024-07-13 11:42:22.119199] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:47.573 [2024-07-13 11:42:22.119962] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:48.156 [2024-07-13 11:42:22.619079] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:48.156 [2024-07-13 11:42:22.620125] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:48.156 11:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:48.156 11:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:48.156 11:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:48.156 11:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:48.157 11:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:48.157 11:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.157 11:42:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.415 [2024-07-13 11:42:23.041591] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 
offset_end: 18432 00:30:48.415 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.415 "name": "raid_bdev1", 00:30:48.415 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:48.415 "strip_size_kb": 0, 00:30:48.415 "state": "online", 00:30:48.415 "raid_level": "raid1", 00:30:48.415 "superblock": true, 00:30:48.415 "num_base_bdevs": 4, 00:30:48.415 "num_base_bdevs_discovered": 4, 00:30:48.415 "num_base_bdevs_operational": 4, 00:30:48.415 "process": { 00:30:48.415 "type": "rebuild", 00:30:48.415 "target": "spare", 00:30:48.415 "progress": { 00:30:48.415 "blocks": 16384, 00:30:48.415 "percent": 25 00:30:48.415 } 00:30:48.415 }, 00:30:48.415 "base_bdevs_list": [ 00:30:48.415 { 00:30:48.415 "name": "spare", 00:30:48.415 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:48.415 "is_configured": true, 00:30:48.415 "data_offset": 2048, 00:30:48.415 "data_size": 63488 00:30:48.415 }, 00:30:48.415 { 00:30:48.415 "name": "BaseBdev2", 00:30:48.415 "uuid": "84a45b06-4491-57ff-89b3-4ab975149211", 00:30:48.415 "is_configured": true, 00:30:48.415 "data_offset": 2048, 00:30:48.415 "data_size": 63488 00:30:48.415 }, 00:30:48.415 { 00:30:48.415 "name": "BaseBdev3", 00:30:48.415 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:48.415 "is_configured": true, 00:30:48.415 "data_offset": 2048, 00:30:48.415 "data_size": 63488 00:30:48.415 }, 00:30:48.415 { 00:30:48.415 "name": "BaseBdev4", 00:30:48.415 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:48.415 "is_configured": true, 00:30:48.415 "data_offset": 2048, 00:30:48.415 "data_size": 63488 00:30:48.415 } 00:30:48.415 ] 00:30:48.415 }' 00:30:48.415 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:48.415 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:48.415 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:48.673 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:48.673 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:48.673 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:48.673 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:48.673 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:48.673 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:48.673 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:48.673 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:48.930 [2024-07-13 11:42:23.432338] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:48.930 [2024-07-13 11:42:23.507095] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:48.930 [2024-07-13 11:42:23.658515] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006080 00:30:48.930 [2024-07-13 11:42:23.658659] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:30:48.931 [2024-07-13 11:42:23.660196] 
bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:48.931 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:48.931 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:48.931 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:48.931 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:48.931 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:48.931 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:48.931 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:48.931 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.931 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.188 [2024-07-13 11:42:23.891794] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:49.447 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:49.447 "name": "raid_bdev1", 00:30:49.447 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:49.447 "strip_size_kb": 0, 00:30:49.447 "state": "online", 00:30:49.447 "raid_level": "raid1", 00:30:49.447 "superblock": true, 00:30:49.447 "num_base_bdevs": 4, 00:30:49.447 "num_base_bdevs_discovered": 3, 00:30:49.447 "num_base_bdevs_operational": 3, 00:30:49.447 "process": { 00:30:49.447 "type": "rebuild", 00:30:49.447 "target": "spare", 00:30:49.447 "progress": { 00:30:49.447 "blocks": 26624, 00:30:49.447 "percent": 41 00:30:49.447 } 00:30:49.447 }, 00:30:49.447 "base_bdevs_list": [ 00:30:49.447 { 00:30:49.447 "name": "spare", 00:30:49.447 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:49.447 "is_configured": true, 00:30:49.447 "data_offset": 2048, 00:30:49.447 "data_size": 63488 00:30:49.447 }, 00:30:49.447 { 00:30:49.447 "name": null, 00:30:49.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.447 "is_configured": false, 00:30:49.447 "data_offset": 2048, 00:30:49.447 "data_size": 63488 00:30:49.447 }, 00:30:49.447 { 00:30:49.447 "name": "BaseBdev3", 00:30:49.447 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:49.447 "is_configured": true, 00:30:49.447 "data_offset": 2048, 00:30:49.447 "data_size": 63488 00:30:49.447 }, 00:30:49.447 { 00:30:49.447 "name": "BaseBdev4", 00:30:49.447 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:49.447 "is_configured": true, 00:30:49.447 "data_offset": 2048, 00:30:49.447 "data_size": 63488 00:30:49.447 } 00:30:49.447 ] 00:30:49.447 }' 00:30:49.447 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:49.447 11:42:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@705 -- # local timeout=995 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:49.447 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.706 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:49.706 "name": "raid_bdev1", 00:30:49.706 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:49.706 "strip_size_kb": 0, 00:30:49.706 "state": "online", 00:30:49.706 "raid_level": "raid1", 00:30:49.706 "superblock": true, 00:30:49.706 "num_base_bdevs": 4, 00:30:49.706 "num_base_bdevs_discovered": 3, 00:30:49.706 "num_base_bdevs_operational": 3, 00:30:49.706 "process": { 00:30:49.706 "type": "rebuild", 00:30:49.706 "target": "spare", 00:30:49.706 "progress": { 00:30:49.706 "blocks": 30720, 00:30:49.706 "percent": 48 00:30:49.706 } 00:30:49.706 }, 00:30:49.706 "base_bdevs_list": [ 00:30:49.706 { 00:30:49.706 "name": "spare", 00:30:49.706 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:49.706 "is_configured": true, 00:30:49.706 "data_offset": 2048, 00:30:49.706 "data_size": 63488 00:30:49.706 }, 00:30:49.706 { 00:30:49.706 "name": null, 00:30:49.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.706 "is_configured": false, 00:30:49.706 "data_offset": 2048, 00:30:49.706 "data_size": 63488 00:30:49.706 }, 00:30:49.706 { 00:30:49.706 "name": "BaseBdev3", 00:30:49.706 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:49.706 "is_configured": true, 00:30:49.706 "data_offset": 2048, 00:30:49.706 "data_size": 63488 00:30:49.706 }, 00:30:49.706 { 00:30:49.706 "name": "BaseBdev4", 00:30:49.706 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:49.706 "is_configured": true, 00:30:49.706 "data_offset": 2048, 00:30:49.706 "data_size": 63488 00:30:49.706 } 00:30:49.706 ] 00:30:49.706 }' 00:30:49.706 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:49.706 [2024-07-13 11:42:24.324393] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:30:49.706 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:49.706 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:49.706 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:49.706 11:42:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:49.964 [2024-07-13 11:42:24.644741] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 
offset_end: 43008 00:30:50.530 [2024-07-13 11:42:25.178252] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:30:50.789 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:50.789 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:50.789 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:50.789 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:50.789 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:50.789 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:50.789 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.789 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:51.047 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:51.047 "name": "raid_bdev1", 00:30:51.047 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:51.047 "strip_size_kb": 0, 00:30:51.047 "state": "online", 00:30:51.047 "raid_level": "raid1", 00:30:51.047 "superblock": true, 00:30:51.047 "num_base_bdevs": 4, 00:30:51.047 "num_base_bdevs_discovered": 3, 00:30:51.047 "num_base_bdevs_operational": 3, 00:30:51.047 "process": { 00:30:51.047 "type": "rebuild", 00:30:51.047 "target": "spare", 00:30:51.047 "progress": { 00:30:51.047 "blocks": 51200, 00:30:51.047 "percent": 80 00:30:51.047 } 00:30:51.047 }, 00:30:51.047 "base_bdevs_list": [ 00:30:51.047 { 00:30:51.047 "name": "spare", 00:30:51.047 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:51.047 "is_configured": true, 00:30:51.047 "data_offset": 2048, 00:30:51.047 "data_size": 63488 00:30:51.047 }, 00:30:51.047 { 00:30:51.047 "name": null, 00:30:51.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:51.047 "is_configured": false, 00:30:51.047 "data_offset": 2048, 00:30:51.047 "data_size": 63488 00:30:51.047 }, 00:30:51.047 { 00:30:51.047 "name": "BaseBdev3", 00:30:51.047 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:51.047 "is_configured": true, 00:30:51.047 "data_offset": 2048, 00:30:51.047 "data_size": 63488 00:30:51.047 }, 00:30:51.047 { 00:30:51.047 "name": "BaseBdev4", 00:30:51.047 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:51.047 "is_configured": true, 00:30:51.047 "data_offset": 2048, 00:30:51.047 "data_size": 63488 00:30:51.047 } 00:30:51.047 ] 00:30:51.047 }' 00:30:51.047 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:51.047 [2024-07-13 11:42:25.635857] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:30:51.047 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:51.047 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:51.047 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:51.047 11:42:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:51.306 
[2024-07-13 11:42:25.965661] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:51.564 [2024-07-13 11:42:26.199767] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:51.564 [2024-07-13 11:42:26.305507] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:51.564 [2024-07-13 11:42:26.307956] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:52.131 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:52.131 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:52.131 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:52.131 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:52.131 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:52.131 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:52.131 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.131 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.390 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.390 "name": "raid_bdev1", 00:30:52.390 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:52.390 "strip_size_kb": 0, 00:30:52.390 "state": "online", 00:30:52.390 "raid_level": "raid1", 00:30:52.390 "superblock": true, 00:30:52.390 "num_base_bdevs": 4, 00:30:52.390 "num_base_bdevs_discovered": 3, 00:30:52.390 "num_base_bdevs_operational": 3, 00:30:52.390 "base_bdevs_list": [ 00:30:52.390 { 00:30:52.390 "name": "spare", 00:30:52.390 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:52.390 "is_configured": true, 00:30:52.390 "data_offset": 2048, 00:30:52.390 "data_size": 63488 00:30:52.390 }, 00:30:52.390 { 00:30:52.390 "name": null, 00:30:52.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.390 "is_configured": false, 00:30:52.390 "data_offset": 2048, 00:30:52.390 "data_size": 63488 00:30:52.390 }, 00:30:52.390 { 00:30:52.390 "name": "BaseBdev3", 00:30:52.390 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:52.390 "is_configured": true, 00:30:52.390 "data_offset": 2048, 00:30:52.390 "data_size": 63488 00:30:52.390 }, 00:30:52.390 { 00:30:52.390 "name": "BaseBdev4", 00:30:52.390 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:52.390 "is_configured": true, 00:30:52.390 "data_offset": 2048, 00:30:52.390 "data_size": 63488 00:30:52.390 } 00:30:52.390 ] 00:30:52.390 }' 00:30:52.390 11:42:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:30:52.390 11:42:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.390 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.648 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:52.649 "name": "raid_bdev1", 00:30:52.649 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:52.649 "strip_size_kb": 0, 00:30:52.649 "state": "online", 00:30:52.649 "raid_level": "raid1", 00:30:52.649 "superblock": true, 00:30:52.649 "num_base_bdevs": 4, 00:30:52.649 "num_base_bdevs_discovered": 3, 00:30:52.649 "num_base_bdevs_operational": 3, 00:30:52.649 "base_bdevs_list": [ 00:30:52.649 { 00:30:52.649 "name": "spare", 00:30:52.649 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:52.649 "is_configured": true, 00:30:52.649 "data_offset": 2048, 00:30:52.649 "data_size": 63488 00:30:52.649 }, 00:30:52.649 { 00:30:52.649 "name": null, 00:30:52.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.649 "is_configured": false, 00:30:52.649 "data_offset": 2048, 00:30:52.649 "data_size": 63488 00:30:52.649 }, 00:30:52.649 { 00:30:52.649 "name": "BaseBdev3", 00:30:52.649 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:52.649 "is_configured": true, 00:30:52.649 "data_offset": 2048, 00:30:52.649 "data_size": 63488 00:30:52.649 }, 00:30:52.649 { 00:30:52.649 "name": "BaseBdev4", 00:30:52.649 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:52.649 "is_configured": true, 00:30:52.649 "data_offset": 2048, 00:30:52.649 "data_size": 63488 00:30:52.649 } 00:30:52.649 ] 00:30:52.649 }' 00:30:52.649 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:52.649 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:52.649 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:52.907 "name": "raid_bdev1", 00:30:52.907 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:52.907 "strip_size_kb": 0, 00:30:52.907 "state": "online", 00:30:52.907 "raid_level": "raid1", 00:30:52.907 "superblock": true, 00:30:52.907 "num_base_bdevs": 4, 00:30:52.907 "num_base_bdevs_discovered": 3, 00:30:52.907 "num_base_bdevs_operational": 3, 00:30:52.907 "base_bdevs_list": [ 00:30:52.907 { 00:30:52.907 "name": "spare", 00:30:52.907 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:52.907 "is_configured": true, 00:30:52.907 "data_offset": 2048, 00:30:52.907 "data_size": 63488 00:30:52.907 }, 00:30:52.907 { 00:30:52.907 "name": null, 00:30:52.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:52.907 "is_configured": false, 00:30:52.907 "data_offset": 2048, 00:30:52.907 "data_size": 63488 00:30:52.907 }, 00:30:52.907 { 00:30:52.907 "name": "BaseBdev3", 00:30:52.907 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:52.907 "is_configured": true, 00:30:52.907 "data_offset": 2048, 00:30:52.907 "data_size": 63488 00:30:52.907 }, 00:30:52.907 { 00:30:52.907 "name": "BaseBdev4", 00:30:52.907 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:52.907 "is_configured": true, 00:30:52.907 "data_offset": 2048, 00:30:52.907 "data_size": 63488 00:30:52.907 } 00:30:52.907 ] 00:30:52.907 }' 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:52.907 11:42:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:53.843 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:53.843 [2024-07-13 11:42:28.476003] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:53.843 [2024-07-13 11:42:28.476313] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:53.843 00:30:53.843 Latency(us) 00:30:53.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.843 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:53.843 raid_bdev1 : 11.33 110.10 330.31 0.00 0.00 13127.57 307.20 111530.36 00:30:53.843 =================================================================================================================== 00:30:53.843 Total : 110.10 330.31 0.00 0.00 13127.57 307.20 111530.36 00:30:53.843 [2024-07-13 11:42:28.590945] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:53.843 [2024-07-13 11:42:28.591097] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:53.844 0 00:30:53.844 [2024-07-13 11:42:28.591260] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:30:53.844 [2024-07-13 11:42:28.591277] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:54.102 11:42:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:54.669 /dev/nbd0 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:54.669 1+0 records in 00:30:54.669 1+0 records out 00:30:54.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355855 s, 11.5 MB/s 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.669 11:42:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:54.669 /dev/nbd1 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:54.669 1+0 records in 00:30:54.669 1+0 records out 00:30:54.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334143 s, 12.3 MB/s 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:54.669 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:54.927 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:54.927 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:54.927 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:54.927 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:54.927 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:54.927 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:54.927 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # 
nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:55.184 11:42:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:55.442 /dev/nbd1 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:55.442 1+0 records in 00:30:55.442 1+0 records out 00:30:55.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026358 s, 15.5 MB/s 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock 
/dev/nbd1 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:55.442 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:55.700 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:55.700 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:55.700 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:55.700 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:55.700 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:55.700 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:55.700 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:55.958 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:30:56.216 
11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:30:56.216 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:56.216 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:56.216 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:56.216 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:56.216 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:56.216 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:56.474 11:42:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:56.474 [2024-07-13 11:42:31.223331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:56.474 [2024-07-13 11:42:31.223643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.474 [2024-07-13 11:42:31.223733] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:30:56.474 [2024-07-13 11:42:31.224051] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.474 [2024-07-13 11:42:31.226770] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.474 [2024-07-13 11:42:31.226969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:56.474 [2024-07-13 11:42:31.227217] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:56.474 [2024-07-13 11:42:31.227380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:56.733 [2024-07-13 11:42:31.227683] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:56.733 [2024-07-13 11:42:31.227977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:56.733 spare 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.733 
11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.733 [2024-07-13 11:42:31.328195] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:30:56.733 [2024-07-13 11:42:31.328304] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:56.733 [2024-07-13 11:42:31.328448] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a3c0 00:30:56.733 [2024-07-13 11:42:31.328892] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:30:56.733 [2024-07-13 11:42:31.328993] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:30:56.733 [2024-07-13 11:42:31.329217] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:56.733 "name": "raid_bdev1", 00:30:56.733 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:56.733 "strip_size_kb": 0, 00:30:56.733 "state": "online", 00:30:56.733 "raid_level": "raid1", 00:30:56.733 "superblock": true, 00:30:56.733 "num_base_bdevs": 4, 00:30:56.733 "num_base_bdevs_discovered": 3, 00:30:56.733 "num_base_bdevs_operational": 3, 00:30:56.733 "base_bdevs_list": [ 00:30:56.733 { 00:30:56.733 "name": "spare", 00:30:56.733 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:56.733 "is_configured": true, 00:30:56.733 "data_offset": 2048, 00:30:56.733 "data_size": 63488 00:30:56.733 }, 00:30:56.733 { 00:30:56.733 "name": null, 00:30:56.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:56.733 "is_configured": false, 00:30:56.733 "data_offset": 2048, 00:30:56.733 "data_size": 63488 00:30:56.733 }, 00:30:56.733 { 00:30:56.733 "name": "BaseBdev3", 00:30:56.733 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:56.733 "is_configured": true, 00:30:56.733 "data_offset": 2048, 00:30:56.733 "data_size": 63488 00:30:56.733 }, 00:30:56.733 { 00:30:56.733 "name": "BaseBdev4", 00:30:56.733 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:56.733 "is_configured": true, 00:30:56.733 "data_offset": 2048, 00:30:56.733 "data_size": 63488 00:30:56.733 } 00:30:56.733 ] 00:30:56.733 }' 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:56.733 11:42:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:57.670 "name": "raid_bdev1", 00:30:57.670 
"uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:57.670 "strip_size_kb": 0, 00:30:57.670 "state": "online", 00:30:57.670 "raid_level": "raid1", 00:30:57.670 "superblock": true, 00:30:57.670 "num_base_bdevs": 4, 00:30:57.670 "num_base_bdevs_discovered": 3, 00:30:57.670 "num_base_bdevs_operational": 3, 00:30:57.670 "base_bdevs_list": [ 00:30:57.670 { 00:30:57.670 "name": "spare", 00:30:57.670 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:30:57.670 "is_configured": true, 00:30:57.670 "data_offset": 2048, 00:30:57.670 "data_size": 63488 00:30:57.670 }, 00:30:57.670 { 00:30:57.670 "name": null, 00:30:57.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.670 "is_configured": false, 00:30:57.670 "data_offset": 2048, 00:30:57.670 "data_size": 63488 00:30:57.670 }, 00:30:57.670 { 00:30:57.670 "name": "BaseBdev3", 00:30:57.670 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:57.670 "is_configured": true, 00:30:57.670 "data_offset": 2048, 00:30:57.670 "data_size": 63488 00:30:57.670 }, 00:30:57.670 { 00:30:57.670 "name": "BaseBdev4", 00:30:57.670 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:57.670 "is_configured": true, 00:30:57.670 "data_offset": 2048, 00:30:57.670 "data_size": 63488 00:30:57.670 } 00:30:57.670 ] 00:30:57.670 }' 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:57.670 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:57.928 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:57.928 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.928 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:58.187 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:58.187 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:58.445 [2024-07-13 11:42:32.960296] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 
-- # local tmp 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.445 11:42:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.445 11:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:58.445 "name": "raid_bdev1", 00:30:58.445 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:30:58.445 "strip_size_kb": 0, 00:30:58.445 "state": "online", 00:30:58.445 "raid_level": "raid1", 00:30:58.445 "superblock": true, 00:30:58.445 "num_base_bdevs": 4, 00:30:58.445 "num_base_bdevs_discovered": 2, 00:30:58.445 "num_base_bdevs_operational": 2, 00:30:58.445 "base_bdevs_list": [ 00:30:58.445 { 00:30:58.445 "name": null, 00:30:58.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.445 "is_configured": false, 00:30:58.445 "data_offset": 2048, 00:30:58.445 "data_size": 63488 00:30:58.445 }, 00:30:58.445 { 00:30:58.445 "name": null, 00:30:58.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.445 "is_configured": false, 00:30:58.445 "data_offset": 2048, 00:30:58.445 "data_size": 63488 00:30:58.445 }, 00:30:58.445 { 00:30:58.445 "name": "BaseBdev3", 00:30:58.445 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:30:58.445 "is_configured": true, 00:30:58.445 "data_offset": 2048, 00:30:58.445 "data_size": 63488 00:30:58.445 }, 00:30:58.445 { 00:30:58.445 "name": "BaseBdev4", 00:30:58.445 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:30:58.445 "is_configured": true, 00:30:58.445 "data_offset": 2048, 00:30:58.445 "data_size": 63488 00:30:58.445 } 00:30:58.445 ] 00:30:58.445 }' 00:30:58.445 11:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:58.445 11:42:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:59.379 11:42:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:59.379 [2024-07-13 11:42:34.116645] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:59.379 [2024-07-13 11:42:34.116892] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:59.379 [2024-07-13 11:42:34.117007] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:59.379 [2024-07-13 11:42:34.117090] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:59.379 [2024-07-13 11:42:34.127163] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a560 00:30:59.379 [2024-07-13 11:42:34.129458] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:59.638 11:42:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:00.573 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:00.574 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:00.574 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:00.574 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:00.574 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:00.574 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.574 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.832 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:00.832 "name": "raid_bdev1", 00:31:00.832 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:31:00.832 "strip_size_kb": 0, 00:31:00.832 "state": "online", 00:31:00.832 "raid_level": "raid1", 00:31:00.832 "superblock": true, 00:31:00.832 "num_base_bdevs": 4, 00:31:00.832 "num_base_bdevs_discovered": 3, 00:31:00.832 "num_base_bdevs_operational": 3, 00:31:00.832 "process": { 00:31:00.832 "type": "rebuild", 00:31:00.832 "target": "spare", 00:31:00.833 "progress": { 00:31:00.833 "blocks": 24576, 00:31:00.833 "percent": 38 00:31:00.833 } 00:31:00.833 }, 00:31:00.833 "base_bdevs_list": [ 00:31:00.833 { 00:31:00.833 "name": "spare", 00:31:00.833 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:31:00.833 "is_configured": true, 00:31:00.833 "data_offset": 2048, 00:31:00.833 "data_size": 63488 00:31:00.833 }, 00:31:00.833 { 00:31:00.833 "name": null, 00:31:00.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.833 "is_configured": false, 00:31:00.833 "data_offset": 2048, 00:31:00.833 "data_size": 63488 00:31:00.833 }, 00:31:00.833 { 00:31:00.833 "name": "BaseBdev3", 00:31:00.833 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:31:00.833 "is_configured": true, 00:31:00.833 "data_offset": 2048, 00:31:00.833 "data_size": 63488 00:31:00.833 }, 00:31:00.833 { 00:31:00.833 "name": "BaseBdev4", 00:31:00.833 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:31:00.833 "is_configured": true, 00:31:00.833 "data_offset": 2048, 00:31:00.833 "data_size": 63488 00:31:00.833 } 00:31:00.833 ] 00:31:00.833 }' 00:31:00.833 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:00.833 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:00.833 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:00.833 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:00.833 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:01.091 [2024-07-13 11:42:35.736279] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:01.091 [2024-07-13 11:42:35.739662] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:01.091 [2024-07-13 11:42:35.739848] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:01.091 [2024-07-13 11:42:35.739899] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:01.091 [2024-07-13 11:42:35.740036] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:01.091 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:01.091 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:01.091 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:01.091 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:01.091 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:01.091 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:01.091 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:01.091 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:01.091 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:01.092 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:01.092 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.092 11:42:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.351 11:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:01.351 "name": "raid_bdev1", 00:31:01.351 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:31:01.351 "strip_size_kb": 0, 00:31:01.351 "state": "online", 00:31:01.351 "raid_level": "raid1", 00:31:01.351 "superblock": true, 00:31:01.351 "num_base_bdevs": 4, 00:31:01.351 "num_base_bdevs_discovered": 2, 00:31:01.351 "num_base_bdevs_operational": 2, 00:31:01.351 "base_bdevs_list": [ 00:31:01.351 { 00:31:01.351 "name": null, 00:31:01.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.351 "is_configured": false, 00:31:01.351 "data_offset": 2048, 00:31:01.351 "data_size": 63488 00:31:01.351 }, 00:31:01.351 { 00:31:01.351 "name": null, 00:31:01.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.351 "is_configured": false, 00:31:01.351 "data_offset": 2048, 00:31:01.351 "data_size": 63488 00:31:01.351 }, 00:31:01.351 { 00:31:01.351 "name": "BaseBdev3", 00:31:01.351 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:31:01.351 "is_configured": true, 00:31:01.351 "data_offset": 2048, 00:31:01.351 "data_size": 63488 00:31:01.351 }, 00:31:01.351 { 00:31:01.351 "name": "BaseBdev4", 00:31:01.351 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:31:01.351 "is_configured": true, 00:31:01.351 "data_offset": 2048, 00:31:01.351 "data_size": 63488 
00:31:01.351 } 00:31:01.351 ] 00:31:01.351 }' 00:31:01.351 11:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:01.351 11:42:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:01.918 11:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:02.177 [2024-07-13 11:42:36.838006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:02.177 [2024-07-13 11:42:36.838258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:02.177 [2024-07-13 11:42:36.838438] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:31:02.177 [2024-07-13 11:42:36.838557] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:02.177 [2024-07-13 11:42:36.839198] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:02.177 [2024-07-13 11:42:36.839359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:02.177 [2024-07-13 11:42:36.839523] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:02.177 [2024-07-13 11:42:36.839623] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:02.177 [2024-07-13 11:42:36.839710] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:02.177 [2024-07-13 11:42:36.839792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:02.177 [2024-07-13 11:42:36.849275] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a8a0 00:31:02.177 spare 00:31:02.177 [2024-07-13 11:42:36.851350] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:02.177 11:42:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:03.112 11:42:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:03.112 11:42:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:03.112 11:42:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:03.112 11:42:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:03.112 11:42:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:03.112 11:42:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.112 11:42:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.371 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:03.371 "name": "raid_bdev1", 00:31:03.371 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:31:03.371 "strip_size_kb": 0, 00:31:03.371 "state": "online", 00:31:03.371 "raid_level": "raid1", 00:31:03.371 "superblock": true, 00:31:03.371 "num_base_bdevs": 4, 00:31:03.371 "num_base_bdevs_discovered": 3, 00:31:03.371 "num_base_bdevs_operational": 3, 00:31:03.371 "process": { 00:31:03.371 "type": "rebuild", 00:31:03.371 "target": 
"spare", 00:31:03.371 "progress": { 00:31:03.371 "blocks": 24576, 00:31:03.371 "percent": 38 00:31:03.371 } 00:31:03.371 }, 00:31:03.371 "base_bdevs_list": [ 00:31:03.371 { 00:31:03.371 "name": "spare", 00:31:03.371 "uuid": "2b186cab-28a7-58cf-8993-5f9ae6365be4", 00:31:03.371 "is_configured": true, 00:31:03.371 "data_offset": 2048, 00:31:03.371 "data_size": 63488 00:31:03.371 }, 00:31:03.371 { 00:31:03.371 "name": null, 00:31:03.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.371 "is_configured": false, 00:31:03.371 "data_offset": 2048, 00:31:03.371 "data_size": 63488 00:31:03.371 }, 00:31:03.371 { 00:31:03.371 "name": "BaseBdev3", 00:31:03.371 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:31:03.371 "is_configured": true, 00:31:03.371 "data_offset": 2048, 00:31:03.371 "data_size": 63488 00:31:03.371 }, 00:31:03.371 { 00:31:03.371 "name": "BaseBdev4", 00:31:03.371 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:31:03.371 "is_configured": true, 00:31:03.371 "data_offset": 2048, 00:31:03.371 "data_size": 63488 00:31:03.371 } 00:31:03.371 ] 00:31:03.371 }' 00:31:03.371 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:03.631 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:03.631 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:03.631 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:03.631 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:03.889 [2024-07-13 11:42:38.453822] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:03.889 [2024-07-13 11:42:38.460280] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:03.889 [2024-07-13 11:42:38.460465] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:03.889 [2024-07-13 11:42:38.460515] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:03.889 [2024-07-13 11:42:38.460650] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:03.889 11:42:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.889 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.148 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:04.148 "name": "raid_bdev1", 00:31:04.148 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:31:04.148 "strip_size_kb": 0, 00:31:04.148 "state": "online", 00:31:04.148 "raid_level": "raid1", 00:31:04.148 "superblock": true, 00:31:04.148 "num_base_bdevs": 4, 00:31:04.148 "num_base_bdevs_discovered": 2, 00:31:04.148 "num_base_bdevs_operational": 2, 00:31:04.148 "base_bdevs_list": [ 00:31:04.148 { 00:31:04.148 "name": null, 00:31:04.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.148 "is_configured": false, 00:31:04.148 "data_offset": 2048, 00:31:04.148 "data_size": 63488 00:31:04.148 }, 00:31:04.148 { 00:31:04.148 "name": null, 00:31:04.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.148 "is_configured": false, 00:31:04.148 "data_offset": 2048, 00:31:04.148 "data_size": 63488 00:31:04.148 }, 00:31:04.148 { 00:31:04.148 "name": "BaseBdev3", 00:31:04.148 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:31:04.148 "is_configured": true, 00:31:04.148 "data_offset": 2048, 00:31:04.148 "data_size": 63488 00:31:04.148 }, 00:31:04.148 { 00:31:04.148 "name": "BaseBdev4", 00:31:04.148 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:31:04.148 "is_configured": true, 00:31:04.148 "data_offset": 2048, 00:31:04.148 "data_size": 63488 00:31:04.148 } 00:31:04.148 ] 00:31:04.148 }' 00:31:04.148 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:04.148 11:42:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:04.715 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:04.715 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:04.715 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:04.715 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:04.715 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:04.715 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.715 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.973 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:04.973 "name": "raid_bdev1", 00:31:04.973 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:31:04.973 "strip_size_kb": 0, 00:31:04.973 "state": "online", 00:31:04.973 "raid_level": "raid1", 00:31:04.973 "superblock": true, 00:31:04.973 "num_base_bdevs": 4, 00:31:04.973 "num_base_bdevs_discovered": 2, 00:31:04.973 "num_base_bdevs_operational": 2, 00:31:04.973 "base_bdevs_list": [ 00:31:04.973 { 00:31:04.973 "name": null, 00:31:04.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.973 "is_configured": false, 00:31:04.973 "data_offset": 2048, 00:31:04.973 "data_size": 63488 00:31:04.973 }, 00:31:04.973 { 00:31:04.973 "name": null, 
00:31:04.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.973 "is_configured": false, 00:31:04.973 "data_offset": 2048, 00:31:04.973 "data_size": 63488 00:31:04.973 }, 00:31:04.973 { 00:31:04.973 "name": "BaseBdev3", 00:31:04.973 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:31:04.973 "is_configured": true, 00:31:04.973 "data_offset": 2048, 00:31:04.973 "data_size": 63488 00:31:04.973 }, 00:31:04.973 { 00:31:04.973 "name": "BaseBdev4", 00:31:04.973 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:31:04.973 "is_configured": true, 00:31:04.973 "data_offset": 2048, 00:31:04.973 "data_size": 63488 00:31:04.973 } 00:31:04.973 ] 00:31:04.973 }' 00:31:04.973 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:04.973 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:04.973 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:04.973 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:04.973 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:05.232 11:42:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:05.490 [2024-07-13 11:42:40.078811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:05.490 [2024-07-13 11:42:40.079026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.490 [2024-07-13 11:42:40.079098] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:31:05.490 [2024-07-13 11:42:40.079216] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.490 [2024-07-13 11:42:40.079714] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.490 [2024-07-13 11:42:40.079871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:05.490 [2024-07-13 11:42:40.080092] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:05.490 [2024-07-13 11:42:40.080276] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:05.490 [2024-07-13 11:42:40.080366] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:05.490 BaseBdev1 00:31:05.490 11:42:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:06.425 
11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.425 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.684 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:06.684 "name": "raid_bdev1", 00:31:06.684 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:31:06.684 "strip_size_kb": 0, 00:31:06.684 "state": "online", 00:31:06.684 "raid_level": "raid1", 00:31:06.684 "superblock": true, 00:31:06.684 "num_base_bdevs": 4, 00:31:06.684 "num_base_bdevs_discovered": 2, 00:31:06.684 "num_base_bdevs_operational": 2, 00:31:06.684 "base_bdevs_list": [ 00:31:06.684 { 00:31:06.684 "name": null, 00:31:06.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.684 "is_configured": false, 00:31:06.684 "data_offset": 2048, 00:31:06.684 "data_size": 63488 00:31:06.684 }, 00:31:06.684 { 00:31:06.684 "name": null, 00:31:06.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.684 "is_configured": false, 00:31:06.684 "data_offset": 2048, 00:31:06.684 "data_size": 63488 00:31:06.684 }, 00:31:06.684 { 00:31:06.684 "name": "BaseBdev3", 00:31:06.684 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:31:06.684 "is_configured": true, 00:31:06.684 "data_offset": 2048, 00:31:06.684 "data_size": 63488 00:31:06.684 }, 00:31:06.684 { 00:31:06.684 "name": "BaseBdev4", 00:31:06.684 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:31:06.684 "is_configured": true, 00:31:06.684 "data_offset": 2048, 00:31:06.684 "data_size": 63488 00:31:06.684 } 00:31:06.684 ] 00:31:06.684 }' 00:31:06.684 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:06.684 11:42:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:07.621 "name": "raid_bdev1", 00:31:07.621 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:31:07.621 "strip_size_kb": 0, 00:31:07.621 "state": "online", 00:31:07.621 "raid_level": "raid1", 00:31:07.621 
"superblock": true, 00:31:07.621 "num_base_bdevs": 4, 00:31:07.621 "num_base_bdevs_discovered": 2, 00:31:07.621 "num_base_bdevs_operational": 2, 00:31:07.621 "base_bdevs_list": [ 00:31:07.621 { 00:31:07.621 "name": null, 00:31:07.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.621 "is_configured": false, 00:31:07.621 "data_offset": 2048, 00:31:07.621 "data_size": 63488 00:31:07.621 }, 00:31:07.621 { 00:31:07.621 "name": null, 00:31:07.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:07.621 "is_configured": false, 00:31:07.621 "data_offset": 2048, 00:31:07.621 "data_size": 63488 00:31:07.621 }, 00:31:07.621 { 00:31:07.621 "name": "BaseBdev3", 00:31:07.621 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:31:07.621 "is_configured": true, 00:31:07.621 "data_offset": 2048, 00:31:07.621 "data_size": 63488 00:31:07.621 }, 00:31:07.621 { 00:31:07.621 "name": "BaseBdev4", 00:31:07.621 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:31:07.621 "is_configured": true, 00:31:07.621 "data_offset": 2048, 00:31:07.621 "data_size": 63488 00:31:07.621 } 00:31:07.621 ] 00:31:07.621 }' 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:07.621 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:07.895 [2024-07-13 11:42:42.595606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:07.895 
[2024-07-13 11:42:42.595829] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:07.895 [2024-07-13 11:42:42.595939] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:07.895 request: 00:31:07.895 { 00:31:07.895 "base_bdev": "BaseBdev1", 00:31:07.895 "raid_bdev": "raid_bdev1", 00:31:07.895 "method": "bdev_raid_add_base_bdev", 00:31:07.895 "req_id": 1 00:31:07.895 } 00:31:07.895 Got JSON-RPC error response 00:31:07.895 response: 00:31:07.895 { 00:31:07.895 "code": -22, 00:31:07.895 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:07.895 } 00:31:07.895 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:31:07.895 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:07.895 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:07.895 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:07.895 11:42:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.868 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.126 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:09.126 "name": "raid_bdev1", 00:31:09.126 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:31:09.126 "strip_size_kb": 0, 00:31:09.126 "state": "online", 00:31:09.126 "raid_level": "raid1", 00:31:09.126 "superblock": true, 00:31:09.126 "num_base_bdevs": 4, 00:31:09.126 "num_base_bdevs_discovered": 2, 00:31:09.126 "num_base_bdevs_operational": 2, 00:31:09.126 "base_bdevs_list": [ 00:31:09.126 { 00:31:09.126 "name": null, 00:31:09.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.126 "is_configured": false, 00:31:09.126 "data_offset": 2048, 00:31:09.126 "data_size": 63488 00:31:09.126 }, 00:31:09.126 { 00:31:09.126 "name": null, 00:31:09.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.126 "is_configured": false, 00:31:09.126 
"data_offset": 2048, 00:31:09.126 "data_size": 63488 00:31:09.126 }, 00:31:09.126 { 00:31:09.126 "name": "BaseBdev3", 00:31:09.127 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:31:09.127 "is_configured": true, 00:31:09.127 "data_offset": 2048, 00:31:09.127 "data_size": 63488 00:31:09.127 }, 00:31:09.127 { 00:31:09.127 "name": "BaseBdev4", 00:31:09.127 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:31:09.127 "is_configured": true, 00:31:09.127 "data_offset": 2048, 00:31:09.127 "data_size": 63488 00:31:09.127 } 00:31:09.127 ] 00:31:09.127 }' 00:31:09.127 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:09.127 11:42:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:10.062 "name": "raid_bdev1", 00:31:10.062 "uuid": "fc10bab5-c34c-4f84-9296-98146ad28ac1", 00:31:10.062 "strip_size_kb": 0, 00:31:10.062 "state": "online", 00:31:10.062 "raid_level": "raid1", 00:31:10.062 "superblock": true, 00:31:10.062 "num_base_bdevs": 4, 00:31:10.062 "num_base_bdevs_discovered": 2, 00:31:10.062 "num_base_bdevs_operational": 2, 00:31:10.062 "base_bdevs_list": [ 00:31:10.062 { 00:31:10.062 "name": null, 00:31:10.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.062 "is_configured": false, 00:31:10.062 "data_offset": 2048, 00:31:10.062 "data_size": 63488 00:31:10.062 }, 00:31:10.062 { 00:31:10.062 "name": null, 00:31:10.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.062 "is_configured": false, 00:31:10.062 "data_offset": 2048, 00:31:10.062 "data_size": 63488 00:31:10.062 }, 00:31:10.062 { 00:31:10.062 "name": "BaseBdev3", 00:31:10.062 "uuid": "469db0d3-16ce-5cab-a88b-dff971c2cbcd", 00:31:10.062 "is_configured": true, 00:31:10.062 "data_offset": 2048, 00:31:10.062 "data_size": 63488 00:31:10.062 }, 00:31:10.062 { 00:31:10.062 "name": "BaseBdev4", 00:31:10.062 "uuid": "0bdadbbd-7dbd-5e24-b142-241a26fdaafc", 00:31:10.062 "is_configured": true, 00:31:10.062 "data_offset": 2048, 00:31:10.062 "data_size": 63488 00:31:10.062 } 00:31:10.062 ] 00:31:10.062 }' 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:10.062 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:10.321 11:42:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 149990 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 149990 ']' 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 149990 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149990 00:31:10.321 killing process with pid 149990 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149990' 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 149990 00:31:10.321 11:42:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 149990 00:31:10.321 Received shutdown signal, test time was about 27.656786 seconds 00:31:10.321 00:31:10.321 Latency(us) 00:31:10.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:10.321 =================================================================================================================== 00:31:10.321 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:10.321 [2024-07-13 11:42:44.898006] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:10.321 [2024-07-13 11:42:44.898124] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:10.321 [2024-07-13 11:42:44.898184] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:10.321 [2024-07-13 11:42:44.898239] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:31:10.580 [2024-07-13 11:42:45.189132] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:11.515 ************************************ 00:31:11.515 END TEST raid_rebuild_test_sb_io 00:31:11.515 ************************************ 00:31:11.515 11:42:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:31:11.515 00:31:11.515 real 0m34.121s 00:31:11.515 user 0m55.416s 00:31:11.515 sys 0m3.153s 00:31:11.515 11:42:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:11.515 11:42:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:11.774 11:42:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:11.774 11:42:46 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' y == y ']' 00:31:11.774 11:42:46 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:31:11.774 11:42:46 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:31:11.774 11:42:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:11.774 11:42:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:11.774 11:42:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:11.774 ************************************ 00:31:11.774 START TEST raid5f_state_function_test 
00:31:11.774 ************************************ 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 false 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=150957 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:11.774 Process raid pid: 150957 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 150957' 
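A minimal sketch of the create-and-inspect flow this raid5f_state_function_test run is about to exercise, driven by hand with rpc.py against an already-running bdev_svc instance. Every command, flag, bdev name, and the socket path below are taken from this log (bdev_raid_create -z 64 -r raid5f, bdev_malloc_create 32 512, bdev_raid_get_bdevs all); treat it as an illustrative replay under those assumptions, not the test script itself:

  # rpc.py path and raid test socket used throughout this run.
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Create the raid5f volume first; with no base bdevs present it sits in "configuring".
  $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Back it with three malloc bdevs (32 MiB, 512-byte blocks); each is claimed on
  # examine, and the raid transitions to "online" once the last one is configured.
  for name in BaseBdev1 BaseBdev2 BaseBdev3; do
      $RPC bdev_malloc_create 32 512 -b "$name"
  done

  # Inspect the assembled volume the same way verify_raid_bdev_state does below.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'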
00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 150957 /var/tmp/spdk-raid.sock 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 150957 ']' 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:11.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:11.774 11:42:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.774 [2024-07-13 11:42:46.387968] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:11.774 [2024-07-13 11:42:46.388322] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.033 [2024-07-13 11:42:46.555224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.033 [2024-07-13 11:42:46.736889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.291 [2024-07-13 11:42:46.925743] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:12.549 11:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:12.549 11:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:31:12.549 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:12.808 [2024-07-13 11:42:47.499678] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:12.808 [2024-07-13 11:42:47.500014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:12.808 [2024-07-13 11:42:47.500121] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:12.808 [2024-07-13 11:42:47.500185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:12.808 [2024-07-13 11:42:47.500272] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:12.808 [2024-07-13 11:42:47.500322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:12.808 11:42:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.808 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:13.067 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.067 "name": "Existed_Raid", 00:31:13.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.067 "strip_size_kb": 64, 00:31:13.067 "state": "configuring", 00:31:13.067 "raid_level": "raid5f", 00:31:13.067 "superblock": false, 00:31:13.067 "num_base_bdevs": 3, 00:31:13.067 "num_base_bdevs_discovered": 0, 00:31:13.067 "num_base_bdevs_operational": 3, 00:31:13.067 "base_bdevs_list": [ 00:31:13.067 { 00:31:13.067 "name": "BaseBdev1", 00:31:13.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.067 "is_configured": false, 00:31:13.067 "data_offset": 0, 00:31:13.067 "data_size": 0 00:31:13.067 }, 00:31:13.067 { 00:31:13.067 "name": "BaseBdev2", 00:31:13.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.067 "is_configured": false, 00:31:13.067 "data_offset": 0, 00:31:13.067 "data_size": 0 00:31:13.067 }, 00:31:13.067 { 00:31:13.067 "name": "BaseBdev3", 00:31:13.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.067 "is_configured": false, 00:31:13.067 "data_offset": 0, 00:31:13.067 "data_size": 0 00:31:13.067 } 00:31:13.067 ] 00:31:13.067 }' 00:31:13.067 11:42:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.067 11:42:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:14.000 11:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:14.000 [2024-07-13 11:42:48.647762] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:14.000 [2024-07-13 11:42:48.647912] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:31:14.000 11:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:14.258 [2024-07-13 11:42:48.915811] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:14.258 [2024-07-13 11:42:48.915985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:14.258 [2024-07-13 11:42:48.916080] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:14.258 [2024-07-13 11:42:48.916235] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:14.258 [2024-07-13 11:42:48.916328] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:14.258 [2024-07-13 11:42:48.916384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:14.258 11:42:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:14.517 [2024-07-13 11:42:49.217389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:14.517 BaseBdev1 00:31:14.517 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:14.517 11:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:31:14.517 11:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:14.517 11:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:14.517 11:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:14.517 11:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:14.517 11:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:14.775 11:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:15.032 [ 00:31:15.032 { 00:31:15.032 "name": "BaseBdev1", 00:31:15.032 "aliases": [ 00:31:15.032 "62af0932-0831-4cab-8b40-f6c86ce7cc5d" 00:31:15.032 ], 00:31:15.032 "product_name": "Malloc disk", 00:31:15.032 "block_size": 512, 00:31:15.032 "num_blocks": 65536, 00:31:15.032 "uuid": "62af0932-0831-4cab-8b40-f6c86ce7cc5d", 00:31:15.032 "assigned_rate_limits": { 00:31:15.032 "rw_ios_per_sec": 0, 00:31:15.032 "rw_mbytes_per_sec": 0, 00:31:15.032 "r_mbytes_per_sec": 0, 00:31:15.032 "w_mbytes_per_sec": 0 00:31:15.032 }, 00:31:15.032 "claimed": true, 00:31:15.032 "claim_type": "exclusive_write", 00:31:15.032 "zoned": false, 00:31:15.032 "supported_io_types": { 00:31:15.032 "read": true, 00:31:15.032 "write": true, 00:31:15.032 "unmap": true, 00:31:15.032 "flush": true, 00:31:15.032 "reset": true, 00:31:15.032 "nvme_admin": false, 00:31:15.032 "nvme_io": false, 00:31:15.032 "nvme_io_md": false, 00:31:15.032 "write_zeroes": true, 00:31:15.032 "zcopy": true, 00:31:15.032 "get_zone_info": false, 00:31:15.032 "zone_management": false, 00:31:15.032 "zone_append": false, 00:31:15.032 "compare": false, 00:31:15.032 "compare_and_write": false, 00:31:15.032 "abort": true, 00:31:15.032 "seek_hole": false, 00:31:15.032 "seek_data": false, 00:31:15.032 "copy": true, 00:31:15.032 "nvme_iov_md": false 00:31:15.032 }, 00:31:15.032 "memory_domains": [ 00:31:15.032 { 00:31:15.032 "dma_device_id": "system", 00:31:15.032 "dma_device_type": 1 00:31:15.032 }, 00:31:15.032 { 00:31:15.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.032 "dma_device_type": 2 00:31:15.032 } 00:31:15.032 ], 00:31:15.032 "driver_specific": {} 00:31:15.032 } 00:31:15.032 ] 00:31:15.032 11:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:15.032 11:42:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:15.032 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:15.032 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:15.032 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:15.033 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:15.033 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:15.033 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:15.033 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:15.033 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:15.033 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:15.033 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.033 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:15.290 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:15.290 "name": "Existed_Raid", 00:31:15.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.290 "strip_size_kb": 64, 00:31:15.290 "state": "configuring", 00:31:15.290 "raid_level": "raid5f", 00:31:15.290 "superblock": false, 00:31:15.290 "num_base_bdevs": 3, 00:31:15.290 "num_base_bdevs_discovered": 1, 00:31:15.290 "num_base_bdevs_operational": 3, 00:31:15.290 "base_bdevs_list": [ 00:31:15.290 { 00:31:15.290 "name": "BaseBdev1", 00:31:15.290 "uuid": "62af0932-0831-4cab-8b40-f6c86ce7cc5d", 00:31:15.290 "is_configured": true, 00:31:15.290 "data_offset": 0, 00:31:15.290 "data_size": 65536 00:31:15.290 }, 00:31:15.290 { 00:31:15.290 "name": "BaseBdev2", 00:31:15.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.290 "is_configured": false, 00:31:15.290 "data_offset": 0, 00:31:15.290 "data_size": 0 00:31:15.290 }, 00:31:15.290 { 00:31:15.290 "name": "BaseBdev3", 00:31:15.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.290 "is_configured": false, 00:31:15.290 "data_offset": 0, 00:31:15.290 "data_size": 0 00:31:15.290 } 00:31:15.290 ] 00:31:15.290 }' 00:31:15.290 11:42:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:15.290 11:42:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.856 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:16.114 [2024-07-13 11:42:50.793692] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:16.114 [2024-07-13 11:42:50.793847] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:31:16.114 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:16.372 [2024-07-13 11:42:50.977742] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:16.372 [2024-07-13 11:42:50.979757] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:16.372 [2024-07-13 11:42:50.979935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:16.372 [2024-07-13 11:42:50.980069] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:16.372 [2024-07-13 11:42:50.980151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.372 11:42:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:16.630 11:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.630 "name": "Existed_Raid", 00:31:16.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.630 "strip_size_kb": 64, 00:31:16.630 "state": "configuring", 00:31:16.630 "raid_level": "raid5f", 00:31:16.630 "superblock": false, 00:31:16.630 "num_base_bdevs": 3, 00:31:16.630 "num_base_bdevs_discovered": 1, 00:31:16.630 "num_base_bdevs_operational": 3, 00:31:16.630 "base_bdevs_list": [ 00:31:16.630 { 00:31:16.630 "name": "BaseBdev1", 00:31:16.630 "uuid": "62af0932-0831-4cab-8b40-f6c86ce7cc5d", 00:31:16.630 "is_configured": true, 00:31:16.630 "data_offset": 0, 00:31:16.630 "data_size": 65536 00:31:16.630 }, 00:31:16.630 { 00:31:16.630 "name": "BaseBdev2", 00:31:16.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.630 "is_configured": false, 00:31:16.630 "data_offset": 0, 00:31:16.630 "data_size": 0 00:31:16.630 }, 00:31:16.630 { 00:31:16.630 "name": "BaseBdev3", 00:31:16.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.630 "is_configured": false, 00:31:16.630 "data_offset": 0, 00:31:16.630 
"data_size": 0 00:31:16.630 } 00:31:16.630 ] 00:31:16.630 }' 00:31:16.630 11:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.630 11:42:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.197 11:42:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:17.455 [2024-07-13 11:42:52.033772] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:17.455 BaseBdev2 00:31:17.455 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:17.455 11:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:17.455 11:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:17.455 11:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:17.455 11:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:17.455 11:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:17.455 11:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:17.714 [ 00:31:17.714 { 00:31:17.714 "name": "BaseBdev2", 00:31:17.714 "aliases": [ 00:31:17.714 "43832947-706c-4c5b-87f0-21bb3c471d62" 00:31:17.714 ], 00:31:17.714 "product_name": "Malloc disk", 00:31:17.714 "block_size": 512, 00:31:17.714 "num_blocks": 65536, 00:31:17.714 "uuid": "43832947-706c-4c5b-87f0-21bb3c471d62", 00:31:17.714 "assigned_rate_limits": { 00:31:17.714 "rw_ios_per_sec": 0, 00:31:17.714 "rw_mbytes_per_sec": 0, 00:31:17.714 "r_mbytes_per_sec": 0, 00:31:17.714 "w_mbytes_per_sec": 0 00:31:17.714 }, 00:31:17.714 "claimed": true, 00:31:17.714 "claim_type": "exclusive_write", 00:31:17.714 "zoned": false, 00:31:17.714 "supported_io_types": { 00:31:17.714 "read": true, 00:31:17.714 "write": true, 00:31:17.714 "unmap": true, 00:31:17.714 "flush": true, 00:31:17.714 "reset": true, 00:31:17.714 "nvme_admin": false, 00:31:17.714 "nvme_io": false, 00:31:17.714 "nvme_io_md": false, 00:31:17.714 "write_zeroes": true, 00:31:17.714 "zcopy": true, 00:31:17.714 "get_zone_info": false, 00:31:17.714 "zone_management": false, 00:31:17.714 "zone_append": false, 00:31:17.714 "compare": false, 00:31:17.714 "compare_and_write": false, 00:31:17.714 "abort": true, 00:31:17.714 "seek_hole": false, 00:31:17.714 "seek_data": false, 00:31:17.714 "copy": true, 00:31:17.714 "nvme_iov_md": false 00:31:17.714 }, 00:31:17.714 "memory_domains": [ 00:31:17.714 { 00:31:17.714 "dma_device_id": "system", 00:31:17.714 "dma_device_type": 1 00:31:17.714 }, 00:31:17.714 { 00:31:17.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:17.714 "dma_device_type": 2 00:31:17.714 } 00:31:17.714 ], 00:31:17.714 "driver_specific": {} 00:31:17.714 } 00:31:17.714 ] 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:17.714 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:17.715 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.715 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:17.973 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:17.973 "name": "Existed_Raid", 00:31:17.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:17.973 "strip_size_kb": 64, 00:31:17.973 "state": "configuring", 00:31:17.973 "raid_level": "raid5f", 00:31:17.973 "superblock": false, 00:31:17.973 "num_base_bdevs": 3, 00:31:17.973 "num_base_bdevs_discovered": 2, 00:31:17.973 "num_base_bdevs_operational": 3, 00:31:17.973 "base_bdevs_list": [ 00:31:17.973 { 00:31:17.973 "name": "BaseBdev1", 00:31:17.973 "uuid": "62af0932-0831-4cab-8b40-f6c86ce7cc5d", 00:31:17.973 "is_configured": true, 00:31:17.973 "data_offset": 0, 00:31:17.973 "data_size": 65536 00:31:17.973 }, 00:31:17.973 { 00:31:17.973 "name": "BaseBdev2", 00:31:17.973 "uuid": "43832947-706c-4c5b-87f0-21bb3c471d62", 00:31:17.973 "is_configured": true, 00:31:17.973 "data_offset": 0, 00:31:17.973 "data_size": 65536 00:31:17.973 }, 00:31:17.973 { 00:31:17.973 "name": "BaseBdev3", 00:31:17.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:17.973 "is_configured": false, 00:31:17.973 "data_offset": 0, 00:31:17.973 "data_size": 0 00:31:17.973 } 00:31:17.973 ] 00:31:17.973 }' 00:31:17.973 11:42:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:17.973 11:42:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:18.908 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:18.908 [2024-07-13 11:42:53.581338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:18.908 [2024-07-13 11:42:53.581588] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:31:18.908 [2024-07-13 11:42:53.581631] 
bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:18.908 [2024-07-13 11:42:53.581862] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:31:18.908 [2024-07-13 11:42:53.586316] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:31:18.908 [2024-07-13 11:42:53.586460] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:31:18.908 BaseBdev3 00:31:18.908 [2024-07-13 11:42:53.586807] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:18.908 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:31:18.908 11:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:31:18.908 11:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:18.908 11:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:18.908 11:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:18.908 11:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:18.908 11:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:19.165 11:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:19.423 [ 00:31:19.423 { 00:31:19.423 "name": "BaseBdev3", 00:31:19.423 "aliases": [ 00:31:19.423 "9ae31a7c-a020-4aab-8a84-f919202258b1" 00:31:19.423 ], 00:31:19.423 "product_name": "Malloc disk", 00:31:19.423 "block_size": 512, 00:31:19.423 "num_blocks": 65536, 00:31:19.423 "uuid": "9ae31a7c-a020-4aab-8a84-f919202258b1", 00:31:19.423 "assigned_rate_limits": { 00:31:19.423 "rw_ios_per_sec": 0, 00:31:19.423 "rw_mbytes_per_sec": 0, 00:31:19.423 "r_mbytes_per_sec": 0, 00:31:19.423 "w_mbytes_per_sec": 0 00:31:19.423 }, 00:31:19.423 "claimed": true, 00:31:19.423 "claim_type": "exclusive_write", 00:31:19.423 "zoned": false, 00:31:19.423 "supported_io_types": { 00:31:19.423 "read": true, 00:31:19.423 "write": true, 00:31:19.423 "unmap": true, 00:31:19.423 "flush": true, 00:31:19.423 "reset": true, 00:31:19.423 "nvme_admin": false, 00:31:19.423 "nvme_io": false, 00:31:19.423 "nvme_io_md": false, 00:31:19.423 "write_zeroes": true, 00:31:19.423 "zcopy": true, 00:31:19.423 "get_zone_info": false, 00:31:19.423 "zone_management": false, 00:31:19.423 "zone_append": false, 00:31:19.423 "compare": false, 00:31:19.423 "compare_and_write": false, 00:31:19.423 "abort": true, 00:31:19.423 "seek_hole": false, 00:31:19.423 "seek_data": false, 00:31:19.423 "copy": true, 00:31:19.423 "nvme_iov_md": false 00:31:19.423 }, 00:31:19.423 "memory_domains": [ 00:31:19.423 { 00:31:19.423 "dma_device_id": "system", 00:31:19.423 "dma_device_type": 1 00:31:19.423 }, 00:31:19.423 { 00:31:19.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:19.423 "dma_device_type": 2 00:31:19.423 } 00:31:19.423 ], 00:31:19.423 "driver_specific": {} 00:31:19.423 } 00:31:19.423 ] 00:31:19.423 11:42:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:19.423 11:42:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:19.423 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:19.423 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:19.423 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.424 11:42:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:19.681 11:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:19.681 "name": "Existed_Raid", 00:31:19.681 "uuid": "72fda673-ebe4-49b2-b996-e25d315a82e4", 00:31:19.681 "strip_size_kb": 64, 00:31:19.681 "state": "online", 00:31:19.681 "raid_level": "raid5f", 00:31:19.681 "superblock": false, 00:31:19.681 "num_base_bdevs": 3, 00:31:19.682 "num_base_bdevs_discovered": 3, 00:31:19.682 "num_base_bdevs_operational": 3, 00:31:19.682 "base_bdevs_list": [ 00:31:19.682 { 00:31:19.682 "name": "BaseBdev1", 00:31:19.682 "uuid": "62af0932-0831-4cab-8b40-f6c86ce7cc5d", 00:31:19.682 "is_configured": true, 00:31:19.682 "data_offset": 0, 00:31:19.682 "data_size": 65536 00:31:19.682 }, 00:31:19.682 { 00:31:19.682 "name": "BaseBdev2", 00:31:19.682 "uuid": "43832947-706c-4c5b-87f0-21bb3c471d62", 00:31:19.682 "is_configured": true, 00:31:19.682 "data_offset": 0, 00:31:19.682 "data_size": 65536 00:31:19.682 }, 00:31:19.682 { 00:31:19.682 "name": "BaseBdev3", 00:31:19.682 "uuid": "9ae31a7c-a020-4aab-8a84-f919202258b1", 00:31:19.682 "is_configured": true, 00:31:19.682 "data_offset": 0, 00:31:19.682 "data_size": 65536 00:31:19.682 } 00:31:19.682 ] 00:31:19.682 }' 00:31:19.682 11:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:19.682 11:42:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.247 11:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:20.248 11:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:20.248 11:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:20.248 11:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:20.248 11:42:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:20.248 11:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:20.248 11:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:20.248 11:42:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:20.505 [2024-07-13 11:42:55.124157] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:20.505 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:20.505 "name": "Existed_Raid", 00:31:20.505 "aliases": [ 00:31:20.505 "72fda673-ebe4-49b2-b996-e25d315a82e4" 00:31:20.505 ], 00:31:20.505 "product_name": "Raid Volume", 00:31:20.505 "block_size": 512, 00:31:20.505 "num_blocks": 131072, 00:31:20.505 "uuid": "72fda673-ebe4-49b2-b996-e25d315a82e4", 00:31:20.505 "assigned_rate_limits": { 00:31:20.505 "rw_ios_per_sec": 0, 00:31:20.505 "rw_mbytes_per_sec": 0, 00:31:20.505 "r_mbytes_per_sec": 0, 00:31:20.505 "w_mbytes_per_sec": 0 00:31:20.505 }, 00:31:20.505 "claimed": false, 00:31:20.505 "zoned": false, 00:31:20.505 "supported_io_types": { 00:31:20.505 "read": true, 00:31:20.505 "write": true, 00:31:20.506 "unmap": false, 00:31:20.506 "flush": false, 00:31:20.506 "reset": true, 00:31:20.506 "nvme_admin": false, 00:31:20.506 "nvme_io": false, 00:31:20.506 "nvme_io_md": false, 00:31:20.506 "write_zeroes": true, 00:31:20.506 "zcopy": false, 00:31:20.506 "get_zone_info": false, 00:31:20.506 "zone_management": false, 00:31:20.506 "zone_append": false, 00:31:20.506 "compare": false, 00:31:20.506 "compare_and_write": false, 00:31:20.506 "abort": false, 00:31:20.506 "seek_hole": false, 00:31:20.506 "seek_data": false, 00:31:20.506 "copy": false, 00:31:20.506 "nvme_iov_md": false 00:31:20.506 }, 00:31:20.506 "driver_specific": { 00:31:20.506 "raid": { 00:31:20.506 "uuid": "72fda673-ebe4-49b2-b996-e25d315a82e4", 00:31:20.506 "strip_size_kb": 64, 00:31:20.506 "state": "online", 00:31:20.506 "raid_level": "raid5f", 00:31:20.506 "superblock": false, 00:31:20.506 "num_base_bdevs": 3, 00:31:20.506 "num_base_bdevs_discovered": 3, 00:31:20.506 "num_base_bdevs_operational": 3, 00:31:20.506 "base_bdevs_list": [ 00:31:20.506 { 00:31:20.506 "name": "BaseBdev1", 00:31:20.506 "uuid": "62af0932-0831-4cab-8b40-f6c86ce7cc5d", 00:31:20.506 "is_configured": true, 00:31:20.506 "data_offset": 0, 00:31:20.506 "data_size": 65536 00:31:20.506 }, 00:31:20.506 { 00:31:20.506 "name": "BaseBdev2", 00:31:20.506 "uuid": "43832947-706c-4c5b-87f0-21bb3c471d62", 00:31:20.506 "is_configured": true, 00:31:20.506 "data_offset": 0, 00:31:20.506 "data_size": 65536 00:31:20.506 }, 00:31:20.506 { 00:31:20.506 "name": "BaseBdev3", 00:31:20.506 "uuid": "9ae31a7c-a020-4aab-8a84-f919202258b1", 00:31:20.506 "is_configured": true, 00:31:20.506 "data_offset": 0, 00:31:20.506 "data_size": 65536 00:31:20.506 } 00:31:20.506 ] 00:31:20.506 } 00:31:20.506 } 00:31:20.506 }' 00:31:20.506 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:20.506 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:20.506 BaseBdev2 00:31:20.506 BaseBdev3' 00:31:20.506 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:31:20.506 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:20.506 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:20.764 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:20.764 "name": "BaseBdev1", 00:31:20.764 "aliases": [ 00:31:20.764 "62af0932-0831-4cab-8b40-f6c86ce7cc5d" 00:31:20.764 ], 00:31:20.764 "product_name": "Malloc disk", 00:31:20.764 "block_size": 512, 00:31:20.764 "num_blocks": 65536, 00:31:20.764 "uuid": "62af0932-0831-4cab-8b40-f6c86ce7cc5d", 00:31:20.764 "assigned_rate_limits": { 00:31:20.764 "rw_ios_per_sec": 0, 00:31:20.764 "rw_mbytes_per_sec": 0, 00:31:20.764 "r_mbytes_per_sec": 0, 00:31:20.764 "w_mbytes_per_sec": 0 00:31:20.764 }, 00:31:20.764 "claimed": true, 00:31:20.764 "claim_type": "exclusive_write", 00:31:20.764 "zoned": false, 00:31:20.764 "supported_io_types": { 00:31:20.764 "read": true, 00:31:20.764 "write": true, 00:31:20.764 "unmap": true, 00:31:20.764 "flush": true, 00:31:20.764 "reset": true, 00:31:20.764 "nvme_admin": false, 00:31:20.764 "nvme_io": false, 00:31:20.764 "nvme_io_md": false, 00:31:20.764 "write_zeroes": true, 00:31:20.764 "zcopy": true, 00:31:20.764 "get_zone_info": false, 00:31:20.764 "zone_management": false, 00:31:20.764 "zone_append": false, 00:31:20.764 "compare": false, 00:31:20.764 "compare_and_write": false, 00:31:20.764 "abort": true, 00:31:20.764 "seek_hole": false, 00:31:20.764 "seek_data": false, 00:31:20.764 "copy": true, 00:31:20.764 "nvme_iov_md": false 00:31:20.764 }, 00:31:20.764 "memory_domains": [ 00:31:20.764 { 00:31:20.764 "dma_device_id": "system", 00:31:20.764 "dma_device_type": 1 00:31:20.764 }, 00:31:20.764 { 00:31:20.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:20.764 "dma_device_type": 2 00:31:20.764 } 00:31:20.764 ], 00:31:20.764 "driver_specific": {} 00:31:20.764 }' 00:31:20.764 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:20.764 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:20.764 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:20.764 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:21.022 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:21.022 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:21.022 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:21.023 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:21.023 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:21.023 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:21.023 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:21.280 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:21.280 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:21.280 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:21.280 11:42:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:21.539 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:21.539 "name": "BaseBdev2", 00:31:21.539 "aliases": [ 00:31:21.539 "43832947-706c-4c5b-87f0-21bb3c471d62" 00:31:21.539 ], 00:31:21.539 "product_name": "Malloc disk", 00:31:21.539 "block_size": 512, 00:31:21.539 "num_blocks": 65536, 00:31:21.539 "uuid": "43832947-706c-4c5b-87f0-21bb3c471d62", 00:31:21.539 "assigned_rate_limits": { 00:31:21.539 "rw_ios_per_sec": 0, 00:31:21.539 "rw_mbytes_per_sec": 0, 00:31:21.539 "r_mbytes_per_sec": 0, 00:31:21.539 "w_mbytes_per_sec": 0 00:31:21.539 }, 00:31:21.539 "claimed": true, 00:31:21.539 "claim_type": "exclusive_write", 00:31:21.539 "zoned": false, 00:31:21.539 "supported_io_types": { 00:31:21.539 "read": true, 00:31:21.539 "write": true, 00:31:21.539 "unmap": true, 00:31:21.539 "flush": true, 00:31:21.539 "reset": true, 00:31:21.539 "nvme_admin": false, 00:31:21.539 "nvme_io": false, 00:31:21.539 "nvme_io_md": false, 00:31:21.539 "write_zeroes": true, 00:31:21.539 "zcopy": true, 00:31:21.539 "get_zone_info": false, 00:31:21.539 "zone_management": false, 00:31:21.539 "zone_append": false, 00:31:21.539 "compare": false, 00:31:21.539 "compare_and_write": false, 00:31:21.539 "abort": true, 00:31:21.539 "seek_hole": false, 00:31:21.539 "seek_data": false, 00:31:21.539 "copy": true, 00:31:21.539 "nvme_iov_md": false 00:31:21.539 }, 00:31:21.539 "memory_domains": [ 00:31:21.539 { 00:31:21.539 "dma_device_id": "system", 00:31:21.539 "dma_device_type": 1 00:31:21.539 }, 00:31:21.539 { 00:31:21.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:21.539 "dma_device_type": 2 00:31:21.539 } 00:31:21.539 ], 00:31:21.539 "driver_specific": {} 00:31:21.539 }' 00:31:21.539 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:21.539 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:21.539 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:21.539 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:21.539 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:21.797 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:22.054 
11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:22.054 "name": "BaseBdev3", 00:31:22.054 "aliases": [ 00:31:22.054 "9ae31a7c-a020-4aab-8a84-f919202258b1" 00:31:22.054 ], 00:31:22.054 "product_name": "Malloc disk", 00:31:22.054 "block_size": 512, 00:31:22.054 "num_blocks": 65536, 00:31:22.054 "uuid": "9ae31a7c-a020-4aab-8a84-f919202258b1", 00:31:22.054 "assigned_rate_limits": { 00:31:22.054 "rw_ios_per_sec": 0, 00:31:22.054 "rw_mbytes_per_sec": 0, 00:31:22.054 "r_mbytes_per_sec": 0, 00:31:22.054 "w_mbytes_per_sec": 0 00:31:22.054 }, 00:31:22.055 "claimed": true, 00:31:22.055 "claim_type": "exclusive_write", 00:31:22.055 "zoned": false, 00:31:22.055 "supported_io_types": { 00:31:22.055 "read": true, 00:31:22.055 "write": true, 00:31:22.055 "unmap": true, 00:31:22.055 "flush": true, 00:31:22.055 "reset": true, 00:31:22.055 "nvme_admin": false, 00:31:22.055 "nvme_io": false, 00:31:22.055 "nvme_io_md": false, 00:31:22.055 "write_zeroes": true, 00:31:22.055 "zcopy": true, 00:31:22.055 "get_zone_info": false, 00:31:22.055 "zone_management": false, 00:31:22.055 "zone_append": false, 00:31:22.055 "compare": false, 00:31:22.055 "compare_and_write": false, 00:31:22.055 "abort": true, 00:31:22.055 "seek_hole": false, 00:31:22.055 "seek_data": false, 00:31:22.055 "copy": true, 00:31:22.055 "nvme_iov_md": false 00:31:22.055 }, 00:31:22.055 "memory_domains": [ 00:31:22.055 { 00:31:22.055 "dma_device_id": "system", 00:31:22.055 "dma_device_type": 1 00:31:22.055 }, 00:31:22.055 { 00:31:22.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:22.055 "dma_device_type": 2 00:31:22.055 } 00:31:22.055 ], 00:31:22.055 "driver_specific": {} 00:31:22.055 }' 00:31:22.055 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:22.312 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:22.312 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:22.312 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:22.312 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:22.312 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:22.312 11:42:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:22.312 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:22.571 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:22.571 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:22.571 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:22.571 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:22.571 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:22.829 [2024-07-13 11:42:57.444589] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.829 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:23.088 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:23.088 "name": "Existed_Raid", 00:31:23.088 "uuid": "72fda673-ebe4-49b2-b996-e25d315a82e4", 00:31:23.088 "strip_size_kb": 64, 00:31:23.088 "state": "online", 00:31:23.088 "raid_level": "raid5f", 00:31:23.088 "superblock": false, 00:31:23.088 "num_base_bdevs": 3, 00:31:23.088 "num_base_bdevs_discovered": 2, 00:31:23.088 "num_base_bdevs_operational": 2, 00:31:23.088 "base_bdevs_list": [ 00:31:23.088 { 00:31:23.088 "name": null, 00:31:23.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.088 "is_configured": false, 00:31:23.088 "data_offset": 0, 00:31:23.088 "data_size": 65536 00:31:23.088 }, 00:31:23.088 { 00:31:23.088 "name": "BaseBdev2", 00:31:23.088 "uuid": "43832947-706c-4c5b-87f0-21bb3c471d62", 00:31:23.088 "is_configured": true, 00:31:23.088 "data_offset": 0, 00:31:23.088 "data_size": 65536 00:31:23.088 }, 00:31:23.088 { 00:31:23.088 "name": "BaseBdev3", 00:31:23.088 "uuid": "9ae31a7c-a020-4aab-8a84-f919202258b1", 00:31:23.088 "is_configured": true, 00:31:23.088 "data_offset": 0, 00:31:23.088 "data_size": 65536 00:31:23.088 } 00:31:23.088 ] 00:31:23.088 }' 00:31:23.088 11:42:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:23.088 11:42:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.654 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:23.654 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:23.654 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.654 
11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:23.913 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:23.913 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:23.913 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:24.171 [2024-07-13 11:42:58.863351] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:24.171 [2024-07-13 11:42:58.863582] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:24.429 [2024-07-13 11:42:58.926540] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:24.429 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:24.429 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:24.429 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:24.429 11:42:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.687 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:24.687 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:24.687 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:31:24.687 [2024-07-13 11:42:59.414681] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:24.687 [2024-07-13 11:42:59.414862] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:24.946 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:25.205 BaseBdev2 00:31:25.205 11:42:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:31:25.205 11:42:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev2 00:31:25.205 11:42:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:25.205 11:42:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:25.205 11:42:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:25.205 11:42:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:25.205 11:42:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:25.464 11:43:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:25.723 [ 00:31:25.723 { 00:31:25.723 "name": "BaseBdev2", 00:31:25.723 "aliases": [ 00:31:25.723 "bb236260-0b53-4380-99fd-c4e5b23e3dc0" 00:31:25.723 ], 00:31:25.723 "product_name": "Malloc disk", 00:31:25.723 "block_size": 512, 00:31:25.723 "num_blocks": 65536, 00:31:25.723 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:25.723 "assigned_rate_limits": { 00:31:25.723 "rw_ios_per_sec": 0, 00:31:25.723 "rw_mbytes_per_sec": 0, 00:31:25.723 "r_mbytes_per_sec": 0, 00:31:25.723 "w_mbytes_per_sec": 0 00:31:25.723 }, 00:31:25.723 "claimed": false, 00:31:25.723 "zoned": false, 00:31:25.723 "supported_io_types": { 00:31:25.723 "read": true, 00:31:25.723 "write": true, 00:31:25.723 "unmap": true, 00:31:25.723 "flush": true, 00:31:25.723 "reset": true, 00:31:25.723 "nvme_admin": false, 00:31:25.723 "nvme_io": false, 00:31:25.723 "nvme_io_md": false, 00:31:25.723 "write_zeroes": true, 00:31:25.723 "zcopy": true, 00:31:25.723 "get_zone_info": false, 00:31:25.723 "zone_management": false, 00:31:25.723 "zone_append": false, 00:31:25.723 "compare": false, 00:31:25.723 "compare_and_write": false, 00:31:25.723 "abort": true, 00:31:25.723 "seek_hole": false, 00:31:25.723 "seek_data": false, 00:31:25.723 "copy": true, 00:31:25.723 "nvme_iov_md": false 00:31:25.723 }, 00:31:25.723 "memory_domains": [ 00:31:25.723 { 00:31:25.723 "dma_device_id": "system", 00:31:25.723 "dma_device_type": 1 00:31:25.723 }, 00:31:25.723 { 00:31:25.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:25.723 "dma_device_type": 2 00:31:25.723 } 00:31:25.723 ], 00:31:25.723 "driver_specific": {} 00:31:25.723 } 00:31:25.723 ] 00:31:25.723 11:43:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:25.723 11:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:25.723 11:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:25.723 11:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:25.983 BaseBdev3 00:31:25.983 11:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:31:25.983 11:43:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:31:25.983 11:43:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:25.983 11:43:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:25.983 11:43:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:25.983 11:43:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:25.983 11:43:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:25.983 11:43:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:26.241 [ 00:31:26.241 { 00:31:26.241 "name": "BaseBdev3", 00:31:26.241 "aliases": [ 00:31:26.241 "a21ab7da-d109-420b-b409-513013c5444a" 00:31:26.241 ], 00:31:26.241 "product_name": "Malloc disk", 00:31:26.241 "block_size": 512, 00:31:26.241 "num_blocks": 65536, 00:31:26.241 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:26.241 "assigned_rate_limits": { 00:31:26.241 "rw_ios_per_sec": 0, 00:31:26.241 "rw_mbytes_per_sec": 0, 00:31:26.241 "r_mbytes_per_sec": 0, 00:31:26.242 "w_mbytes_per_sec": 0 00:31:26.242 }, 00:31:26.242 "claimed": false, 00:31:26.242 "zoned": false, 00:31:26.242 "supported_io_types": { 00:31:26.242 "read": true, 00:31:26.242 "write": true, 00:31:26.242 "unmap": true, 00:31:26.242 "flush": true, 00:31:26.242 "reset": true, 00:31:26.242 "nvme_admin": false, 00:31:26.242 "nvme_io": false, 00:31:26.242 "nvme_io_md": false, 00:31:26.242 "write_zeroes": true, 00:31:26.242 "zcopy": true, 00:31:26.242 "get_zone_info": false, 00:31:26.242 "zone_management": false, 00:31:26.242 "zone_append": false, 00:31:26.242 "compare": false, 00:31:26.242 "compare_and_write": false, 00:31:26.242 "abort": true, 00:31:26.242 "seek_hole": false, 00:31:26.242 "seek_data": false, 00:31:26.242 "copy": true, 00:31:26.242 "nvme_iov_md": false 00:31:26.242 }, 00:31:26.242 "memory_domains": [ 00:31:26.242 { 00:31:26.242 "dma_device_id": "system", 00:31:26.242 "dma_device_type": 1 00:31:26.242 }, 00:31:26.242 { 00:31:26.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:26.242 "dma_device_type": 2 00:31:26.242 } 00:31:26.242 ], 00:31:26.242 "driver_specific": {} 00:31:26.242 } 00:31:26.242 ] 00:31:26.242 11:43:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:26.242 11:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:26.242 11:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:26.242 11:43:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:26.500 [2024-07-13 11:43:01.088196] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:26.500 [2024-07-13 11:43:01.088400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:26.500 [2024-07-13 11:43:01.088556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:26.500 [2024-07-13 11:43:01.090653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:26.500 11:43:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.500 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:26.759 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:26.759 "name": "Existed_Raid", 00:31:26.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.759 "strip_size_kb": 64, 00:31:26.759 "state": "configuring", 00:31:26.759 "raid_level": "raid5f", 00:31:26.759 "superblock": false, 00:31:26.759 "num_base_bdevs": 3, 00:31:26.759 "num_base_bdevs_discovered": 2, 00:31:26.759 "num_base_bdevs_operational": 3, 00:31:26.759 "base_bdevs_list": [ 00:31:26.759 { 00:31:26.759 "name": "BaseBdev1", 00:31:26.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.759 "is_configured": false, 00:31:26.759 "data_offset": 0, 00:31:26.759 "data_size": 0 00:31:26.759 }, 00:31:26.759 { 00:31:26.759 "name": "BaseBdev2", 00:31:26.759 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:26.759 "is_configured": true, 00:31:26.759 "data_offset": 0, 00:31:26.759 "data_size": 65536 00:31:26.759 }, 00:31:26.759 { 00:31:26.759 "name": "BaseBdev3", 00:31:26.759 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:26.759 "is_configured": true, 00:31:26.759 "data_offset": 0, 00:31:26.759 "data_size": 65536 00:31:26.759 } 00:31:26.759 ] 00:31:26.759 }' 00:31:26.759 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:26.759 11:43:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.325 11:43:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:27.583 [2024-07-13 11:43:02.128360] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:27.583 
11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.583 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:27.842 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:27.842 "name": "Existed_Raid", 00:31:27.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.842 "strip_size_kb": 64, 00:31:27.842 "state": "configuring", 00:31:27.842 "raid_level": "raid5f", 00:31:27.842 "superblock": false, 00:31:27.842 "num_base_bdevs": 3, 00:31:27.842 "num_base_bdevs_discovered": 1, 00:31:27.842 "num_base_bdevs_operational": 3, 00:31:27.842 "base_bdevs_list": [ 00:31:27.842 { 00:31:27.842 "name": "BaseBdev1", 00:31:27.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:27.842 "is_configured": false, 00:31:27.842 "data_offset": 0, 00:31:27.842 "data_size": 0 00:31:27.842 }, 00:31:27.842 { 00:31:27.842 "name": null, 00:31:27.842 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:27.842 "is_configured": false, 00:31:27.842 "data_offset": 0, 00:31:27.842 "data_size": 65536 00:31:27.842 }, 00:31:27.842 { 00:31:27.842 "name": "BaseBdev3", 00:31:27.842 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:27.842 "is_configured": true, 00:31:27.842 "data_offset": 0, 00:31:27.842 "data_size": 65536 00:31:27.842 } 00:31:27.842 ] 00:31:27.842 }' 00:31:27.842 11:43:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:27.842 11:43:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.409 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.409 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:28.666 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:31:28.666 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:28.666 [2024-07-13 11:43:03.400058] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:28.666 BaseBdev1 00:31:28.666 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:31:28.666 11:43:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:31:28.666 11:43:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:28.666 11:43:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:28.666 11:43:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:28.666 11:43:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:28.666 11:43:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:28.924 11:43:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:29.182 [ 00:31:29.182 { 00:31:29.183 "name": "BaseBdev1", 00:31:29.183 "aliases": [ 00:31:29.183 "feb758eb-3341-41c6-9876-b7f988e1e70c" 00:31:29.183 ], 00:31:29.183 "product_name": "Malloc disk", 00:31:29.183 "block_size": 512, 00:31:29.183 "num_blocks": 65536, 00:31:29.183 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:29.183 "assigned_rate_limits": { 00:31:29.183 "rw_ios_per_sec": 0, 00:31:29.183 "rw_mbytes_per_sec": 0, 00:31:29.183 "r_mbytes_per_sec": 0, 00:31:29.183 "w_mbytes_per_sec": 0 00:31:29.183 }, 00:31:29.183 "claimed": true, 00:31:29.183 "claim_type": "exclusive_write", 00:31:29.183 "zoned": false, 00:31:29.183 "supported_io_types": { 00:31:29.183 "read": true, 00:31:29.183 "write": true, 00:31:29.183 "unmap": true, 00:31:29.183 "flush": true, 00:31:29.183 "reset": true, 00:31:29.183 "nvme_admin": false, 00:31:29.183 "nvme_io": false, 00:31:29.183 "nvme_io_md": false, 00:31:29.183 "write_zeroes": true, 00:31:29.183 "zcopy": true, 00:31:29.183 "get_zone_info": false, 00:31:29.183 "zone_management": false, 00:31:29.183 "zone_append": false, 00:31:29.183 "compare": false, 00:31:29.183 "compare_and_write": false, 00:31:29.183 "abort": true, 00:31:29.183 "seek_hole": false, 00:31:29.183 "seek_data": false, 00:31:29.183 "copy": true, 00:31:29.183 "nvme_iov_md": false 00:31:29.183 }, 00:31:29.183 "memory_domains": [ 00:31:29.183 { 00:31:29.183 "dma_device_id": "system", 00:31:29.183 "dma_device_type": 1 00:31:29.183 }, 00:31:29.183 { 00:31:29.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:29.183 "dma_device_type": 2 00:31:29.183 } 00:31:29.183 ], 00:31:29.183 "driver_specific": {} 00:31:29.183 } 00:31:29.183 ] 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.183 11:43:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:29.442 11:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:29.442 "name": "Existed_Raid", 00:31:29.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:29.442 "strip_size_kb": 64, 00:31:29.442 "state": "configuring", 00:31:29.442 "raid_level": "raid5f", 00:31:29.442 "superblock": false, 00:31:29.442 "num_base_bdevs": 3, 00:31:29.442 "num_base_bdevs_discovered": 2, 00:31:29.442 "num_base_bdevs_operational": 3, 00:31:29.442 "base_bdevs_list": [ 00:31:29.442 { 00:31:29.442 "name": "BaseBdev1", 00:31:29.442 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:29.442 "is_configured": true, 00:31:29.442 "data_offset": 0, 00:31:29.442 "data_size": 65536 00:31:29.442 }, 00:31:29.442 { 00:31:29.442 "name": null, 00:31:29.442 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:29.442 "is_configured": false, 00:31:29.442 "data_offset": 0, 00:31:29.442 "data_size": 65536 00:31:29.442 }, 00:31:29.442 { 00:31:29.442 "name": "BaseBdev3", 00:31:29.442 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:29.442 "is_configured": true, 00:31:29.442 "data_offset": 0, 00:31:29.442 "data_size": 65536 00:31:29.442 } 00:31:29.442 ] 00:31:29.442 }' 00:31:29.442 11:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:29.442 11:43:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:30.377 11:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.377 11:43:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:30.377 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:31:30.377 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:31:30.634 [2024-07-13 11:43:05.324450] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.634 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:30.891 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:30.891 "name": "Existed_Raid", 00:31:30.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:30.891 "strip_size_kb": 64, 00:31:30.891 "state": "configuring", 00:31:30.891 "raid_level": "raid5f", 00:31:30.891 "superblock": false, 00:31:30.891 "num_base_bdevs": 3, 00:31:30.891 "num_base_bdevs_discovered": 1, 00:31:30.891 "num_base_bdevs_operational": 3, 00:31:30.891 "base_bdevs_list": [ 00:31:30.891 { 00:31:30.891 "name": "BaseBdev1", 00:31:30.891 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:30.891 "is_configured": true, 00:31:30.891 "data_offset": 0, 00:31:30.891 "data_size": 65536 00:31:30.891 }, 00:31:30.891 { 00:31:30.891 "name": null, 00:31:30.891 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:30.891 "is_configured": false, 00:31:30.891 "data_offset": 0, 00:31:30.891 "data_size": 65536 00:31:30.891 }, 00:31:30.891 { 00:31:30.891 "name": null, 00:31:30.891 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:30.891 "is_configured": false, 00:31:30.891 "data_offset": 0, 00:31:30.891 "data_size": 65536 00:31:30.891 } 00:31:30.891 ] 00:31:30.891 }' 00:31:30.891 11:43:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:30.891 11:43:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.457 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.457 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:31.715 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:31:31.715 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:31:31.973 [2024-07-13 11:43:06.604745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.973 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:32.231 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:32.231 "name": "Existed_Raid", 00:31:32.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.231 "strip_size_kb": 64, 00:31:32.231 "state": "configuring", 00:31:32.231 "raid_level": "raid5f", 00:31:32.231 "superblock": false, 00:31:32.231 "num_base_bdevs": 3, 00:31:32.231 "num_base_bdevs_discovered": 2, 00:31:32.231 "num_base_bdevs_operational": 3, 00:31:32.231 "base_bdevs_list": [ 00:31:32.231 { 00:31:32.231 "name": "BaseBdev1", 00:31:32.231 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:32.231 "is_configured": true, 00:31:32.231 "data_offset": 0, 00:31:32.231 "data_size": 65536 00:31:32.231 }, 00:31:32.231 { 00:31:32.231 "name": null, 00:31:32.231 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:32.231 "is_configured": false, 00:31:32.231 "data_offset": 0, 00:31:32.231 "data_size": 65536 00:31:32.231 }, 00:31:32.231 { 00:31:32.231 "name": "BaseBdev3", 00:31:32.231 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:32.231 "is_configured": true, 00:31:32.231 "data_offset": 0, 00:31:32.231 "data_size": 65536 00:31:32.231 } 00:31:32.231 ] 00:31:32.231 }' 00:31:32.231 11:43:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:32.231 11:43:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:32.797 11:43:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.797 11:43:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:31:33.055 11:43:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:31:33.055 11:43:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:33.314 [2024-07-13 11:43:08.025070] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:33.573 11:43:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:33.573 "name": "Existed_Raid", 00:31:33.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:33.573 "strip_size_kb": 64, 00:31:33.573 "state": "configuring", 00:31:33.573 "raid_level": "raid5f", 00:31:33.573 "superblock": false, 00:31:33.573 "num_base_bdevs": 3, 00:31:33.573 "num_base_bdevs_discovered": 1, 00:31:33.573 "num_base_bdevs_operational": 3, 00:31:33.573 "base_bdevs_list": [ 00:31:33.573 { 00:31:33.573 "name": null, 00:31:33.573 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:33.573 "is_configured": false, 00:31:33.573 "data_offset": 0, 00:31:33.573 "data_size": 65536 00:31:33.573 }, 00:31:33.573 { 00:31:33.573 "name": null, 00:31:33.573 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:33.573 "is_configured": false, 00:31:33.573 "data_offset": 0, 00:31:33.573 "data_size": 65536 00:31:33.573 }, 00:31:33.573 { 00:31:33.573 "name": "BaseBdev3", 00:31:33.573 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:33.573 "is_configured": true, 00:31:33.573 "data_offset": 0, 00:31:33.573 "data_size": 65536 00:31:33.573 } 00:31:33.573 ] 00:31:33.573 }' 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:33.573 11:43:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.509 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.509 11:43:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:34.509 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:31:34.509 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:31:34.767 [2024-07-13 11:43:09.436706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:34.767 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:34.767 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:34.767 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:34.768 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:34.768 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:34.768 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:34.768 
11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:34.768 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:34.768 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:34.768 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:34.768 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.768 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:35.026 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:35.026 "name": "Existed_Raid", 00:31:35.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.026 "strip_size_kb": 64, 00:31:35.026 "state": "configuring", 00:31:35.026 "raid_level": "raid5f", 00:31:35.026 "superblock": false, 00:31:35.026 "num_base_bdevs": 3, 00:31:35.026 "num_base_bdevs_discovered": 2, 00:31:35.026 "num_base_bdevs_operational": 3, 00:31:35.026 "base_bdevs_list": [ 00:31:35.026 { 00:31:35.026 "name": null, 00:31:35.026 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:35.026 "is_configured": false, 00:31:35.026 "data_offset": 0, 00:31:35.026 "data_size": 65536 00:31:35.026 }, 00:31:35.026 { 00:31:35.026 "name": "BaseBdev2", 00:31:35.026 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:35.026 "is_configured": true, 00:31:35.026 "data_offset": 0, 00:31:35.026 "data_size": 65536 00:31:35.026 }, 00:31:35.026 { 00:31:35.026 "name": "BaseBdev3", 00:31:35.026 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:35.026 "is_configured": true, 00:31:35.026 "data_offset": 0, 00:31:35.026 "data_size": 65536 00:31:35.026 } 00:31:35.026 ] 00:31:35.026 }' 00:31:35.026 11:43:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:35.026 11:43:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:35.613 11:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:35.613 11:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.871 11:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:31:35.871 11:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.871 11:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:31:36.129 11:43:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u feb758eb-3341-41c6-9876-b7f988e1e70c 00:31:36.388 [2024-07-13 11:43:11.066461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:31:36.388 [2024-07-13 11:43:11.066654] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:31:36.388 [2024-07-13 11:43:11.066692] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:36.388 [2024-07-13 
11:43:11.066947] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:36.388 [2024-07-13 11:43:11.071134] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:31:36.388 [2024-07-13 11:43:11.071298] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:31:36.388 [2024-07-13 11:43:11.071628] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:36.388 NewBaseBdev 00:31:36.388 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:31:36.388 11:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:31:36.388 11:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:36.388 11:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:31:36.388 11:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:36.388 11:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:36.388 11:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:36.688 11:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:31:36.966 [ 00:31:36.966 { 00:31:36.966 "name": "NewBaseBdev", 00:31:36.966 "aliases": [ 00:31:36.966 "feb758eb-3341-41c6-9876-b7f988e1e70c" 00:31:36.966 ], 00:31:36.966 "product_name": "Malloc disk", 00:31:36.966 "block_size": 512, 00:31:36.966 "num_blocks": 65536, 00:31:36.966 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:36.966 "assigned_rate_limits": { 00:31:36.966 "rw_ios_per_sec": 0, 00:31:36.966 "rw_mbytes_per_sec": 0, 00:31:36.966 "r_mbytes_per_sec": 0, 00:31:36.966 "w_mbytes_per_sec": 0 00:31:36.966 }, 00:31:36.966 "claimed": true, 00:31:36.966 "claim_type": "exclusive_write", 00:31:36.966 "zoned": false, 00:31:36.966 "supported_io_types": { 00:31:36.966 "read": true, 00:31:36.966 "write": true, 00:31:36.966 "unmap": true, 00:31:36.966 "flush": true, 00:31:36.966 "reset": true, 00:31:36.966 "nvme_admin": false, 00:31:36.966 "nvme_io": false, 00:31:36.966 "nvme_io_md": false, 00:31:36.966 "write_zeroes": true, 00:31:36.966 "zcopy": true, 00:31:36.966 "get_zone_info": false, 00:31:36.966 "zone_management": false, 00:31:36.966 "zone_append": false, 00:31:36.966 "compare": false, 00:31:36.966 "compare_and_write": false, 00:31:36.966 "abort": true, 00:31:36.966 "seek_hole": false, 00:31:36.966 "seek_data": false, 00:31:36.966 "copy": true, 00:31:36.966 "nvme_iov_md": false 00:31:36.966 }, 00:31:36.966 "memory_domains": [ 00:31:36.966 { 00:31:36.966 "dma_device_id": "system", 00:31:36.966 "dma_device_type": 1 00:31:36.966 }, 00:31:36.966 { 00:31:36.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:36.966 "dma_device_type": 2 00:31:36.966 } 00:31:36.966 ], 00:31:36.966 "driver_specific": {} 00:31:36.966 } 00:31:36.966 ] 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:36.966 11:43:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.966 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:37.226 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:37.226 "name": "Existed_Raid", 00:31:37.226 "uuid": "91b7ab21-a8bf-4d43-a6ea-ecd37c10bf1e", 00:31:37.226 "strip_size_kb": 64, 00:31:37.226 "state": "online", 00:31:37.226 "raid_level": "raid5f", 00:31:37.226 "superblock": false, 00:31:37.226 "num_base_bdevs": 3, 00:31:37.226 "num_base_bdevs_discovered": 3, 00:31:37.226 "num_base_bdevs_operational": 3, 00:31:37.226 "base_bdevs_list": [ 00:31:37.226 { 00:31:37.226 "name": "NewBaseBdev", 00:31:37.226 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:37.226 "is_configured": true, 00:31:37.226 "data_offset": 0, 00:31:37.226 "data_size": 65536 00:31:37.226 }, 00:31:37.226 { 00:31:37.226 "name": "BaseBdev2", 00:31:37.226 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:37.226 "is_configured": true, 00:31:37.226 "data_offset": 0, 00:31:37.226 "data_size": 65536 00:31:37.226 }, 00:31:37.226 { 00:31:37.226 "name": "BaseBdev3", 00:31:37.226 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:37.226 "is_configured": true, 00:31:37.226 "data_offset": 0, 00:31:37.226 "data_size": 65536 00:31:37.226 } 00:31:37.226 ] 00:31:37.226 }' 00:31:37.226 11:43:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:37.226 11:43:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:37.791 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:31:37.791 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:37.791 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:37.791 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:37.791 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:37.791 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:37.791 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:37.791 11:43:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:38.049 [2024-07-13 11:43:12.564912] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:38.049 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:38.049 "name": "Existed_Raid", 00:31:38.049 "aliases": [ 00:31:38.049 "91b7ab21-a8bf-4d43-a6ea-ecd37c10bf1e" 00:31:38.049 ], 00:31:38.049 "product_name": "Raid Volume", 00:31:38.049 "block_size": 512, 00:31:38.049 "num_blocks": 131072, 00:31:38.049 "uuid": "91b7ab21-a8bf-4d43-a6ea-ecd37c10bf1e", 00:31:38.049 "assigned_rate_limits": { 00:31:38.049 "rw_ios_per_sec": 0, 00:31:38.049 "rw_mbytes_per_sec": 0, 00:31:38.049 "r_mbytes_per_sec": 0, 00:31:38.049 "w_mbytes_per_sec": 0 00:31:38.049 }, 00:31:38.049 "claimed": false, 00:31:38.049 "zoned": false, 00:31:38.049 "supported_io_types": { 00:31:38.049 "read": true, 00:31:38.049 "write": true, 00:31:38.049 "unmap": false, 00:31:38.049 "flush": false, 00:31:38.049 "reset": true, 00:31:38.049 "nvme_admin": false, 00:31:38.049 "nvme_io": false, 00:31:38.049 "nvme_io_md": false, 00:31:38.049 "write_zeroes": true, 00:31:38.049 "zcopy": false, 00:31:38.049 "get_zone_info": false, 00:31:38.049 "zone_management": false, 00:31:38.049 "zone_append": false, 00:31:38.049 "compare": false, 00:31:38.049 "compare_and_write": false, 00:31:38.049 "abort": false, 00:31:38.049 "seek_hole": false, 00:31:38.049 "seek_data": false, 00:31:38.049 "copy": false, 00:31:38.049 "nvme_iov_md": false 00:31:38.049 }, 00:31:38.049 "driver_specific": { 00:31:38.049 "raid": { 00:31:38.049 "uuid": "91b7ab21-a8bf-4d43-a6ea-ecd37c10bf1e", 00:31:38.049 "strip_size_kb": 64, 00:31:38.049 "state": "online", 00:31:38.049 "raid_level": "raid5f", 00:31:38.049 "superblock": false, 00:31:38.049 "num_base_bdevs": 3, 00:31:38.049 "num_base_bdevs_discovered": 3, 00:31:38.049 "num_base_bdevs_operational": 3, 00:31:38.049 "base_bdevs_list": [ 00:31:38.049 { 00:31:38.049 "name": "NewBaseBdev", 00:31:38.049 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:38.049 "is_configured": true, 00:31:38.049 "data_offset": 0, 00:31:38.049 "data_size": 65536 00:31:38.049 }, 00:31:38.049 { 00:31:38.049 "name": "BaseBdev2", 00:31:38.049 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:38.049 "is_configured": true, 00:31:38.049 "data_offset": 0, 00:31:38.049 "data_size": 65536 00:31:38.049 }, 00:31:38.049 { 00:31:38.049 "name": "BaseBdev3", 00:31:38.049 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:38.049 "is_configured": true, 00:31:38.049 "data_offset": 0, 00:31:38.049 "data_size": 65536 00:31:38.049 } 00:31:38.049 ] 00:31:38.049 } 00:31:38.049 } 00:31:38.049 }' 00:31:38.049 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:38.049 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:31:38.049 BaseBdev2 00:31:38.049 BaseBdev3' 00:31:38.049 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:38.049 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:31:38.049 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:31:38.306 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:38.306 "name": "NewBaseBdev", 00:31:38.306 "aliases": [ 00:31:38.306 "feb758eb-3341-41c6-9876-b7f988e1e70c" 00:31:38.306 ], 00:31:38.306 "product_name": "Malloc disk", 00:31:38.306 "block_size": 512, 00:31:38.306 "num_blocks": 65536, 00:31:38.306 "uuid": "feb758eb-3341-41c6-9876-b7f988e1e70c", 00:31:38.306 "assigned_rate_limits": { 00:31:38.306 "rw_ios_per_sec": 0, 00:31:38.306 "rw_mbytes_per_sec": 0, 00:31:38.306 "r_mbytes_per_sec": 0, 00:31:38.306 "w_mbytes_per_sec": 0 00:31:38.306 }, 00:31:38.306 "claimed": true, 00:31:38.306 "claim_type": "exclusive_write", 00:31:38.306 "zoned": false, 00:31:38.306 "supported_io_types": { 00:31:38.306 "read": true, 00:31:38.306 "write": true, 00:31:38.306 "unmap": true, 00:31:38.306 "flush": true, 00:31:38.306 "reset": true, 00:31:38.306 "nvme_admin": false, 00:31:38.306 "nvme_io": false, 00:31:38.306 "nvme_io_md": false, 00:31:38.306 "write_zeroes": true, 00:31:38.306 "zcopy": true, 00:31:38.306 "get_zone_info": false, 00:31:38.306 "zone_management": false, 00:31:38.306 "zone_append": false, 00:31:38.306 "compare": false, 00:31:38.306 "compare_and_write": false, 00:31:38.306 "abort": true, 00:31:38.306 "seek_hole": false, 00:31:38.306 "seek_data": false, 00:31:38.306 "copy": true, 00:31:38.306 "nvme_iov_md": false 00:31:38.306 }, 00:31:38.306 "memory_domains": [ 00:31:38.306 { 00:31:38.306 "dma_device_id": "system", 00:31:38.306 "dma_device_type": 1 00:31:38.306 }, 00:31:38.306 { 00:31:38.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:38.307 "dma_device_type": 2 00:31:38.307 } 00:31:38.307 ], 00:31:38.307 "driver_specific": {} 00:31:38.307 }' 00:31:38.307 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:38.307 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:38.307 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:38.307 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:38.307 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:38.307 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:38.307 11:43:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:38.307 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:38.564 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:38.564 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:38.564 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:38.564 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:38.564 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:38.564 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:38.564 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:38.823 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:38.823 "name": "BaseBdev2", 00:31:38.823 "aliases": [ 
00:31:38.823 "bb236260-0b53-4380-99fd-c4e5b23e3dc0" 00:31:38.823 ], 00:31:38.823 "product_name": "Malloc disk", 00:31:38.823 "block_size": 512, 00:31:38.823 "num_blocks": 65536, 00:31:38.823 "uuid": "bb236260-0b53-4380-99fd-c4e5b23e3dc0", 00:31:38.823 "assigned_rate_limits": { 00:31:38.823 "rw_ios_per_sec": 0, 00:31:38.823 "rw_mbytes_per_sec": 0, 00:31:38.823 "r_mbytes_per_sec": 0, 00:31:38.823 "w_mbytes_per_sec": 0 00:31:38.823 }, 00:31:38.823 "claimed": true, 00:31:38.823 "claim_type": "exclusive_write", 00:31:38.823 "zoned": false, 00:31:38.823 "supported_io_types": { 00:31:38.823 "read": true, 00:31:38.823 "write": true, 00:31:38.823 "unmap": true, 00:31:38.823 "flush": true, 00:31:38.823 "reset": true, 00:31:38.823 "nvme_admin": false, 00:31:38.823 "nvme_io": false, 00:31:38.823 "nvme_io_md": false, 00:31:38.823 "write_zeroes": true, 00:31:38.823 "zcopy": true, 00:31:38.823 "get_zone_info": false, 00:31:38.823 "zone_management": false, 00:31:38.823 "zone_append": false, 00:31:38.823 "compare": false, 00:31:38.823 "compare_and_write": false, 00:31:38.823 "abort": true, 00:31:38.823 "seek_hole": false, 00:31:38.823 "seek_data": false, 00:31:38.823 "copy": true, 00:31:38.823 "nvme_iov_md": false 00:31:38.823 }, 00:31:38.823 "memory_domains": [ 00:31:38.823 { 00:31:38.823 "dma_device_id": "system", 00:31:38.823 "dma_device_type": 1 00:31:38.823 }, 00:31:38.823 { 00:31:38.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:38.823 "dma_device_type": 2 00:31:38.823 } 00:31:38.823 ], 00:31:38.823 "driver_specific": {} 00:31:38.823 }' 00:31:38.823 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:38.823 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:38.823 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:38.823 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:38.823 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:39.081 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:39.339 11:43:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:39.339 "name": "BaseBdev3", 00:31:39.339 "aliases": [ 00:31:39.339 "a21ab7da-d109-420b-b409-513013c5444a" 00:31:39.339 ], 00:31:39.339 "product_name": "Malloc disk", 00:31:39.339 "block_size": 512, 00:31:39.339 
"num_blocks": 65536, 00:31:39.339 "uuid": "a21ab7da-d109-420b-b409-513013c5444a", 00:31:39.339 "assigned_rate_limits": { 00:31:39.339 "rw_ios_per_sec": 0, 00:31:39.339 "rw_mbytes_per_sec": 0, 00:31:39.339 "r_mbytes_per_sec": 0, 00:31:39.339 "w_mbytes_per_sec": 0 00:31:39.339 }, 00:31:39.339 "claimed": true, 00:31:39.339 "claim_type": "exclusive_write", 00:31:39.339 "zoned": false, 00:31:39.339 "supported_io_types": { 00:31:39.339 "read": true, 00:31:39.339 "write": true, 00:31:39.339 "unmap": true, 00:31:39.339 "flush": true, 00:31:39.339 "reset": true, 00:31:39.339 "nvme_admin": false, 00:31:39.339 "nvme_io": false, 00:31:39.339 "nvme_io_md": false, 00:31:39.339 "write_zeroes": true, 00:31:39.339 "zcopy": true, 00:31:39.339 "get_zone_info": false, 00:31:39.339 "zone_management": false, 00:31:39.339 "zone_append": false, 00:31:39.339 "compare": false, 00:31:39.339 "compare_and_write": false, 00:31:39.339 "abort": true, 00:31:39.339 "seek_hole": false, 00:31:39.339 "seek_data": false, 00:31:39.339 "copy": true, 00:31:39.339 "nvme_iov_md": false 00:31:39.339 }, 00:31:39.339 "memory_domains": [ 00:31:39.339 { 00:31:39.339 "dma_device_id": "system", 00:31:39.339 "dma_device_type": 1 00:31:39.339 }, 00:31:39.339 { 00:31:39.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:39.339 "dma_device_type": 2 00:31:39.339 } 00:31:39.339 ], 00:31:39.339 "driver_specific": {} 00:31:39.339 }' 00:31:39.339 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:39.339 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:39.597 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:39.597 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:39.597 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:39.597 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:39.597 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:39.597 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:39.597 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:39.597 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:39.854 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:39.854 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:39.854 11:43:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:40.112 [2024-07-13 11:43:14.665160] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:40.113 [2024-07-13 11:43:14.665318] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:40.113 [2024-07-13 11:43:14.665489] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:40.113 [2024-07-13 11:43:14.665849] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:40.113 [2024-07-13 11:43:14.665985] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:31:40.113 11:43:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 150957 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 150957 ']' 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 150957 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 150957 00:31:40.113 killing process with pid 150957 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 150957' 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 150957 00:31:40.113 11:43:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 150957 00:31:40.113 [2024-07-13 11:43:14.695780] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:40.371 [2024-07-13 11:43:14.883525] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:41.307 ************************************ 00:31:41.307 END TEST raid5f_state_function_test 00:31:41.307 ************************************ 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:31:41.307 00:31:41.307 real 0m29.477s 00:31:41.307 user 0m55.529s 00:31:41.307 sys 0m3.113s 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:31:41.307 11:43:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:41.307 11:43:15 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:31:41.307 11:43:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:41.307 11:43:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:41.307 11:43:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:41.307 ************************************ 00:31:41.307 START TEST raid5f_state_function_test_sb 00:31:41.307 ************************************ 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 3 true 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
(( i = 1 )) 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=151976 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 151976' 00:31:41.307 Process raid pid: 151976 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 151976 /var/tmp/spdk-raid.sock 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 151976 ']' 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:41.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
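For reference, the RPC sequence this test drives against the socket above can be reproduced by hand once the bdev_svc target is listening; the following is a minimal bash sketch assembled only from rpc.py calls that appear later in this log (socket path, bdev names, sizes and options are taken from the log itself; the rpc shell variable is just a convenience for the sketch, not part of the test scripts):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # three 32 MiB malloc base bdevs (65536 blocks of 512 bytes, matching the bdev dumps below)
  for i in 1 2 3; do $rpc bdev_malloc_create 32 512 -b BaseBdev$i; done
  # raid5f volume with a 64 KiB strip size (-z 64) and an on-disk superblock (-s)
  $rpc bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # inspect the assembled array; the test asserts on fields such as .state, .raid_level and base_bdevs_list
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
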
00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:41.307 11:43:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.307 [2024-07-13 11:43:15.926726] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:31:41.307 [2024-07-13 11:43:15.927129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:41.566 [2024-07-13 11:43:16.100900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.566 [2024-07-13 11:43:16.315453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.825 [2024-07-13 11:43:16.505715] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:42.083 11:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:42.083 11:43:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:31:42.083 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:42.341 [2024-07-13 11:43:16.979540] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:42.341 [2024-07-13 11:43:16.979729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:42.341 [2024-07-13 11:43:16.979832] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:42.341 [2024-07-13 11:43:16.979889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:42.341 [2024-07-13 11:43:16.979972] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:42.341 [2024-07-13 11:43:16.980022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.341 11:43:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:42.599 11:43:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:42.599 "name": "Existed_Raid", 00:31:42.599 "uuid": "c6f5185f-a18c-4d70-9b47-7c42ed1a2663", 00:31:42.599 "strip_size_kb": 64, 00:31:42.599 "state": "configuring", 00:31:42.599 "raid_level": "raid5f", 00:31:42.599 "superblock": true, 00:31:42.599 "num_base_bdevs": 3, 00:31:42.599 "num_base_bdevs_discovered": 0, 00:31:42.599 "num_base_bdevs_operational": 3, 00:31:42.599 "base_bdevs_list": [ 00:31:42.599 { 00:31:42.599 "name": "BaseBdev1", 00:31:42.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.599 "is_configured": false, 00:31:42.599 "data_offset": 0, 00:31:42.599 "data_size": 0 00:31:42.599 }, 00:31:42.599 { 00:31:42.599 "name": "BaseBdev2", 00:31:42.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.599 "is_configured": false, 00:31:42.599 "data_offset": 0, 00:31:42.599 "data_size": 0 00:31:42.599 }, 00:31:42.599 { 00:31:42.599 "name": "BaseBdev3", 00:31:42.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:42.599 "is_configured": false, 00:31:42.599 "data_offset": 0, 00:31:42.599 "data_size": 0 00:31:42.599 } 00:31:42.599 ] 00:31:42.599 }' 00:31:42.599 11:43:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:42.599 11:43:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:43.166 11:43:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:43.425 [2024-07-13 11:43:18.088323] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:43.425 [2024-07-13 11:43:18.088467] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:31:43.425 11:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:43.683 [2024-07-13 11:43:18.292386] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:43.683 [2024-07-13 11:43:18.292549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:43.683 [2024-07-13 11:43:18.292640] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:43.683 [2024-07-13 11:43:18.292692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:43.683 [2024-07-13 11:43:18.292876] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:43.683 [2024-07-13 11:43:18.292948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:43.683 11:43:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:43.941 [2024-07-13 11:43:18.522089] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:43.941 BaseBdev1 00:31:43.941 11:43:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:43.941 11:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:31:43.941 11:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:43.941 11:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:31:43.941 11:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:43.941 11:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:43.941 11:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:44.200 11:43:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:44.458 [ 00:31:44.458 { 00:31:44.458 "name": "BaseBdev1", 00:31:44.458 "aliases": [ 00:31:44.458 "171f1849-c30d-4571-a754-de3c809a1db1" 00:31:44.458 ], 00:31:44.458 "product_name": "Malloc disk", 00:31:44.458 "block_size": 512, 00:31:44.458 "num_blocks": 65536, 00:31:44.458 "uuid": "171f1849-c30d-4571-a754-de3c809a1db1", 00:31:44.458 "assigned_rate_limits": { 00:31:44.458 "rw_ios_per_sec": 0, 00:31:44.458 "rw_mbytes_per_sec": 0, 00:31:44.458 "r_mbytes_per_sec": 0, 00:31:44.458 "w_mbytes_per_sec": 0 00:31:44.458 }, 00:31:44.458 "claimed": true, 00:31:44.458 "claim_type": "exclusive_write", 00:31:44.458 "zoned": false, 00:31:44.458 "supported_io_types": { 00:31:44.458 "read": true, 00:31:44.458 "write": true, 00:31:44.458 "unmap": true, 00:31:44.458 "flush": true, 00:31:44.458 "reset": true, 00:31:44.458 "nvme_admin": false, 00:31:44.458 "nvme_io": false, 00:31:44.458 "nvme_io_md": false, 00:31:44.458 "write_zeroes": true, 00:31:44.458 "zcopy": true, 00:31:44.458 "get_zone_info": false, 00:31:44.458 "zone_management": false, 00:31:44.458 "zone_append": false, 00:31:44.458 "compare": false, 00:31:44.458 "compare_and_write": false, 00:31:44.458 "abort": true, 00:31:44.458 "seek_hole": false, 00:31:44.458 "seek_data": false, 00:31:44.458 "copy": true, 00:31:44.458 "nvme_iov_md": false 00:31:44.458 }, 00:31:44.458 "memory_domains": [ 00:31:44.458 { 00:31:44.458 "dma_device_id": "system", 00:31:44.458 "dma_device_type": 1 00:31:44.458 }, 00:31:44.458 { 00:31:44.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:44.458 "dma_device_type": 2 00:31:44.458 } 00:31:44.458 ], 00:31:44.458 "driver_specific": {} 00:31:44.458 } 00:31:44.458 ] 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:44.459 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:44.717 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:44.717 "name": "Existed_Raid", 00:31:44.717 "uuid": "465b9fbf-6661-41cc-86a6-d3b1b6251744", 00:31:44.717 "strip_size_kb": 64, 00:31:44.717 "state": "configuring", 00:31:44.717 "raid_level": "raid5f", 00:31:44.717 "superblock": true, 00:31:44.717 "num_base_bdevs": 3, 00:31:44.717 "num_base_bdevs_discovered": 1, 00:31:44.717 "num_base_bdevs_operational": 3, 00:31:44.717 "base_bdevs_list": [ 00:31:44.717 { 00:31:44.717 "name": "BaseBdev1", 00:31:44.717 "uuid": "171f1849-c30d-4571-a754-de3c809a1db1", 00:31:44.717 "is_configured": true, 00:31:44.717 "data_offset": 2048, 00:31:44.717 "data_size": 63488 00:31:44.717 }, 00:31:44.717 { 00:31:44.717 "name": "BaseBdev2", 00:31:44.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.717 "is_configured": false, 00:31:44.717 "data_offset": 0, 00:31:44.717 "data_size": 0 00:31:44.717 }, 00:31:44.717 { 00:31:44.717 "name": "BaseBdev3", 00:31:44.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:44.717 "is_configured": false, 00:31:44.717 "data_offset": 0, 00:31:44.717 "data_size": 0 00:31:44.717 } 00:31:44.718 ] 00:31:44.718 }' 00:31:44.718 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:44.718 11:43:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:45.284 11:43:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:45.543 [2024-07-13 11:43:20.106414] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:45.543 [2024-07-13 11:43:20.106580] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:31:45.543 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:45.802 [2024-07-13 11:43:20.298489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:45.802 [2024-07-13 11:43:20.300567] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:45.802 [2024-07-13 11:43:20.300772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:45.802 [2024-07-13 11:43:20.300900] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:31:45.802 [2024-07-13 11:43:20.301037] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:45.802 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:45.803 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.803 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:45.803 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:45.803 "name": "Existed_Raid", 00:31:45.803 "uuid": "2dacc7ce-35c2-49d4-81b6-98589bd9e086", 00:31:45.803 "strip_size_kb": 64, 00:31:45.803 "state": "configuring", 00:31:45.803 "raid_level": "raid5f", 00:31:45.803 "superblock": true, 00:31:45.803 "num_base_bdevs": 3, 00:31:45.803 "num_base_bdevs_discovered": 1, 00:31:45.803 "num_base_bdevs_operational": 3, 00:31:45.803 "base_bdevs_list": [ 00:31:45.803 { 00:31:45.803 "name": "BaseBdev1", 00:31:45.803 "uuid": "171f1849-c30d-4571-a754-de3c809a1db1", 00:31:45.803 "is_configured": true, 00:31:45.803 "data_offset": 2048, 00:31:45.803 "data_size": 63488 00:31:45.803 }, 00:31:45.803 { 00:31:45.803 "name": "BaseBdev2", 00:31:45.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.803 "is_configured": false, 00:31:45.803 "data_offset": 0, 00:31:45.803 "data_size": 0 00:31:45.803 }, 00:31:45.803 { 00:31:45.803 "name": "BaseBdev3", 00:31:45.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:45.803 "is_configured": false, 00:31:45.803 "data_offset": 0, 00:31:45.803 "data_size": 0 00:31:45.803 } 00:31:45.803 ] 00:31:45.803 }' 00:31:45.803 11:43:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:45.803 11:43:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.739 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:46.739 [2024-07-13 11:43:21.422079] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:46.739 BaseBdev2 00:31:46.739 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:46.739 11:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:46.739 11:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:46.739 11:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:31:46.739 11:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:46.739 11:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:46.739 11:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:46.998 11:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:47.256 [ 00:31:47.256 { 00:31:47.256 "name": "BaseBdev2", 00:31:47.257 "aliases": [ 00:31:47.257 "14234821-b893-439a-a833-d16e504abc3e" 00:31:47.257 ], 00:31:47.257 "product_name": "Malloc disk", 00:31:47.257 "block_size": 512, 00:31:47.257 "num_blocks": 65536, 00:31:47.257 "uuid": "14234821-b893-439a-a833-d16e504abc3e", 00:31:47.257 "assigned_rate_limits": { 00:31:47.257 "rw_ios_per_sec": 0, 00:31:47.257 "rw_mbytes_per_sec": 0, 00:31:47.257 "r_mbytes_per_sec": 0, 00:31:47.257 "w_mbytes_per_sec": 0 00:31:47.257 }, 00:31:47.257 "claimed": true, 00:31:47.257 "claim_type": "exclusive_write", 00:31:47.257 "zoned": false, 00:31:47.257 "supported_io_types": { 00:31:47.257 "read": true, 00:31:47.257 "write": true, 00:31:47.257 "unmap": true, 00:31:47.257 "flush": true, 00:31:47.257 "reset": true, 00:31:47.257 "nvme_admin": false, 00:31:47.257 "nvme_io": false, 00:31:47.257 "nvme_io_md": false, 00:31:47.257 "write_zeroes": true, 00:31:47.257 "zcopy": true, 00:31:47.257 "get_zone_info": false, 00:31:47.257 "zone_management": false, 00:31:47.257 "zone_append": false, 00:31:47.257 "compare": false, 00:31:47.257 "compare_and_write": false, 00:31:47.257 "abort": true, 00:31:47.257 "seek_hole": false, 00:31:47.257 "seek_data": false, 00:31:47.257 "copy": true, 00:31:47.257 "nvme_iov_md": false 00:31:47.257 }, 00:31:47.257 "memory_domains": [ 00:31:47.257 { 00:31:47.257 "dma_device_id": "system", 00:31:47.257 "dma_device_type": 1 00:31:47.257 }, 00:31:47.257 { 00:31:47.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.257 "dma_device_type": 2 00:31:47.257 } 00:31:47.257 ], 00:31:47.257 "driver_specific": {} 00:31:47.257 } 00:31:47.257 ] 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:47.257 11:43:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.515 11:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:47.515 "name": "Existed_Raid", 00:31:47.515 "uuid": "2dacc7ce-35c2-49d4-81b6-98589bd9e086", 00:31:47.515 "strip_size_kb": 64, 00:31:47.515 "state": "configuring", 00:31:47.515 "raid_level": "raid5f", 00:31:47.515 "superblock": true, 00:31:47.515 "num_base_bdevs": 3, 00:31:47.515 "num_base_bdevs_discovered": 2, 00:31:47.515 "num_base_bdevs_operational": 3, 00:31:47.515 "base_bdevs_list": [ 00:31:47.515 { 00:31:47.515 "name": "BaseBdev1", 00:31:47.515 "uuid": "171f1849-c30d-4571-a754-de3c809a1db1", 00:31:47.515 "is_configured": true, 00:31:47.515 "data_offset": 2048, 00:31:47.515 "data_size": 63488 00:31:47.515 }, 00:31:47.515 { 00:31:47.515 "name": "BaseBdev2", 00:31:47.515 "uuid": "14234821-b893-439a-a833-d16e504abc3e", 00:31:47.515 "is_configured": true, 00:31:47.515 "data_offset": 2048, 00:31:47.515 "data_size": 63488 00:31:47.515 }, 00:31:47.515 { 00:31:47.515 "name": "BaseBdev3", 00:31:47.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.515 "is_configured": false, 00:31:47.515 "data_offset": 0, 00:31:47.515 "data_size": 0 00:31:47.515 } 00:31:47.515 ] 00:31:47.515 }' 00:31:47.515 11:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:47.515 11:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.082 11:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:48.340 [2024-07-13 11:43:22.897989] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:48.340 [2024-07-13 11:43:22.898411] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:31:48.340 [2024-07-13 11:43:22.898533] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:48.340 BaseBdev3 00:31:48.340 [2024-07-13 11:43:22.898693] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:31:48.340 [2024-07-13 11:43:22.903362] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:31:48.340 [2024-07-13 11:43:22.903530] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:31:48.340 [2024-07-13 11:43:22.903807] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:48.340 11:43:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:31:48.340 11:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:31:48.340 11:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:48.340 11:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:31:48.340 11:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:48.340 11:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:48.340 11:43:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:48.597 11:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:48.855 [ 00:31:48.855 { 00:31:48.855 "name": "BaseBdev3", 00:31:48.855 "aliases": [ 00:31:48.855 "fef78fe4-af35-4999-86e0-9043c3bfb87e" 00:31:48.855 ], 00:31:48.855 "product_name": "Malloc disk", 00:31:48.855 "block_size": 512, 00:31:48.855 "num_blocks": 65536, 00:31:48.855 "uuid": "fef78fe4-af35-4999-86e0-9043c3bfb87e", 00:31:48.855 "assigned_rate_limits": { 00:31:48.855 "rw_ios_per_sec": 0, 00:31:48.855 "rw_mbytes_per_sec": 0, 00:31:48.855 "r_mbytes_per_sec": 0, 00:31:48.855 "w_mbytes_per_sec": 0 00:31:48.855 }, 00:31:48.855 "claimed": true, 00:31:48.855 "claim_type": "exclusive_write", 00:31:48.855 "zoned": false, 00:31:48.855 "supported_io_types": { 00:31:48.855 "read": true, 00:31:48.855 "write": true, 00:31:48.855 "unmap": true, 00:31:48.855 "flush": true, 00:31:48.855 "reset": true, 00:31:48.855 "nvme_admin": false, 00:31:48.855 "nvme_io": false, 00:31:48.855 "nvme_io_md": false, 00:31:48.855 "write_zeroes": true, 00:31:48.855 "zcopy": true, 00:31:48.855 "get_zone_info": false, 00:31:48.855 "zone_management": false, 00:31:48.855 "zone_append": false, 00:31:48.855 "compare": false, 00:31:48.855 "compare_and_write": false, 00:31:48.855 "abort": true, 00:31:48.855 "seek_hole": false, 00:31:48.855 "seek_data": false, 00:31:48.855 "copy": true, 00:31:48.855 "nvme_iov_md": false 00:31:48.855 }, 00:31:48.855 "memory_domains": [ 00:31:48.855 { 00:31:48.855 "dma_device_id": "system", 00:31:48.855 "dma_device_type": 1 00:31:48.855 }, 00:31:48.855 { 00:31:48.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:48.855 "dma_device_type": 2 00:31:48.855 } 00:31:48.855 ], 00:31:48.855 "driver_specific": {} 00:31:48.855 } 00:31:48.855 ] 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=Existed_Raid 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:48.855 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:48.856 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:48.856 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:48.856 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:48.856 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.856 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:49.113 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:49.113 "name": "Existed_Raid", 00:31:49.113 "uuid": "2dacc7ce-35c2-49d4-81b6-98589bd9e086", 00:31:49.113 "strip_size_kb": 64, 00:31:49.113 "state": "online", 00:31:49.113 "raid_level": "raid5f", 00:31:49.113 "superblock": true, 00:31:49.113 "num_base_bdevs": 3, 00:31:49.113 "num_base_bdevs_discovered": 3, 00:31:49.113 "num_base_bdevs_operational": 3, 00:31:49.113 "base_bdevs_list": [ 00:31:49.113 { 00:31:49.113 "name": "BaseBdev1", 00:31:49.113 "uuid": "171f1849-c30d-4571-a754-de3c809a1db1", 00:31:49.113 "is_configured": true, 00:31:49.113 "data_offset": 2048, 00:31:49.113 "data_size": 63488 00:31:49.113 }, 00:31:49.113 { 00:31:49.113 "name": "BaseBdev2", 00:31:49.113 "uuid": "14234821-b893-439a-a833-d16e504abc3e", 00:31:49.113 "is_configured": true, 00:31:49.113 "data_offset": 2048, 00:31:49.113 "data_size": 63488 00:31:49.113 }, 00:31:49.113 { 00:31:49.113 "name": "BaseBdev3", 00:31:49.113 "uuid": "fef78fe4-af35-4999-86e0-9043c3bfb87e", 00:31:49.113 "is_configured": true, 00:31:49.113 "data_offset": 2048, 00:31:49.113 "data_size": 63488 00:31:49.113 } 00:31:49.113 ] 00:31:49.113 }' 00:31:49.113 11:43:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:49.113 11:43:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.679 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:49.679 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:49.679 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:49.679 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:49.679 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:49.679 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:31:49.679 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:49.679 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:49.938 [2024-07-13 11:43:24.523440] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:49.938 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:49.938 "name": "Existed_Raid", 00:31:49.938 "aliases": [ 00:31:49.938 "2dacc7ce-35c2-49d4-81b6-98589bd9e086" 00:31:49.938 ], 00:31:49.938 "product_name": "Raid Volume", 00:31:49.938 "block_size": 512, 00:31:49.938 "num_blocks": 126976, 00:31:49.938 "uuid": "2dacc7ce-35c2-49d4-81b6-98589bd9e086", 00:31:49.938 "assigned_rate_limits": { 00:31:49.938 "rw_ios_per_sec": 0, 00:31:49.938 "rw_mbytes_per_sec": 0, 00:31:49.938 "r_mbytes_per_sec": 0, 00:31:49.938 "w_mbytes_per_sec": 0 00:31:49.938 }, 00:31:49.938 "claimed": false, 00:31:49.938 "zoned": false, 00:31:49.938 "supported_io_types": { 00:31:49.938 "read": true, 00:31:49.938 "write": true, 00:31:49.938 "unmap": false, 00:31:49.938 "flush": false, 00:31:49.938 "reset": true, 00:31:49.938 "nvme_admin": false, 00:31:49.938 "nvme_io": false, 00:31:49.938 "nvme_io_md": false, 00:31:49.938 "write_zeroes": true, 00:31:49.938 "zcopy": false, 00:31:49.938 "get_zone_info": false, 00:31:49.938 "zone_management": false, 00:31:49.938 "zone_append": false, 00:31:49.938 "compare": false, 00:31:49.938 "compare_and_write": false, 00:31:49.938 "abort": false, 00:31:49.938 "seek_hole": false, 00:31:49.938 "seek_data": false, 00:31:49.938 "copy": false, 00:31:49.938 "nvme_iov_md": false 00:31:49.938 }, 00:31:49.938 "driver_specific": { 00:31:49.938 "raid": { 00:31:49.938 "uuid": "2dacc7ce-35c2-49d4-81b6-98589bd9e086", 00:31:49.938 "strip_size_kb": 64, 00:31:49.938 "state": "online", 00:31:49.938 "raid_level": "raid5f", 00:31:49.938 "superblock": true, 00:31:49.938 "num_base_bdevs": 3, 00:31:49.938 "num_base_bdevs_discovered": 3, 00:31:49.938 "num_base_bdevs_operational": 3, 00:31:49.938 "base_bdevs_list": [ 00:31:49.938 { 00:31:49.938 "name": "BaseBdev1", 00:31:49.938 "uuid": "171f1849-c30d-4571-a754-de3c809a1db1", 00:31:49.938 "is_configured": true, 00:31:49.938 "data_offset": 2048, 00:31:49.938 "data_size": 63488 00:31:49.938 }, 00:31:49.938 { 00:31:49.938 "name": "BaseBdev2", 00:31:49.938 "uuid": "14234821-b893-439a-a833-d16e504abc3e", 00:31:49.938 "is_configured": true, 00:31:49.938 "data_offset": 2048, 00:31:49.938 "data_size": 63488 00:31:49.938 }, 00:31:49.938 { 00:31:49.938 "name": "BaseBdev3", 00:31:49.938 "uuid": "fef78fe4-af35-4999-86e0-9043c3bfb87e", 00:31:49.938 "is_configured": true, 00:31:49.938 "data_offset": 2048, 00:31:49.938 "data_size": 63488 00:31:49.938 } 00:31:49.938 ] 00:31:49.938 } 00:31:49.938 } 00:31:49.938 }' 00:31:49.938 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:49.938 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:49.938 BaseBdev2 00:31:49.938 BaseBdev3' 00:31:49.938 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:49.938 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:49.938 11:43:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:50.197 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:50.197 "name": "BaseBdev1", 00:31:50.197 "aliases": [ 00:31:50.197 "171f1849-c30d-4571-a754-de3c809a1db1" 00:31:50.197 ], 00:31:50.197 "product_name": "Malloc disk", 00:31:50.197 "block_size": 512, 00:31:50.197 "num_blocks": 65536, 00:31:50.197 "uuid": "171f1849-c30d-4571-a754-de3c809a1db1", 00:31:50.197 "assigned_rate_limits": { 00:31:50.197 "rw_ios_per_sec": 0, 00:31:50.197 "rw_mbytes_per_sec": 0, 00:31:50.197 "r_mbytes_per_sec": 0, 00:31:50.197 "w_mbytes_per_sec": 0 00:31:50.197 }, 00:31:50.197 "claimed": true, 00:31:50.197 "claim_type": "exclusive_write", 00:31:50.197 "zoned": false, 00:31:50.197 "supported_io_types": { 00:31:50.197 "read": true, 00:31:50.197 "write": true, 00:31:50.197 "unmap": true, 00:31:50.197 "flush": true, 00:31:50.197 "reset": true, 00:31:50.197 "nvme_admin": false, 00:31:50.197 "nvme_io": false, 00:31:50.197 "nvme_io_md": false, 00:31:50.197 "write_zeroes": true, 00:31:50.197 "zcopy": true, 00:31:50.197 "get_zone_info": false, 00:31:50.197 "zone_management": false, 00:31:50.197 "zone_append": false, 00:31:50.197 "compare": false, 00:31:50.197 "compare_and_write": false, 00:31:50.197 "abort": true, 00:31:50.197 "seek_hole": false, 00:31:50.197 "seek_data": false, 00:31:50.197 "copy": true, 00:31:50.197 "nvme_iov_md": false 00:31:50.197 }, 00:31:50.197 "memory_domains": [ 00:31:50.197 { 00:31:50.197 "dma_device_id": "system", 00:31:50.197 "dma_device_type": 1 00:31:50.197 }, 00:31:50.197 { 00:31:50.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:50.197 "dma_device_type": 2 00:31:50.197 } 00:31:50.197 ], 00:31:50.197 "driver_specific": {} 00:31:50.197 }' 00:31:50.197 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:50.197 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:50.197 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:50.197 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:50.197 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:50.456 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:50.456 11:43:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:50.456 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:50.456 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:50.456 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:50.456 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:50.457 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:50.457 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:50.457 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:50.457 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:50.715 11:43:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:50.715 "name": "BaseBdev2", 00:31:50.715 "aliases": [ 00:31:50.715 "14234821-b893-439a-a833-d16e504abc3e" 00:31:50.715 ], 00:31:50.715 "product_name": "Malloc disk", 00:31:50.715 "block_size": 512, 00:31:50.715 "num_blocks": 65536, 00:31:50.715 "uuid": "14234821-b893-439a-a833-d16e504abc3e", 00:31:50.715 "assigned_rate_limits": { 00:31:50.715 "rw_ios_per_sec": 0, 00:31:50.715 "rw_mbytes_per_sec": 0, 00:31:50.715 "r_mbytes_per_sec": 0, 00:31:50.715 "w_mbytes_per_sec": 0 00:31:50.715 }, 00:31:50.715 "claimed": true, 00:31:50.715 "claim_type": "exclusive_write", 00:31:50.715 "zoned": false, 00:31:50.715 "supported_io_types": { 00:31:50.715 "read": true, 00:31:50.715 "write": true, 00:31:50.715 "unmap": true, 00:31:50.715 "flush": true, 00:31:50.715 "reset": true, 00:31:50.715 "nvme_admin": false, 00:31:50.715 "nvme_io": false, 00:31:50.715 "nvme_io_md": false, 00:31:50.715 "write_zeroes": true, 00:31:50.715 "zcopy": true, 00:31:50.715 "get_zone_info": false, 00:31:50.715 "zone_management": false, 00:31:50.715 "zone_append": false, 00:31:50.715 "compare": false, 00:31:50.715 "compare_and_write": false, 00:31:50.715 "abort": true, 00:31:50.715 "seek_hole": false, 00:31:50.715 "seek_data": false, 00:31:50.715 "copy": true, 00:31:50.715 "nvme_iov_md": false 00:31:50.715 }, 00:31:50.715 "memory_domains": [ 00:31:50.715 { 00:31:50.715 "dma_device_id": "system", 00:31:50.715 "dma_device_type": 1 00:31:50.715 }, 00:31:50.715 { 00:31:50.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:50.715 "dma_device_type": 2 00:31:50.715 } 00:31:50.715 ], 00:31:50.715 "driver_specific": {} 00:31:50.715 }' 00:31:50.715 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:50.974 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:50.974 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:50.974 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:50.974 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:50.974 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:50.974 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:50.974 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:50.974 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:50.974 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:51.232 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:51.232 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:51.232 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:51.232 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:51.232 11:43:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:51.490 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:51.490 "name": 
"BaseBdev3", 00:31:51.490 "aliases": [ 00:31:51.490 "fef78fe4-af35-4999-86e0-9043c3bfb87e" 00:31:51.490 ], 00:31:51.490 "product_name": "Malloc disk", 00:31:51.490 "block_size": 512, 00:31:51.491 "num_blocks": 65536, 00:31:51.491 "uuid": "fef78fe4-af35-4999-86e0-9043c3bfb87e", 00:31:51.491 "assigned_rate_limits": { 00:31:51.491 "rw_ios_per_sec": 0, 00:31:51.491 "rw_mbytes_per_sec": 0, 00:31:51.491 "r_mbytes_per_sec": 0, 00:31:51.491 "w_mbytes_per_sec": 0 00:31:51.491 }, 00:31:51.491 "claimed": true, 00:31:51.491 "claim_type": "exclusive_write", 00:31:51.491 "zoned": false, 00:31:51.491 "supported_io_types": { 00:31:51.491 "read": true, 00:31:51.491 "write": true, 00:31:51.491 "unmap": true, 00:31:51.491 "flush": true, 00:31:51.491 "reset": true, 00:31:51.491 "nvme_admin": false, 00:31:51.491 "nvme_io": false, 00:31:51.491 "nvme_io_md": false, 00:31:51.491 "write_zeroes": true, 00:31:51.491 "zcopy": true, 00:31:51.491 "get_zone_info": false, 00:31:51.491 "zone_management": false, 00:31:51.491 "zone_append": false, 00:31:51.491 "compare": false, 00:31:51.491 "compare_and_write": false, 00:31:51.491 "abort": true, 00:31:51.491 "seek_hole": false, 00:31:51.491 "seek_data": false, 00:31:51.491 "copy": true, 00:31:51.491 "nvme_iov_md": false 00:31:51.491 }, 00:31:51.491 "memory_domains": [ 00:31:51.491 { 00:31:51.491 "dma_device_id": "system", 00:31:51.491 "dma_device_type": 1 00:31:51.491 }, 00:31:51.491 { 00:31:51.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:51.491 "dma_device_type": 2 00:31:51.491 } 00:31:51.491 ], 00:31:51.491 "driver_specific": {} 00:31:51.491 }' 00:31:51.491 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:51.491 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:51.491 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:51.491 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:51.491 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:51.749 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:51.749 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:51.749 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:51.749 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:51.749 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:51.749 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:52.009 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:52.009 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:52.009 [2024-07-13 11:43:26.755811] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:52.268 11:43:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.268 11:43:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:52.527 11:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:52.527 "name": "Existed_Raid", 00:31:52.527 "uuid": "2dacc7ce-35c2-49d4-81b6-98589bd9e086", 00:31:52.527 "strip_size_kb": 64, 00:31:52.527 "state": "online", 00:31:52.527 "raid_level": "raid5f", 00:31:52.527 "superblock": true, 00:31:52.527 "num_base_bdevs": 3, 00:31:52.527 "num_base_bdevs_discovered": 2, 00:31:52.527 "num_base_bdevs_operational": 2, 00:31:52.527 "base_bdevs_list": [ 00:31:52.527 { 00:31:52.527 "name": null, 00:31:52.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:52.527 "is_configured": false, 00:31:52.527 "data_offset": 2048, 00:31:52.527 "data_size": 63488 00:31:52.527 }, 00:31:52.527 { 00:31:52.527 "name": "BaseBdev2", 00:31:52.527 "uuid": "14234821-b893-439a-a833-d16e504abc3e", 00:31:52.527 "is_configured": true, 00:31:52.527 "data_offset": 2048, 00:31:52.527 "data_size": 63488 00:31:52.527 }, 00:31:52.527 { 00:31:52.527 "name": "BaseBdev3", 00:31:52.527 "uuid": "fef78fe4-af35-4999-86e0-9043c3bfb87e", 00:31:52.527 "is_configured": true, 00:31:52.527 "data_offset": 2048, 00:31:52.527 "data_size": 63488 00:31:52.527 } 00:31:52.527 ] 00:31:52.527 }' 00:31:52.527 11:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:52.527 11:43:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.094 11:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:53.094 11:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:53.094 11:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
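The check being exercised at this point boils down to deleting one base bdev and confirming the raid5f array stays online with the two remaining members; a hedged sketch of that pattern, using only the calls, field names and expected values visible in the raid_bdev_info dumps above:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_delete BaseBdev1
  info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  echo "$info" | jq -r .state                        # expected: online
  echo "$info" | jq -r .num_base_bdevs_discovered    # expected: 2 (num_base_bdevs_operational also 2)
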
00:31:53.094 11:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:53.353 11:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:53.353 11:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:53.353 11:43:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:53.353 [2024-07-13 11:43:28.081942] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:53.353 [2024-07-13 11:43:28.082229] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:53.612 [2024-07-13 11:43:28.145546] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:53.612 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:53.612 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:53.612 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.612 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:53.612 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:53.612 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:53.612 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:31:53.871 [2024-07-13 11:43:28.517645] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:53.871 [2024-07-13 11:43:28.517837] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:31:53.871 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:53.871 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:53.871 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.871 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:54.129 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:54.129 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:54.129 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:31:54.129 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:31:54.129 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:54.129 11:43:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:31:54.388 BaseBdev2 00:31:54.388 11:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:31:54.388 11:43:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:54.388 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:54.388 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:31:54.388 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:54.388 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:54.388 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:54.646 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:54.905 [ 00:31:54.905 { 00:31:54.905 "name": "BaseBdev2", 00:31:54.905 "aliases": [ 00:31:54.905 "9ad61ea4-a581-4399-a8fc-519f9b06202f" 00:31:54.905 ], 00:31:54.905 "product_name": "Malloc disk", 00:31:54.905 "block_size": 512, 00:31:54.905 "num_blocks": 65536, 00:31:54.905 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:31:54.905 "assigned_rate_limits": { 00:31:54.905 "rw_ios_per_sec": 0, 00:31:54.905 "rw_mbytes_per_sec": 0, 00:31:54.905 "r_mbytes_per_sec": 0, 00:31:54.905 "w_mbytes_per_sec": 0 00:31:54.905 }, 00:31:54.905 "claimed": false, 00:31:54.905 "zoned": false, 00:31:54.905 "supported_io_types": { 00:31:54.905 "read": true, 00:31:54.905 "write": true, 00:31:54.905 "unmap": true, 00:31:54.905 "flush": true, 00:31:54.905 "reset": true, 00:31:54.905 "nvme_admin": false, 00:31:54.905 "nvme_io": false, 00:31:54.905 "nvme_io_md": false, 00:31:54.905 "write_zeroes": true, 00:31:54.905 "zcopy": true, 00:31:54.905 "get_zone_info": false, 00:31:54.905 "zone_management": false, 00:31:54.905 "zone_append": false, 00:31:54.905 "compare": false, 00:31:54.905 "compare_and_write": false, 00:31:54.905 "abort": true, 00:31:54.905 "seek_hole": false, 00:31:54.905 "seek_data": false, 00:31:54.905 "copy": true, 00:31:54.905 "nvme_iov_md": false 00:31:54.905 }, 00:31:54.905 "memory_domains": [ 00:31:54.905 { 00:31:54.905 "dma_device_id": "system", 00:31:54.905 "dma_device_type": 1 00:31:54.905 }, 00:31:54.905 { 00:31:54.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.905 "dma_device_type": 2 00:31:54.905 } 00:31:54.905 ], 00:31:54.905 "driver_specific": {} 00:31:54.905 } 00:31:54.905 ] 00:31:54.905 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:31:54.905 11:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:54.905 11:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:54.905 11:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:31:55.164 BaseBdev3 00:31:55.164 11:43:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:31:55.164 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:31:55.164 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:55.164 11:43:29 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:31:55.164 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:55.164 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:55.164 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:55.164 11:43:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:31:55.422 [ 00:31:55.422 { 00:31:55.422 "name": "BaseBdev3", 00:31:55.422 "aliases": [ 00:31:55.422 "b03a8d9f-6973-4177-abdd-63784a1976ca" 00:31:55.422 ], 00:31:55.422 "product_name": "Malloc disk", 00:31:55.422 "block_size": 512, 00:31:55.422 "num_blocks": 65536, 00:31:55.422 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:31:55.422 "assigned_rate_limits": { 00:31:55.422 "rw_ios_per_sec": 0, 00:31:55.422 "rw_mbytes_per_sec": 0, 00:31:55.422 "r_mbytes_per_sec": 0, 00:31:55.422 "w_mbytes_per_sec": 0 00:31:55.422 }, 00:31:55.422 "claimed": false, 00:31:55.422 "zoned": false, 00:31:55.422 "supported_io_types": { 00:31:55.422 "read": true, 00:31:55.422 "write": true, 00:31:55.422 "unmap": true, 00:31:55.422 "flush": true, 00:31:55.422 "reset": true, 00:31:55.422 "nvme_admin": false, 00:31:55.422 "nvme_io": false, 00:31:55.422 "nvme_io_md": false, 00:31:55.422 "write_zeroes": true, 00:31:55.422 "zcopy": true, 00:31:55.422 "get_zone_info": false, 00:31:55.422 "zone_management": false, 00:31:55.422 "zone_append": false, 00:31:55.422 "compare": false, 00:31:55.422 "compare_and_write": false, 00:31:55.422 "abort": true, 00:31:55.422 "seek_hole": false, 00:31:55.422 "seek_data": false, 00:31:55.422 "copy": true, 00:31:55.422 "nvme_iov_md": false 00:31:55.422 }, 00:31:55.422 "memory_domains": [ 00:31:55.422 { 00:31:55.422 "dma_device_id": "system", 00:31:55.422 "dma_device_type": 1 00:31:55.422 }, 00:31:55.422 { 00:31:55.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:55.422 "dma_device_type": 2 00:31:55.422 } 00:31:55.422 ], 00:31:55.422 "driver_specific": {} 00:31:55.422 } 00:31:55.422 ] 00:31:55.422 11:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:31:55.422 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:31:55.422 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:31:55.422 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:31:55.680 [2024-07-13 11:43:30.267617] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:55.680 [2024-07-13 11:43:30.267874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:55.680 [2024-07-13 11:43:30.268174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:55.680 [2024-07-13 11:43:30.270124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:55.680 
11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:55.680 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.681 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:55.939 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:55.939 "name": "Existed_Raid", 00:31:55.939 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:31:55.939 "strip_size_kb": 64, 00:31:55.939 "state": "configuring", 00:31:55.939 "raid_level": "raid5f", 00:31:55.939 "superblock": true, 00:31:55.939 "num_base_bdevs": 3, 00:31:55.939 "num_base_bdevs_discovered": 2, 00:31:55.939 "num_base_bdevs_operational": 3, 00:31:55.939 "base_bdevs_list": [ 00:31:55.939 { 00:31:55.939 "name": "BaseBdev1", 00:31:55.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.939 "is_configured": false, 00:31:55.939 "data_offset": 0, 00:31:55.939 "data_size": 0 00:31:55.939 }, 00:31:55.939 { 00:31:55.939 "name": "BaseBdev2", 00:31:55.939 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:31:55.939 "is_configured": true, 00:31:55.939 "data_offset": 2048, 00:31:55.939 "data_size": 63488 00:31:55.939 }, 00:31:55.939 { 00:31:55.939 "name": "BaseBdev3", 00:31:55.939 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:31:55.939 "is_configured": true, 00:31:55.939 "data_offset": 2048, 00:31:55.939 "data_size": 63488 00:31:55.939 } 00:31:55.939 ] 00:31:55.939 }' 00:31:55.939 11:43:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:55.939 11:43:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.504 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:56.763 [2024-07-13 11:43:31.364047] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:56.763 11:43:31 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.763 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:57.022 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:57.022 "name": "Existed_Raid", 00:31:57.022 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:31:57.022 "strip_size_kb": 64, 00:31:57.022 "state": "configuring", 00:31:57.022 "raid_level": "raid5f", 00:31:57.022 "superblock": true, 00:31:57.022 "num_base_bdevs": 3, 00:31:57.022 "num_base_bdevs_discovered": 1, 00:31:57.022 "num_base_bdevs_operational": 3, 00:31:57.022 "base_bdevs_list": [ 00:31:57.022 { 00:31:57.022 "name": "BaseBdev1", 00:31:57.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.022 "is_configured": false, 00:31:57.022 "data_offset": 0, 00:31:57.022 "data_size": 0 00:31:57.022 }, 00:31:57.022 { 00:31:57.022 "name": null, 00:31:57.022 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:31:57.022 "is_configured": false, 00:31:57.022 "data_offset": 2048, 00:31:57.022 "data_size": 63488 00:31:57.022 }, 00:31:57.022 { 00:31:57.022 "name": "BaseBdev3", 00:31:57.022 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:31:57.022 "is_configured": true, 00:31:57.022 "data_offset": 2048, 00:31:57.022 "data_size": 63488 00:31:57.022 } 00:31:57.022 ] 00:31:57.022 }' 00:31:57.022 11:43:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:57.022 11:43:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:57.588 11:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.588 11:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:31:57.847 11:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:31:57.847 11:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:31:58.104 [2024-07-13 11:43:32.637859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:58.104 BaseBdev1 00:31:58.104 11:43:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:31:58.104 11:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev1 00:31:58.104 11:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:58.104 11:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:31:58.104 11:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:58.104 11:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:58.104 11:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:58.104 11:43:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:58.363 [ 00:31:58.363 { 00:31:58.363 "name": "BaseBdev1", 00:31:58.363 "aliases": [ 00:31:58.363 "0135279b-1e02-40f6-9e9c-2ee9c4e67929" 00:31:58.363 ], 00:31:58.363 "product_name": "Malloc disk", 00:31:58.363 "block_size": 512, 00:31:58.363 "num_blocks": 65536, 00:31:58.363 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:31:58.363 "assigned_rate_limits": { 00:31:58.363 "rw_ios_per_sec": 0, 00:31:58.363 "rw_mbytes_per_sec": 0, 00:31:58.363 "r_mbytes_per_sec": 0, 00:31:58.363 "w_mbytes_per_sec": 0 00:31:58.363 }, 00:31:58.363 "claimed": true, 00:31:58.363 "claim_type": "exclusive_write", 00:31:58.363 "zoned": false, 00:31:58.363 "supported_io_types": { 00:31:58.363 "read": true, 00:31:58.363 "write": true, 00:31:58.363 "unmap": true, 00:31:58.363 "flush": true, 00:31:58.363 "reset": true, 00:31:58.363 "nvme_admin": false, 00:31:58.363 "nvme_io": false, 00:31:58.363 "nvme_io_md": false, 00:31:58.363 "write_zeroes": true, 00:31:58.363 "zcopy": true, 00:31:58.363 "get_zone_info": false, 00:31:58.363 "zone_management": false, 00:31:58.363 "zone_append": false, 00:31:58.363 "compare": false, 00:31:58.363 "compare_and_write": false, 00:31:58.363 "abort": true, 00:31:58.363 "seek_hole": false, 00:31:58.363 "seek_data": false, 00:31:58.363 "copy": true, 00:31:58.363 "nvme_iov_md": false 00:31:58.363 }, 00:31:58.363 "memory_domains": [ 00:31:58.363 { 00:31:58.363 "dma_device_id": "system", 00:31:58.363 "dma_device_type": 1 00:31:58.363 }, 00:31:58.363 { 00:31:58.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:58.363 "dma_device_type": 2 00:31:58.363 } 00:31:58.363 ], 00:31:58.363 "driver_specific": {} 00:31:58.363 } 00:31:58.363 ] 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:58.363 11:43:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.363 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:58.622 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.622 "name": "Existed_Raid", 00:31:58.622 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:31:58.622 "strip_size_kb": 64, 00:31:58.622 "state": "configuring", 00:31:58.622 "raid_level": "raid5f", 00:31:58.622 "superblock": true, 00:31:58.622 "num_base_bdevs": 3, 00:31:58.622 "num_base_bdevs_discovered": 2, 00:31:58.622 "num_base_bdevs_operational": 3, 00:31:58.622 "base_bdevs_list": [ 00:31:58.622 { 00:31:58.622 "name": "BaseBdev1", 00:31:58.622 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:31:58.622 "is_configured": true, 00:31:58.622 "data_offset": 2048, 00:31:58.622 "data_size": 63488 00:31:58.622 }, 00:31:58.622 { 00:31:58.622 "name": null, 00:31:58.622 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:31:58.622 "is_configured": false, 00:31:58.622 "data_offset": 2048, 00:31:58.622 "data_size": 63488 00:31:58.622 }, 00:31:58.622 { 00:31:58.622 "name": "BaseBdev3", 00:31:58.622 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:31:58.622 "is_configured": true, 00:31:58.622 "data_offset": 2048, 00:31:58.622 "data_size": 63488 00:31:58.622 } 00:31:58.622 ] 00:31:58.622 }' 00:31:58.622 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.622 11:43:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:59.186 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:31:59.186 11:43:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.444 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:31:59.444 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:31:59.700 [2024-07-13 11:43:34.263298] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:31:59.700 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:31:59.700 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:59.700 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:59.700 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:59.700 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:59.700 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:31:59.701 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.701 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.701 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.701 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.701 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.701 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:59.960 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:59.960 "name": "Existed_Raid", 00:31:59.960 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:31:59.960 "strip_size_kb": 64, 00:31:59.960 "state": "configuring", 00:31:59.960 "raid_level": "raid5f", 00:31:59.960 "superblock": true, 00:31:59.960 "num_base_bdevs": 3, 00:31:59.960 "num_base_bdevs_discovered": 1, 00:31:59.960 "num_base_bdevs_operational": 3, 00:31:59.960 "base_bdevs_list": [ 00:31:59.960 { 00:31:59.960 "name": "BaseBdev1", 00:31:59.960 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:31:59.960 "is_configured": true, 00:31:59.960 "data_offset": 2048, 00:31:59.960 "data_size": 63488 00:31:59.960 }, 00:31:59.960 { 00:31:59.960 "name": null, 00:31:59.960 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:31:59.960 "is_configured": false, 00:31:59.960 "data_offset": 2048, 00:31:59.960 "data_size": 63488 00:31:59.960 }, 00:31:59.960 { 00:31:59.960 "name": null, 00:31:59.960 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:31:59.960 "is_configured": false, 00:31:59.960 "data_offset": 2048, 00:31:59.961 "data_size": 63488 00:31:59.961 } 00:31:59.961 ] 00:31:59.961 }' 00:31:59.961 11:43:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:59.961 11:43:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:00.524 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.524 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:00.783 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:32:00.783 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:00.783 [2024-07-13 11:43:35.531598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:01.039 11:43:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:01.039 "name": "Existed_Raid", 00:32:01.039 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:32:01.039 "strip_size_kb": 64, 00:32:01.039 "state": "configuring", 00:32:01.039 "raid_level": "raid5f", 00:32:01.039 "superblock": true, 00:32:01.039 "num_base_bdevs": 3, 00:32:01.039 "num_base_bdevs_discovered": 2, 00:32:01.039 "num_base_bdevs_operational": 3, 00:32:01.039 "base_bdevs_list": [ 00:32:01.039 { 00:32:01.039 "name": "BaseBdev1", 00:32:01.039 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:32:01.039 "is_configured": true, 00:32:01.039 "data_offset": 2048, 00:32:01.039 "data_size": 63488 00:32:01.039 }, 00:32:01.039 { 00:32:01.039 "name": null, 00:32:01.039 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:32:01.039 "is_configured": false, 00:32:01.039 "data_offset": 2048, 00:32:01.039 "data_size": 63488 00:32:01.039 }, 00:32:01.039 { 00:32:01.039 "name": "BaseBdev3", 00:32:01.039 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:32:01.039 "is_configured": true, 00:32:01.039 "data_offset": 2048, 00:32:01.039 "data_size": 63488 00:32:01.039 } 00:32:01.039 ] 00:32:01.039 }' 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:01.039 11:43:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:02.001 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.002 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:02.002 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:32:02.002 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:02.282 [2024-07-13 11:43:36.839903] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.282 11:43:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:02.542 11:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:02.542 "name": "Existed_Raid", 00:32:02.542 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:32:02.542 "strip_size_kb": 64, 00:32:02.542 "state": "configuring", 00:32:02.542 "raid_level": "raid5f", 00:32:02.542 "superblock": true, 00:32:02.542 "num_base_bdevs": 3, 00:32:02.542 "num_base_bdevs_discovered": 1, 00:32:02.542 "num_base_bdevs_operational": 3, 00:32:02.542 "base_bdevs_list": [ 00:32:02.542 { 00:32:02.542 "name": null, 00:32:02.542 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:32:02.542 "is_configured": false, 00:32:02.542 "data_offset": 2048, 00:32:02.542 "data_size": 63488 00:32:02.542 }, 00:32:02.542 { 00:32:02.542 "name": null, 00:32:02.542 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:32:02.542 "is_configured": false, 00:32:02.542 "data_offset": 2048, 00:32:02.542 "data_size": 63488 00:32:02.542 }, 00:32:02.542 { 00:32:02.542 "name": "BaseBdev3", 00:32:02.542 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:32:02.542 "is_configured": true, 00:32:02.542 "data_offset": 2048, 00:32:02.542 "data_size": 63488 00:32:02.542 } 00:32:02.542 ] 00:32:02.542 }' 00:32:02.542 11:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:02.542 11:43:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:03.109 11:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.109 11:43:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:03.367 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:32:03.367 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:03.625 [2024-07-13 11:43:38.235222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:32:03.625 11:43:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.625 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:03.884 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:03.884 "name": "Existed_Raid", 00:32:03.884 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:32:03.884 "strip_size_kb": 64, 00:32:03.884 "state": "configuring", 00:32:03.884 "raid_level": "raid5f", 00:32:03.884 "superblock": true, 00:32:03.884 "num_base_bdevs": 3, 00:32:03.884 "num_base_bdevs_discovered": 2, 00:32:03.884 "num_base_bdevs_operational": 3, 00:32:03.884 "base_bdevs_list": [ 00:32:03.884 { 00:32:03.884 "name": null, 00:32:03.884 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:32:03.884 "is_configured": false, 00:32:03.884 "data_offset": 2048, 00:32:03.884 "data_size": 63488 00:32:03.884 }, 00:32:03.884 { 00:32:03.884 "name": "BaseBdev2", 00:32:03.884 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:32:03.884 "is_configured": true, 00:32:03.884 "data_offset": 2048, 00:32:03.884 "data_size": 63488 00:32:03.884 }, 00:32:03.884 { 00:32:03.884 "name": "BaseBdev3", 00:32:03.884 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:32:03.884 "is_configured": true, 00:32:03.884 "data_offset": 2048, 00:32:03.884 "data_size": 63488 00:32:03.884 } 00:32:03.884 ] 00:32:03.884 }' 00:32:03.884 11:43:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:03.884 11:43:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:04.451 11:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.451 11:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:04.710 11:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:32:04.710 11:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.710 11:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 
00:32:04.968 11:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 0135279b-1e02-40f6-9e9c-2ee9c4e67929 00:32:04.968 [2024-07-13 11:43:39.720696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:04.968 [2024-07-13 11:43:39.721083] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:32:04.968 [2024-07-13 11:43:39.721212] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:04.968 NewBaseBdev 00:32:04.968 [2024-07-13 11:43:39.721349] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:32:05.228 [2024-07-13 11:43:39.725424] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:32:05.228 [2024-07-13 11:43:39.725583] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008d80 00:32:05.228 [2024-07-13 11:43:39.725840] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.228 11:43:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:32:05.228 11:43:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:32:05.228 11:43:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:05.228 11:43:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:32:05.228 11:43:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:05.228 11:43:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:05.228 11:43:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:05.228 11:43:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:05.486 [ 00:32:05.486 { 00:32:05.486 "name": "NewBaseBdev", 00:32:05.486 "aliases": [ 00:32:05.486 "0135279b-1e02-40f6-9e9c-2ee9c4e67929" 00:32:05.486 ], 00:32:05.486 "product_name": "Malloc disk", 00:32:05.486 "block_size": 512, 00:32:05.486 "num_blocks": 65536, 00:32:05.486 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:32:05.486 "assigned_rate_limits": { 00:32:05.486 "rw_ios_per_sec": 0, 00:32:05.486 "rw_mbytes_per_sec": 0, 00:32:05.486 "r_mbytes_per_sec": 0, 00:32:05.486 "w_mbytes_per_sec": 0 00:32:05.486 }, 00:32:05.486 "claimed": true, 00:32:05.486 "claim_type": "exclusive_write", 00:32:05.486 "zoned": false, 00:32:05.486 "supported_io_types": { 00:32:05.486 "read": true, 00:32:05.486 "write": true, 00:32:05.486 "unmap": true, 00:32:05.486 "flush": true, 00:32:05.486 "reset": true, 00:32:05.486 "nvme_admin": false, 00:32:05.486 "nvme_io": false, 00:32:05.486 "nvme_io_md": false, 00:32:05.486 "write_zeroes": true, 00:32:05.486 "zcopy": true, 00:32:05.486 "get_zone_info": false, 00:32:05.486 "zone_management": false, 00:32:05.486 "zone_append": false, 00:32:05.486 "compare": false, 00:32:05.486 "compare_and_write": false, 00:32:05.486 "abort": true, 00:32:05.486 "seek_hole": false, 00:32:05.486 "seek_data": false, 00:32:05.486 "copy": true, 
00:32:05.486 "nvme_iov_md": false 00:32:05.486 }, 00:32:05.486 "memory_domains": [ 00:32:05.486 { 00:32:05.486 "dma_device_id": "system", 00:32:05.486 "dma_device_type": 1 00:32:05.486 }, 00:32:05.486 { 00:32:05.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:05.486 "dma_device_type": 2 00:32:05.486 } 00:32:05.486 ], 00:32:05.486 "driver_specific": {} 00:32:05.486 } 00:32:05.486 ] 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:05.486 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.744 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:05.744 "name": "Existed_Raid", 00:32:05.744 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:32:05.744 "strip_size_kb": 64, 00:32:05.744 "state": "online", 00:32:05.744 "raid_level": "raid5f", 00:32:05.744 "superblock": true, 00:32:05.744 "num_base_bdevs": 3, 00:32:05.744 "num_base_bdevs_discovered": 3, 00:32:05.744 "num_base_bdevs_operational": 3, 00:32:05.744 "base_bdevs_list": [ 00:32:05.744 { 00:32:05.744 "name": "NewBaseBdev", 00:32:05.744 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:32:05.744 "is_configured": true, 00:32:05.744 "data_offset": 2048, 00:32:05.744 "data_size": 63488 00:32:05.744 }, 00:32:05.744 { 00:32:05.744 "name": "BaseBdev2", 00:32:05.744 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:32:05.744 "is_configured": true, 00:32:05.744 "data_offset": 2048, 00:32:05.744 "data_size": 63488 00:32:05.744 }, 00:32:05.744 { 00:32:05.744 "name": "BaseBdev3", 00:32:05.744 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:32:05.744 "is_configured": true, 00:32:05.744 "data_offset": 2048, 00:32:05.744 "data_size": 63488 00:32:05.744 } 00:32:05.744 ] 00:32:05.744 }' 00:32:05.744 11:43:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:05.744 11:43:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.679 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # 
verify_raid_bdev_properties Existed_Raid 00:32:06.679 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:06.679 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:06.679 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:06.679 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:06.679 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:32:06.679 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:06.679 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:06.679 [2024-07-13 11:43:41.334910] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:06.679 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:06.679 "name": "Existed_Raid", 00:32:06.679 "aliases": [ 00:32:06.679 "3828b41d-437b-44b8-82ed-3c88cb5da502" 00:32:06.679 ], 00:32:06.679 "product_name": "Raid Volume", 00:32:06.679 "block_size": 512, 00:32:06.679 "num_blocks": 126976, 00:32:06.679 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:32:06.679 "assigned_rate_limits": { 00:32:06.679 "rw_ios_per_sec": 0, 00:32:06.679 "rw_mbytes_per_sec": 0, 00:32:06.679 "r_mbytes_per_sec": 0, 00:32:06.679 "w_mbytes_per_sec": 0 00:32:06.679 }, 00:32:06.679 "claimed": false, 00:32:06.679 "zoned": false, 00:32:06.679 "supported_io_types": { 00:32:06.679 "read": true, 00:32:06.679 "write": true, 00:32:06.679 "unmap": false, 00:32:06.679 "flush": false, 00:32:06.679 "reset": true, 00:32:06.679 "nvme_admin": false, 00:32:06.679 "nvme_io": false, 00:32:06.679 "nvme_io_md": false, 00:32:06.679 "write_zeroes": true, 00:32:06.679 "zcopy": false, 00:32:06.679 "get_zone_info": false, 00:32:06.679 "zone_management": false, 00:32:06.679 "zone_append": false, 00:32:06.679 "compare": false, 00:32:06.679 "compare_and_write": false, 00:32:06.679 "abort": false, 00:32:06.679 "seek_hole": false, 00:32:06.679 "seek_data": false, 00:32:06.679 "copy": false, 00:32:06.679 "nvme_iov_md": false 00:32:06.679 }, 00:32:06.679 "driver_specific": { 00:32:06.679 "raid": { 00:32:06.679 "uuid": "3828b41d-437b-44b8-82ed-3c88cb5da502", 00:32:06.679 "strip_size_kb": 64, 00:32:06.679 "state": "online", 00:32:06.679 "raid_level": "raid5f", 00:32:06.679 "superblock": true, 00:32:06.679 "num_base_bdevs": 3, 00:32:06.679 "num_base_bdevs_discovered": 3, 00:32:06.680 "num_base_bdevs_operational": 3, 00:32:06.680 "base_bdevs_list": [ 00:32:06.680 { 00:32:06.680 "name": "NewBaseBdev", 00:32:06.680 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:32:06.680 "is_configured": true, 00:32:06.680 "data_offset": 2048, 00:32:06.680 "data_size": 63488 00:32:06.680 }, 00:32:06.680 { 00:32:06.680 "name": "BaseBdev2", 00:32:06.680 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:32:06.680 "is_configured": true, 00:32:06.680 "data_offset": 2048, 00:32:06.680 "data_size": 63488 00:32:06.680 }, 00:32:06.680 { 00:32:06.680 "name": "BaseBdev3", 00:32:06.680 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:32:06.680 "is_configured": true, 00:32:06.680 "data_offset": 2048, 00:32:06.680 "data_size": 63488 00:32:06.680 } 00:32:06.680 ] 00:32:06.680 } 00:32:06.680 } 00:32:06.680 }' 
00:32:06.680 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:06.680 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:32:06.680 BaseBdev2 00:32:06.680 BaseBdev3' 00:32:06.680 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:06.680 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:32:06.680 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:06.939 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:06.939 "name": "NewBaseBdev", 00:32:06.939 "aliases": [ 00:32:06.939 "0135279b-1e02-40f6-9e9c-2ee9c4e67929" 00:32:06.939 ], 00:32:06.939 "product_name": "Malloc disk", 00:32:06.939 "block_size": 512, 00:32:06.939 "num_blocks": 65536, 00:32:06.939 "uuid": "0135279b-1e02-40f6-9e9c-2ee9c4e67929", 00:32:06.939 "assigned_rate_limits": { 00:32:06.939 "rw_ios_per_sec": 0, 00:32:06.939 "rw_mbytes_per_sec": 0, 00:32:06.939 "r_mbytes_per_sec": 0, 00:32:06.939 "w_mbytes_per_sec": 0 00:32:06.939 }, 00:32:06.939 "claimed": true, 00:32:06.939 "claim_type": "exclusive_write", 00:32:06.939 "zoned": false, 00:32:06.939 "supported_io_types": { 00:32:06.939 "read": true, 00:32:06.939 "write": true, 00:32:06.939 "unmap": true, 00:32:06.939 "flush": true, 00:32:06.939 "reset": true, 00:32:06.939 "nvme_admin": false, 00:32:06.939 "nvme_io": false, 00:32:06.939 "nvme_io_md": false, 00:32:06.939 "write_zeroes": true, 00:32:06.939 "zcopy": true, 00:32:06.939 "get_zone_info": false, 00:32:06.939 "zone_management": false, 00:32:06.939 "zone_append": false, 00:32:06.939 "compare": false, 00:32:06.939 "compare_and_write": false, 00:32:06.939 "abort": true, 00:32:06.939 "seek_hole": false, 00:32:06.939 "seek_data": false, 00:32:06.939 "copy": true, 00:32:06.939 "nvme_iov_md": false 00:32:06.939 }, 00:32:06.939 "memory_domains": [ 00:32:06.939 { 00:32:06.939 "dma_device_id": "system", 00:32:06.939 "dma_device_type": 1 00:32:06.939 }, 00:32:06.939 { 00:32:06.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:06.939 "dma_device_type": 2 00:32:06.939 } 00:32:06.939 ], 00:32:06.939 "driver_specific": {} 00:32:06.939 }' 00:32:06.939 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:07.197 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:07.197 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:07.198 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:07.198 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:07.198 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:07.198 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:07.198 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:07.456 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:07.456 11:43:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # 
jq .dif_type 00:32:07.456 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:07.456 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:07.456 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:07.456 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:07.456 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:07.715 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:07.715 "name": "BaseBdev2", 00:32:07.715 "aliases": [ 00:32:07.715 "9ad61ea4-a581-4399-a8fc-519f9b06202f" 00:32:07.715 ], 00:32:07.715 "product_name": "Malloc disk", 00:32:07.715 "block_size": 512, 00:32:07.715 "num_blocks": 65536, 00:32:07.715 "uuid": "9ad61ea4-a581-4399-a8fc-519f9b06202f", 00:32:07.715 "assigned_rate_limits": { 00:32:07.715 "rw_ios_per_sec": 0, 00:32:07.715 "rw_mbytes_per_sec": 0, 00:32:07.715 "r_mbytes_per_sec": 0, 00:32:07.715 "w_mbytes_per_sec": 0 00:32:07.715 }, 00:32:07.715 "claimed": true, 00:32:07.715 "claim_type": "exclusive_write", 00:32:07.715 "zoned": false, 00:32:07.715 "supported_io_types": { 00:32:07.715 "read": true, 00:32:07.715 "write": true, 00:32:07.715 "unmap": true, 00:32:07.715 "flush": true, 00:32:07.715 "reset": true, 00:32:07.715 "nvme_admin": false, 00:32:07.715 "nvme_io": false, 00:32:07.715 "nvme_io_md": false, 00:32:07.715 "write_zeroes": true, 00:32:07.715 "zcopy": true, 00:32:07.715 "get_zone_info": false, 00:32:07.716 "zone_management": false, 00:32:07.716 "zone_append": false, 00:32:07.716 "compare": false, 00:32:07.716 "compare_and_write": false, 00:32:07.716 "abort": true, 00:32:07.716 "seek_hole": false, 00:32:07.716 "seek_data": false, 00:32:07.716 "copy": true, 00:32:07.716 "nvme_iov_md": false 00:32:07.716 }, 00:32:07.716 "memory_domains": [ 00:32:07.716 { 00:32:07.716 "dma_device_id": "system", 00:32:07.716 "dma_device_type": 1 00:32:07.716 }, 00:32:07.716 { 00:32:07.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:07.716 "dma_device_type": 2 00:32:07.716 } 00:32:07.716 ], 00:32:07.716 "driver_specific": {} 00:32:07.716 }' 00:32:07.716 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:07.716 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:07.716 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:07.716 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:07.716 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:07.974 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:08.240 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:08.240 "name": "BaseBdev3", 00:32:08.240 "aliases": [ 00:32:08.240 "b03a8d9f-6973-4177-abdd-63784a1976ca" 00:32:08.240 ], 00:32:08.240 "product_name": "Malloc disk", 00:32:08.240 "block_size": 512, 00:32:08.240 "num_blocks": 65536, 00:32:08.240 "uuid": "b03a8d9f-6973-4177-abdd-63784a1976ca", 00:32:08.240 "assigned_rate_limits": { 00:32:08.240 "rw_ios_per_sec": 0, 00:32:08.240 "rw_mbytes_per_sec": 0, 00:32:08.241 "r_mbytes_per_sec": 0, 00:32:08.241 "w_mbytes_per_sec": 0 00:32:08.241 }, 00:32:08.241 "claimed": true, 00:32:08.241 "claim_type": "exclusive_write", 00:32:08.241 "zoned": false, 00:32:08.241 "supported_io_types": { 00:32:08.241 "read": true, 00:32:08.241 "write": true, 00:32:08.241 "unmap": true, 00:32:08.241 "flush": true, 00:32:08.241 "reset": true, 00:32:08.241 "nvme_admin": false, 00:32:08.241 "nvme_io": false, 00:32:08.241 "nvme_io_md": false, 00:32:08.241 "write_zeroes": true, 00:32:08.241 "zcopy": true, 00:32:08.241 "get_zone_info": false, 00:32:08.241 "zone_management": false, 00:32:08.241 "zone_append": false, 00:32:08.241 "compare": false, 00:32:08.241 "compare_and_write": false, 00:32:08.241 "abort": true, 00:32:08.241 "seek_hole": false, 00:32:08.241 "seek_data": false, 00:32:08.241 "copy": true, 00:32:08.241 "nvme_iov_md": false 00:32:08.241 }, 00:32:08.241 "memory_domains": [ 00:32:08.241 { 00:32:08.241 "dma_device_id": "system", 00:32:08.241 "dma_device_type": 1 00:32:08.241 }, 00:32:08.242 { 00:32:08.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:08.242 "dma_device_type": 2 00:32:08.242 } 00:32:08.242 ], 00:32:08.242 "driver_specific": {} 00:32:08.242 }' 00:32:08.242 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:08.501 11:43:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:08.501 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:08.501 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:08.501 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:08.501 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:08.501 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:08.501 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:08.759 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:08.759 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:08.759 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:08.759 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:32:08.759 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:09.016 [2024-07-13 11:43:43.619404] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:09.016 [2024-07-13 11:43:43.619597] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:09.016 [2024-07-13 11:43:43.619796] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:09.016 [2024-07-13 11:43:43.620158] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:09.016 [2024-07-13 11:43:43.620300] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name Existed_Raid, state offline 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 151976 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 151976 ']' 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 151976 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 151976 00:32:09.016 killing process with pid 151976 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 151976' 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 151976 00:32:09.016 11:43:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 151976 00:32:09.016 [2024-07-13 11:43:43.652439] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:09.274 [2024-07-13 11:43:43.845190] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:10.208 ************************************ 00:32:10.208 END TEST raid5f_state_function_test_sb 00:32:10.208 ************************************ 00:32:10.208 11:43:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:32:10.208 00:32:10.208 real 0m28.903s 00:32:10.208 user 0m54.207s 00:32:10.208 sys 0m3.271s 00:32:10.208 11:43:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:10.208 11:43:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:10.208 11:43:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:10.208 11:43:44 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:32:10.208 11:43:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:32:10.208 11:43:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:10.208 11:43:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:10.208 ************************************ 00:32:10.208 START TEST 
raid5f_superblock_test 00:32:10.208 ************************************ 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 3 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=152981 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 152981 /var/tmp/spdk-raid.sock 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 152981 ']' 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:10.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:10.208 11:43:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:10.208 [2024-07-13 11:43:44.885383] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
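How the superblock test brings up its RPC target, sketched from the bdev_svc launch and waitforlisten call traced above (the backgrounding/$! form is an assumption; the trace only shows the resulting raid_pid=152981):
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!                                      # 152981 in this run (assumed $! capture)
waitforlisten $raid_pid /var/tmp/spdk-raid.sock  # autotest_common.sh helper: block until the socket accepts RPCs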
00:32:10.208 [2024-07-13 11:43:44.885758] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152981 ] 00:32:10.468 [2024-07-13 11:43:45.050668] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.727 [2024-07-13 11:43:45.243891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.727 [2024-07-13 11:43:45.428740] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:11.295 11:43:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:32:11.553 malloc1 00:32:11.554 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:11.812 [2024-07-13 11:43:46.397469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:11.812 [2024-07-13 11:43:46.397752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.812 [2024-07-13 11:43:46.397935] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:32:11.812 [2024-07-13 11:43:46.398058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.812 [2024-07-13 11:43:46.400399] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.812 [2024-07-13 11:43:46.400565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:11.812 pt1 00:32:11.812 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:11.812 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:11.812 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:32:11.812 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:32:11.812 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:11.812 11:43:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:11.812 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:11.812 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:11.812 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:32:12.071 malloc2 00:32:12.071 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:12.330 [2024-07-13 11:43:46.842613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:12.330 [2024-07-13 11:43:46.842894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:12.330 [2024-07-13 11:43:46.843040] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:32:12.330 [2024-07-13 11:43:46.843160] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:12.330 [2024-07-13 11:43:46.845692] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:12.330 [2024-07-13 11:43:46.845859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:12.330 pt2 00:32:12.330 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:12.330 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:12.330 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:32:12.330 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:32:12.330 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:32:12.330 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:12.330 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:12.330 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:12.330 11:43:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:32:12.330 malloc3 00:32:12.589 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:12.589 [2024-07-13 11:43:47.268004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:12.589 [2024-07-13 11:43:47.268261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:12.589 [2024-07-13 11:43:47.268330] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:32:12.589 [2024-07-13 11:43:47.268582] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:12.589 [2024-07-13 11:43:47.270845] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:12.589 [2024-07-13 11:43:47.271076] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:12.589 pt3 00:32:12.589 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:12.589 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:12.589 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:32:12.847 [2024-07-13 11:43:47.472083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:12.847 [2024-07-13 11:43:47.474164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:12.847 [2024-07-13 11:43:47.474373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:12.847 [2024-07-13 11:43:47.474688] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:32:12.847 [2024-07-13 11:43:47.474823] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:12.847 [2024-07-13 11:43:47.475008] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:32:12.847 [2024-07-13 11:43:47.479297] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:32:12.847 [2024-07-13 11:43:47.479417] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:32:12.847 [2024-07-13 11:43:47.479671] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.847 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:12.847 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:12.847 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:12.847 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:12.847 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:12.847 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:12.847 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:12.847 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:12.848 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:12.848 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:12.848 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.848 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.107 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:13.107 "name": "raid_bdev1", 00:32:13.107 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:13.107 "strip_size_kb": 64, 00:32:13.107 "state": "online", 00:32:13.107 "raid_level": "raid5f", 00:32:13.107 "superblock": true, 00:32:13.107 "num_base_bdevs": 3, 00:32:13.107 "num_base_bdevs_discovered": 3, 00:32:13.107 "num_base_bdevs_operational": 3, 00:32:13.107 
"base_bdevs_list": [ 00:32:13.107 { 00:32:13.107 "name": "pt1", 00:32:13.107 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:13.107 "is_configured": true, 00:32:13.107 "data_offset": 2048, 00:32:13.107 "data_size": 63488 00:32:13.107 }, 00:32:13.107 { 00:32:13.107 "name": "pt2", 00:32:13.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:13.107 "is_configured": true, 00:32:13.107 "data_offset": 2048, 00:32:13.107 "data_size": 63488 00:32:13.107 }, 00:32:13.107 { 00:32:13.107 "name": "pt3", 00:32:13.107 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:13.107 "is_configured": true, 00:32:13.107 "data_offset": 2048, 00:32:13.107 "data_size": 63488 00:32:13.107 } 00:32:13.107 ] 00:32:13.107 }' 00:32:13.107 11:43:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:13.107 11:43:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:13.675 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:32:13.675 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:13.675 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:13.675 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:13.675 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:13.675 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:13.675 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:13.675 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:13.933 [2024-07-13 11:43:48.525108] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:13.933 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:13.933 "name": "raid_bdev1", 00:32:13.933 "aliases": [ 00:32:13.933 "9b886b18-0049-4943-8472-e722a57a3a5b" 00:32:13.933 ], 00:32:13.933 "product_name": "Raid Volume", 00:32:13.933 "block_size": 512, 00:32:13.933 "num_blocks": 126976, 00:32:13.933 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:13.933 "assigned_rate_limits": { 00:32:13.933 "rw_ios_per_sec": 0, 00:32:13.933 "rw_mbytes_per_sec": 0, 00:32:13.933 "r_mbytes_per_sec": 0, 00:32:13.933 "w_mbytes_per_sec": 0 00:32:13.933 }, 00:32:13.933 "claimed": false, 00:32:13.933 "zoned": false, 00:32:13.933 "supported_io_types": { 00:32:13.933 "read": true, 00:32:13.933 "write": true, 00:32:13.933 "unmap": false, 00:32:13.933 "flush": false, 00:32:13.933 "reset": true, 00:32:13.933 "nvme_admin": false, 00:32:13.933 "nvme_io": false, 00:32:13.933 "nvme_io_md": false, 00:32:13.933 "write_zeroes": true, 00:32:13.933 "zcopy": false, 00:32:13.933 "get_zone_info": false, 00:32:13.933 "zone_management": false, 00:32:13.933 "zone_append": false, 00:32:13.933 "compare": false, 00:32:13.934 "compare_and_write": false, 00:32:13.934 "abort": false, 00:32:13.934 "seek_hole": false, 00:32:13.934 "seek_data": false, 00:32:13.934 "copy": false, 00:32:13.934 "nvme_iov_md": false 00:32:13.934 }, 00:32:13.934 "driver_specific": { 00:32:13.934 "raid": { 00:32:13.934 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:13.934 "strip_size_kb": 64, 00:32:13.934 "state": "online", 00:32:13.934 "raid_level": "raid5f", 
00:32:13.934 "superblock": true, 00:32:13.934 "num_base_bdevs": 3, 00:32:13.934 "num_base_bdevs_discovered": 3, 00:32:13.934 "num_base_bdevs_operational": 3, 00:32:13.934 "base_bdevs_list": [ 00:32:13.934 { 00:32:13.934 "name": "pt1", 00:32:13.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:13.934 "is_configured": true, 00:32:13.934 "data_offset": 2048, 00:32:13.934 "data_size": 63488 00:32:13.934 }, 00:32:13.934 { 00:32:13.934 "name": "pt2", 00:32:13.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:13.934 "is_configured": true, 00:32:13.934 "data_offset": 2048, 00:32:13.934 "data_size": 63488 00:32:13.934 }, 00:32:13.934 { 00:32:13.934 "name": "pt3", 00:32:13.934 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:13.934 "is_configured": true, 00:32:13.934 "data_offset": 2048, 00:32:13.934 "data_size": 63488 00:32:13.934 } 00:32:13.934 ] 00:32:13.934 } 00:32:13.934 } 00:32:13.934 }' 00:32:13.934 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:13.934 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:13.934 pt2 00:32:13.934 pt3' 00:32:13.934 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:13.934 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:13.934 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:14.192 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:14.192 "name": "pt1", 00:32:14.192 "aliases": [ 00:32:14.192 "00000000-0000-0000-0000-000000000001" 00:32:14.192 ], 00:32:14.192 "product_name": "passthru", 00:32:14.192 "block_size": 512, 00:32:14.192 "num_blocks": 65536, 00:32:14.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:14.192 "assigned_rate_limits": { 00:32:14.192 "rw_ios_per_sec": 0, 00:32:14.192 "rw_mbytes_per_sec": 0, 00:32:14.192 "r_mbytes_per_sec": 0, 00:32:14.192 "w_mbytes_per_sec": 0 00:32:14.192 }, 00:32:14.193 "claimed": true, 00:32:14.193 "claim_type": "exclusive_write", 00:32:14.193 "zoned": false, 00:32:14.193 "supported_io_types": { 00:32:14.193 "read": true, 00:32:14.193 "write": true, 00:32:14.193 "unmap": true, 00:32:14.193 "flush": true, 00:32:14.193 "reset": true, 00:32:14.193 "nvme_admin": false, 00:32:14.193 "nvme_io": false, 00:32:14.193 "nvme_io_md": false, 00:32:14.193 "write_zeroes": true, 00:32:14.193 "zcopy": true, 00:32:14.193 "get_zone_info": false, 00:32:14.193 "zone_management": false, 00:32:14.193 "zone_append": false, 00:32:14.193 "compare": false, 00:32:14.193 "compare_and_write": false, 00:32:14.193 "abort": true, 00:32:14.193 "seek_hole": false, 00:32:14.193 "seek_data": false, 00:32:14.193 "copy": true, 00:32:14.193 "nvme_iov_md": false 00:32:14.193 }, 00:32:14.193 "memory_domains": [ 00:32:14.193 { 00:32:14.193 "dma_device_id": "system", 00:32:14.193 "dma_device_type": 1 00:32:14.193 }, 00:32:14.193 { 00:32:14.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.193 "dma_device_type": 2 00:32:14.193 } 00:32:14.193 ], 00:32:14.193 "driver_specific": { 00:32:14.193 "passthru": { 00:32:14.193 "name": "pt1", 00:32:14.193 "base_bdev_name": "malloc1" 00:32:14.193 } 00:32:14.193 } 00:32:14.193 }' 00:32:14.193 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:32:14.193 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:14.193 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:14.193 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:14.193 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:14.451 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:14.451 11:43:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:14.451 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:14.451 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:14.451 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:14.451 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:14.451 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:14.451 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:14.451 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:14.451 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:14.710 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:14.710 "name": "pt2", 00:32:14.710 "aliases": [ 00:32:14.710 "00000000-0000-0000-0000-000000000002" 00:32:14.710 ], 00:32:14.710 "product_name": "passthru", 00:32:14.710 "block_size": 512, 00:32:14.710 "num_blocks": 65536, 00:32:14.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:14.710 "assigned_rate_limits": { 00:32:14.710 "rw_ios_per_sec": 0, 00:32:14.710 "rw_mbytes_per_sec": 0, 00:32:14.710 "r_mbytes_per_sec": 0, 00:32:14.710 "w_mbytes_per_sec": 0 00:32:14.710 }, 00:32:14.710 "claimed": true, 00:32:14.710 "claim_type": "exclusive_write", 00:32:14.710 "zoned": false, 00:32:14.710 "supported_io_types": { 00:32:14.710 "read": true, 00:32:14.710 "write": true, 00:32:14.710 "unmap": true, 00:32:14.710 "flush": true, 00:32:14.710 "reset": true, 00:32:14.710 "nvme_admin": false, 00:32:14.710 "nvme_io": false, 00:32:14.710 "nvme_io_md": false, 00:32:14.710 "write_zeroes": true, 00:32:14.710 "zcopy": true, 00:32:14.710 "get_zone_info": false, 00:32:14.710 "zone_management": false, 00:32:14.710 "zone_append": false, 00:32:14.710 "compare": false, 00:32:14.710 "compare_and_write": false, 00:32:14.710 "abort": true, 00:32:14.710 "seek_hole": false, 00:32:14.710 "seek_data": false, 00:32:14.710 "copy": true, 00:32:14.710 "nvme_iov_md": false 00:32:14.710 }, 00:32:14.710 "memory_domains": [ 00:32:14.710 { 00:32:14.710 "dma_device_id": "system", 00:32:14.710 "dma_device_type": 1 00:32:14.710 }, 00:32:14.710 { 00:32:14.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:14.710 "dma_device_type": 2 00:32:14.710 } 00:32:14.710 ], 00:32:14.710 "driver_specific": { 00:32:14.710 "passthru": { 00:32:14.710 "name": "pt2", 00:32:14.710 "base_bdev_name": "malloc2" 00:32:14.710 } 00:32:14.710 } 00:32:14.710 }' 00:32:14.710 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:14.710 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:14.968 11:43:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:14.968 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:14.968 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:14.968 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:14.968 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:14.968 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:14.968 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:14.968 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.227 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.227 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:15.227 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:15.227 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:32:15.227 11:43:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:15.486 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:15.486 "name": "pt3", 00:32:15.486 "aliases": [ 00:32:15.486 "00000000-0000-0000-0000-000000000003" 00:32:15.486 ], 00:32:15.486 "product_name": "passthru", 00:32:15.486 "block_size": 512, 00:32:15.486 "num_blocks": 65536, 00:32:15.486 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:15.486 "assigned_rate_limits": { 00:32:15.486 "rw_ios_per_sec": 0, 00:32:15.486 "rw_mbytes_per_sec": 0, 00:32:15.486 "r_mbytes_per_sec": 0, 00:32:15.486 "w_mbytes_per_sec": 0 00:32:15.486 }, 00:32:15.486 "claimed": true, 00:32:15.486 "claim_type": "exclusive_write", 00:32:15.486 "zoned": false, 00:32:15.486 "supported_io_types": { 00:32:15.486 "read": true, 00:32:15.486 "write": true, 00:32:15.486 "unmap": true, 00:32:15.486 "flush": true, 00:32:15.486 "reset": true, 00:32:15.486 "nvme_admin": false, 00:32:15.486 "nvme_io": false, 00:32:15.486 "nvme_io_md": false, 00:32:15.486 "write_zeroes": true, 00:32:15.486 "zcopy": true, 00:32:15.486 "get_zone_info": false, 00:32:15.486 "zone_management": false, 00:32:15.486 "zone_append": false, 00:32:15.486 "compare": false, 00:32:15.486 "compare_and_write": false, 00:32:15.486 "abort": true, 00:32:15.486 "seek_hole": false, 00:32:15.486 "seek_data": false, 00:32:15.486 "copy": true, 00:32:15.486 "nvme_iov_md": false 00:32:15.486 }, 00:32:15.486 "memory_domains": [ 00:32:15.486 { 00:32:15.486 "dma_device_id": "system", 00:32:15.486 "dma_device_type": 1 00:32:15.486 }, 00:32:15.486 { 00:32:15.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:15.486 "dma_device_type": 2 00:32:15.486 } 00:32:15.486 ], 00:32:15.486 "driver_specific": { 00:32:15.486 "passthru": { 00:32:15.486 "name": "pt3", 00:32:15.486 "base_bdev_name": "malloc3" 00:32:15.486 } 00:32:15.486 } 00:32:15.486 }' 00:32:15.486 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:15.486 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:15.486 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:15.486 11:43:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:15.486 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:15.744 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:15.744 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:15.744 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:15.744 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:15.744 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.744 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:15.744 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:15.744 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:15.744 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:32:16.002 [2024-07-13 11:43:50.641477] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:16.002 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9b886b18-0049-4943-8472-e722a57a3a5b 00:32:16.002 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9b886b18-0049-4943-8472-e722a57a3a5b ']' 00:32:16.002 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:16.260 [2024-07-13 11:43:50.837409] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:16.260 [2024-07-13 11:43:50.837527] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:16.260 [2024-07-13 11:43:50.837712] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:16.260 [2024-07-13 11:43:50.837914] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:16.260 [2024-07-13 11:43:50.838011] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:32:16.260 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:16.260 11:43:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:32:16.517 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:32:16.517 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:32:16.517 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:16.517 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:16.775 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:16.775 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:17.033 11:43:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:17.033 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:32:17.033 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:17.033 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:17.291 11:43:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:32:17.550 [2024-07-13 11:43:52.127886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:17.550 [2024-07-13 11:43:52.129973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:17.550 [2024-07-13 11:43:52.130238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:32:17.550 [2024-07-13 11:43:52.130468] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:17.550 [2024-07-13 11:43:52.130745] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:17.550 [2024-07-13 11:43:52.130981] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:32:17.550 [2024-07-13 11:43:52.131184] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete 
raid bdev: raid_bdev1 00:32:17.550 [2024-07-13 11:43:52.131351] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state configuring 00:32:17.550 request: 00:32:17.550 { 00:32:17.550 "name": "raid_bdev1", 00:32:17.550 "raid_level": "raid5f", 00:32:17.550 "base_bdevs": [ 00:32:17.550 "malloc1", 00:32:17.550 "malloc2", 00:32:17.550 "malloc3" 00:32:17.550 ], 00:32:17.550 "strip_size_kb": 64, 00:32:17.550 "superblock": false, 00:32:17.550 "method": "bdev_raid_create", 00:32:17.550 "req_id": 1 00:32:17.550 } 00:32:17.550 Got JSON-RPC error response 00:32:17.550 response: 00:32:17.550 { 00:32:17.550 "code": -17, 00:32:17.550 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:17.550 } 00:32:17.550 11:43:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:32:17.550 11:43:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:17.550 11:43:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:17.550 11:43:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:17.550 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.550 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:32:17.809 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:32:17.809 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:32:17.809 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:17.809 [2024-07-13 11:43:52.552342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:17.809 [2024-07-13 11:43:52.552683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:17.809 [2024-07-13 11:43:52.552891] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:17.809 [2024-07-13 11:43:52.553091] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:17.809 [2024-07-13 11:43:52.555540] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:17.809 [2024-07-13 11:43:52.555801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:17.809 [2024-07-13 11:43:52.556089] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:17.809 [2024-07-13 11:43:52.556311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:17.809 pt1 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:18.068 "name": "raid_bdev1", 00:32:18.068 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:18.068 "strip_size_kb": 64, 00:32:18.068 "state": "configuring", 00:32:18.068 "raid_level": "raid5f", 00:32:18.068 "superblock": true, 00:32:18.068 "num_base_bdevs": 3, 00:32:18.068 "num_base_bdevs_discovered": 1, 00:32:18.068 "num_base_bdevs_operational": 3, 00:32:18.068 "base_bdevs_list": [ 00:32:18.068 { 00:32:18.068 "name": "pt1", 00:32:18.068 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:18.068 "is_configured": true, 00:32:18.068 "data_offset": 2048, 00:32:18.068 "data_size": 63488 00:32:18.068 }, 00:32:18.068 { 00:32:18.068 "name": null, 00:32:18.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:18.068 "is_configured": false, 00:32:18.068 "data_offset": 2048, 00:32:18.068 "data_size": 63488 00:32:18.068 }, 00:32:18.068 { 00:32:18.068 "name": null, 00:32:18.068 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:18.068 "is_configured": false, 00:32:18.068 "data_offset": 2048, 00:32:18.068 "data_size": 63488 00:32:18.068 } 00:32:18.068 ] 00:32:18.068 }' 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:18.068 11:43:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:18.646 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:32:18.646 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:18.904 [2024-07-13 11:43:53.592888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:18.904 [2024-07-13 11:43:53.593167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:18.904 [2024-07-13 11:43:53.593379] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:32:18.904 [2024-07-13 11:43:53.593576] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:18.904 [2024-07-13 11:43:53.594217] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:18.904 [2024-07-13 11:43:53.594463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:18.904 [2024-07-13 11:43:53.594739] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:18.904 [2024-07-13 11:43:53.594966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:18.904 pt2 00:32:18.904 11:43:53 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:19.163 [2024-07-13 11:43:53.849007] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.163 11:43:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.421 11:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:19.421 "name": "raid_bdev1", 00:32:19.421 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:19.421 "strip_size_kb": 64, 00:32:19.421 "state": "configuring", 00:32:19.421 "raid_level": "raid5f", 00:32:19.421 "superblock": true, 00:32:19.421 "num_base_bdevs": 3, 00:32:19.421 "num_base_bdevs_discovered": 1, 00:32:19.421 "num_base_bdevs_operational": 3, 00:32:19.421 "base_bdevs_list": [ 00:32:19.421 { 00:32:19.421 "name": "pt1", 00:32:19.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:19.421 "is_configured": true, 00:32:19.421 "data_offset": 2048, 00:32:19.421 "data_size": 63488 00:32:19.421 }, 00:32:19.421 { 00:32:19.421 "name": null, 00:32:19.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:19.421 "is_configured": false, 00:32:19.421 "data_offset": 2048, 00:32:19.421 "data_size": 63488 00:32:19.421 }, 00:32:19.421 { 00:32:19.421 "name": null, 00:32:19.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:19.421 "is_configured": false, 00:32:19.421 "data_offset": 2048, 00:32:19.421 "data_size": 63488 00:32:19.421 } 00:32:19.421 ] 00:32:19.421 }' 00:32:19.421 11:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:19.422 11:43:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.988 11:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:32:19.988 11:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:19.988 11:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:20.247 [2024-07-13 11:43:54.889581] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:20.247 [2024-07-13 11:43:54.889941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.247 [2024-07-13 11:43:54.890145] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:20.247 [2024-07-13 11:43:54.890336] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.247 [2024-07-13 11:43:54.890969] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.247 [2024-07-13 11:43:54.891216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:20.247 [2024-07-13 11:43:54.891505] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:20.247 [2024-07-13 11:43:54.891703] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:20.247 pt2 00:32:20.247 11:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:32:20.247 11:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:20.247 11:43:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:20.505 [2024-07-13 11:43:55.141681] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:20.505 [2024-07-13 11:43:55.142032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.505 [2024-07-13 11:43:55.142273] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:20.505 [2024-07-13 11:43:55.142470] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.505 [2024-07-13 11:43:55.143121] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.505 [2024-07-13 11:43:55.143356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:20.505 [2024-07-13 11:43:55.143630] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:20.505 [2024-07-13 11:43:55.143869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:20.505 [2024-07-13 11:43:55.144179] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:32:20.505 [2024-07-13 11:43:55.144379] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:20.505 [2024-07-13 11:43:55.144626] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:20.505 [2024-07-13 11:43:55.149142] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:32:20.505 [2024-07-13 11:43:55.149346] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:32:20.505 [2024-07-13 11:43:55.149711] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:20.505 pt3 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.506 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.764 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:20.764 "name": "raid_bdev1", 00:32:20.764 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:20.764 "strip_size_kb": 64, 00:32:20.764 "state": "online", 00:32:20.764 "raid_level": "raid5f", 00:32:20.764 "superblock": true, 00:32:20.764 "num_base_bdevs": 3, 00:32:20.764 "num_base_bdevs_discovered": 3, 00:32:20.764 "num_base_bdevs_operational": 3, 00:32:20.764 "base_bdevs_list": [ 00:32:20.764 { 00:32:20.764 "name": "pt1", 00:32:20.764 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:20.764 "is_configured": true, 00:32:20.764 "data_offset": 2048, 00:32:20.764 "data_size": 63488 00:32:20.764 }, 00:32:20.764 { 00:32:20.764 "name": "pt2", 00:32:20.764 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:20.764 "is_configured": true, 00:32:20.764 "data_offset": 2048, 00:32:20.764 "data_size": 63488 00:32:20.764 }, 00:32:20.764 { 00:32:20.764 "name": "pt3", 00:32:20.764 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:20.764 "is_configured": true, 00:32:20.764 "data_offset": 2048, 00:32:20.764 "data_size": 63488 00:32:20.764 } 00:32:20.764 ] 00:32:20.764 }' 00:32:20.764 11:43:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:20.764 11:43:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.329 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:32:21.330 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:21.330 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:21.330 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:21.330 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:21.330 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:21.330 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:21.330 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # 
jq '.[]' 00:32:21.588 [2024-07-13 11:43:56.295186] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:21.588 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:21.588 "name": "raid_bdev1", 00:32:21.588 "aliases": [ 00:32:21.588 "9b886b18-0049-4943-8472-e722a57a3a5b" 00:32:21.588 ], 00:32:21.588 "product_name": "Raid Volume", 00:32:21.588 "block_size": 512, 00:32:21.588 "num_blocks": 126976, 00:32:21.588 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:21.588 "assigned_rate_limits": { 00:32:21.588 "rw_ios_per_sec": 0, 00:32:21.588 "rw_mbytes_per_sec": 0, 00:32:21.588 "r_mbytes_per_sec": 0, 00:32:21.588 "w_mbytes_per_sec": 0 00:32:21.588 }, 00:32:21.588 "claimed": false, 00:32:21.588 "zoned": false, 00:32:21.588 "supported_io_types": { 00:32:21.588 "read": true, 00:32:21.588 "write": true, 00:32:21.588 "unmap": false, 00:32:21.588 "flush": false, 00:32:21.588 "reset": true, 00:32:21.588 "nvme_admin": false, 00:32:21.588 "nvme_io": false, 00:32:21.588 "nvme_io_md": false, 00:32:21.588 "write_zeroes": true, 00:32:21.588 "zcopy": false, 00:32:21.588 "get_zone_info": false, 00:32:21.588 "zone_management": false, 00:32:21.588 "zone_append": false, 00:32:21.588 "compare": false, 00:32:21.588 "compare_and_write": false, 00:32:21.588 "abort": false, 00:32:21.588 "seek_hole": false, 00:32:21.588 "seek_data": false, 00:32:21.588 "copy": false, 00:32:21.588 "nvme_iov_md": false 00:32:21.588 }, 00:32:21.588 "driver_specific": { 00:32:21.588 "raid": { 00:32:21.588 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:21.588 "strip_size_kb": 64, 00:32:21.588 "state": "online", 00:32:21.588 "raid_level": "raid5f", 00:32:21.588 "superblock": true, 00:32:21.588 "num_base_bdevs": 3, 00:32:21.588 "num_base_bdevs_discovered": 3, 00:32:21.588 "num_base_bdevs_operational": 3, 00:32:21.588 "base_bdevs_list": [ 00:32:21.588 { 00:32:21.588 "name": "pt1", 00:32:21.588 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:21.588 "is_configured": true, 00:32:21.588 "data_offset": 2048, 00:32:21.588 "data_size": 63488 00:32:21.588 }, 00:32:21.588 { 00:32:21.588 "name": "pt2", 00:32:21.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:21.588 "is_configured": true, 00:32:21.588 "data_offset": 2048, 00:32:21.588 "data_size": 63488 00:32:21.588 }, 00:32:21.588 { 00:32:21.588 "name": "pt3", 00:32:21.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:21.588 "is_configured": true, 00:32:21.588 "data_offset": 2048, 00:32:21.588 "data_size": 63488 00:32:21.588 } 00:32:21.588 ] 00:32:21.588 } 00:32:21.588 } 00:32:21.588 }' 00:32:21.588 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:21.846 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:21.846 pt2 00:32:21.846 pt3' 00:32:21.846 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:21.846 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:21.846 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:22.104 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:22.104 "name": "pt1", 00:32:22.104 "aliases": [ 00:32:22.104 "00000000-0000-0000-0000-000000000001" 00:32:22.104 ], 
00:32:22.104 "product_name": "passthru", 00:32:22.104 "block_size": 512, 00:32:22.104 "num_blocks": 65536, 00:32:22.104 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:22.104 "assigned_rate_limits": { 00:32:22.104 "rw_ios_per_sec": 0, 00:32:22.104 "rw_mbytes_per_sec": 0, 00:32:22.104 "r_mbytes_per_sec": 0, 00:32:22.104 "w_mbytes_per_sec": 0 00:32:22.104 }, 00:32:22.104 "claimed": true, 00:32:22.104 "claim_type": "exclusive_write", 00:32:22.104 "zoned": false, 00:32:22.104 "supported_io_types": { 00:32:22.104 "read": true, 00:32:22.104 "write": true, 00:32:22.104 "unmap": true, 00:32:22.104 "flush": true, 00:32:22.104 "reset": true, 00:32:22.104 "nvme_admin": false, 00:32:22.104 "nvme_io": false, 00:32:22.104 "nvme_io_md": false, 00:32:22.104 "write_zeroes": true, 00:32:22.104 "zcopy": true, 00:32:22.104 "get_zone_info": false, 00:32:22.104 "zone_management": false, 00:32:22.104 "zone_append": false, 00:32:22.104 "compare": false, 00:32:22.104 "compare_and_write": false, 00:32:22.104 "abort": true, 00:32:22.104 "seek_hole": false, 00:32:22.104 "seek_data": false, 00:32:22.104 "copy": true, 00:32:22.104 "nvme_iov_md": false 00:32:22.104 }, 00:32:22.104 "memory_domains": [ 00:32:22.104 { 00:32:22.104 "dma_device_id": "system", 00:32:22.104 "dma_device_type": 1 00:32:22.104 }, 00:32:22.104 { 00:32:22.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.104 "dma_device_type": 2 00:32:22.104 } 00:32:22.104 ], 00:32:22.104 "driver_specific": { 00:32:22.104 "passthru": { 00:32:22.104 "name": "pt1", 00:32:22.104 "base_bdev_name": "malloc1" 00:32:22.104 } 00:32:22.104 } 00:32:22.105 }' 00:32:22.105 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.105 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.105 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:22.105 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.105 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.362 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:22.362 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.362 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:22.362 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:22.362 11:43:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.362 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:22.362 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:22.362 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:22.362 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:22.362 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:22.621 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:22.621 "name": "pt2", 00:32:22.621 "aliases": [ 00:32:22.621 "00000000-0000-0000-0000-000000000002" 00:32:22.621 ], 00:32:22.621 "product_name": "passthru", 00:32:22.621 "block_size": 512, 00:32:22.621 "num_blocks": 65536, 00:32:22.621 
"uuid": "00000000-0000-0000-0000-000000000002", 00:32:22.621 "assigned_rate_limits": { 00:32:22.621 "rw_ios_per_sec": 0, 00:32:22.621 "rw_mbytes_per_sec": 0, 00:32:22.621 "r_mbytes_per_sec": 0, 00:32:22.621 "w_mbytes_per_sec": 0 00:32:22.621 }, 00:32:22.621 "claimed": true, 00:32:22.621 "claim_type": "exclusive_write", 00:32:22.621 "zoned": false, 00:32:22.621 "supported_io_types": { 00:32:22.621 "read": true, 00:32:22.621 "write": true, 00:32:22.621 "unmap": true, 00:32:22.621 "flush": true, 00:32:22.621 "reset": true, 00:32:22.621 "nvme_admin": false, 00:32:22.621 "nvme_io": false, 00:32:22.621 "nvme_io_md": false, 00:32:22.621 "write_zeroes": true, 00:32:22.621 "zcopy": true, 00:32:22.621 "get_zone_info": false, 00:32:22.621 "zone_management": false, 00:32:22.621 "zone_append": false, 00:32:22.621 "compare": false, 00:32:22.621 "compare_and_write": false, 00:32:22.621 "abort": true, 00:32:22.621 "seek_hole": false, 00:32:22.621 "seek_data": false, 00:32:22.621 "copy": true, 00:32:22.621 "nvme_iov_md": false 00:32:22.621 }, 00:32:22.621 "memory_domains": [ 00:32:22.621 { 00:32:22.621 "dma_device_id": "system", 00:32:22.621 "dma_device_type": 1 00:32:22.621 }, 00:32:22.621 { 00:32:22.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.621 "dma_device_type": 2 00:32:22.621 } 00:32:22.621 ], 00:32:22.621 "driver_specific": { 00:32:22.621 "passthru": { 00:32:22.621 "name": "pt2", 00:32:22.621 "base_bdev_name": "malloc2" 00:32:22.621 } 00:32:22.621 } 00:32:22.621 }' 00:32:22.621 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.879 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:22.879 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:22.879 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.879 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:22.879 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:22.879 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.137 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.137 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:23.137 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.137 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.137 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:23.137 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:23.137 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:32:23.137 11:43:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:23.395 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:23.395 "name": "pt3", 00:32:23.395 "aliases": [ 00:32:23.395 "00000000-0000-0000-0000-000000000003" 00:32:23.395 ], 00:32:23.395 "product_name": "passthru", 00:32:23.395 "block_size": 512, 00:32:23.395 "num_blocks": 65536, 00:32:23.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:23.395 "assigned_rate_limits": { 00:32:23.395 "rw_ios_per_sec": 0, 
00:32:23.395 "rw_mbytes_per_sec": 0, 00:32:23.395 "r_mbytes_per_sec": 0, 00:32:23.395 "w_mbytes_per_sec": 0 00:32:23.395 }, 00:32:23.395 "claimed": true, 00:32:23.395 "claim_type": "exclusive_write", 00:32:23.395 "zoned": false, 00:32:23.395 "supported_io_types": { 00:32:23.395 "read": true, 00:32:23.395 "write": true, 00:32:23.395 "unmap": true, 00:32:23.395 "flush": true, 00:32:23.395 "reset": true, 00:32:23.395 "nvme_admin": false, 00:32:23.395 "nvme_io": false, 00:32:23.395 "nvme_io_md": false, 00:32:23.395 "write_zeroes": true, 00:32:23.395 "zcopy": true, 00:32:23.395 "get_zone_info": false, 00:32:23.395 "zone_management": false, 00:32:23.395 "zone_append": false, 00:32:23.395 "compare": false, 00:32:23.395 "compare_and_write": false, 00:32:23.395 "abort": true, 00:32:23.395 "seek_hole": false, 00:32:23.395 "seek_data": false, 00:32:23.395 "copy": true, 00:32:23.395 "nvme_iov_md": false 00:32:23.395 }, 00:32:23.395 "memory_domains": [ 00:32:23.395 { 00:32:23.395 "dma_device_id": "system", 00:32:23.395 "dma_device_type": 1 00:32:23.395 }, 00:32:23.395 { 00:32:23.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:23.395 "dma_device_type": 2 00:32:23.395 } 00:32:23.395 ], 00:32:23.395 "driver_specific": { 00:32:23.395 "passthru": { 00:32:23.395 "name": "pt3", 00:32:23.395 "base_bdev_name": "malloc3" 00:32:23.395 } 00:32:23.395 } 00:32:23.395 }' 00:32:23.395 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:23.395 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:23.654 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:23.654 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:23.654 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:23.654 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:23.654 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.654 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:23.913 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:23.913 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.913 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:23.913 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:23.913 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:23.913 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:32:24.172 [2024-07-13 11:43:58.692612] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9b886b18-0049-4943-8472-e722a57a3a5b '!=' 9b886b18-0049-4943-8472-e722a57a3a5b ']' 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:24.172 [2024-07-13 11:43:58.884559] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.172 11:43:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.430 11:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:24.430 "name": "raid_bdev1", 00:32:24.430 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:24.430 "strip_size_kb": 64, 00:32:24.430 "state": "online", 00:32:24.430 "raid_level": "raid5f", 00:32:24.430 "superblock": true, 00:32:24.430 "num_base_bdevs": 3, 00:32:24.430 "num_base_bdevs_discovered": 2, 00:32:24.430 "num_base_bdevs_operational": 2, 00:32:24.430 "base_bdevs_list": [ 00:32:24.430 { 00:32:24.430 "name": null, 00:32:24.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.430 "is_configured": false, 00:32:24.430 "data_offset": 2048, 00:32:24.430 "data_size": 63488 00:32:24.430 }, 00:32:24.430 { 00:32:24.430 "name": "pt2", 00:32:24.430 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:24.430 "is_configured": true, 00:32:24.430 "data_offset": 2048, 00:32:24.430 "data_size": 63488 00:32:24.430 }, 00:32:24.430 { 00:32:24.430 "name": "pt3", 00:32:24.430 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:24.430 "is_configured": true, 00:32:24.430 "data_offset": 2048, 00:32:24.430 "data_size": 63488 00:32:24.430 } 00:32:24.430 ] 00:32:24.430 }' 00:32:24.430 11:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:24.430 11:43:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.366 11:43:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:25.366 [2024-07-13 11:44:00.002861] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:25.366 [2024-07-13 11:44:00.003168] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:25.366 [2024-07-13 11:44:00.003409] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:32:25.366 [2024-07-13 11:44:00.003645] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:25.366 [2024-07-13 11:44:00.003829] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:32:25.366 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.366 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:32:25.625 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:32:25.625 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:32:25.625 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:32:25.625 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:25.625 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:25.884 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:32:25.884 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:25.884 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:32:26.143 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:32:26.143 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:32:26.143 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:32:26.143 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:32:26.143 11:44:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:26.401 [2024-07-13 11:44:00.999076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:26.401 [2024-07-13 11:44:00.999459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:26.401 [2024-07-13 11:44:00.999668] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:26.401 [2024-07-13 11:44:00.999870] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:26.401 [2024-07-13 11:44:01.002139] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:26.401 [2024-07-13 11:44:01.002369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:26.401 [2024-07-13 11:44:01.002655] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:26.401 [2024-07-13 11:44:01.002889] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:26.401 pt2 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.401 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.660 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:26.660 "name": "raid_bdev1", 00:32:26.660 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:26.660 "strip_size_kb": 64, 00:32:26.660 "state": "configuring", 00:32:26.660 "raid_level": "raid5f", 00:32:26.660 "superblock": true, 00:32:26.660 "num_base_bdevs": 3, 00:32:26.660 "num_base_bdevs_discovered": 1, 00:32:26.660 "num_base_bdevs_operational": 2, 00:32:26.660 "base_bdevs_list": [ 00:32:26.660 { 00:32:26.660 "name": null, 00:32:26.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.660 "is_configured": false, 00:32:26.660 "data_offset": 2048, 00:32:26.660 "data_size": 63488 00:32:26.660 }, 00:32:26.660 { 00:32:26.660 "name": "pt2", 00:32:26.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:26.660 "is_configured": true, 00:32:26.660 "data_offset": 2048, 00:32:26.660 "data_size": 63488 00:32:26.660 }, 00:32:26.660 { 00:32:26.660 "name": null, 00:32:26.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:26.660 "is_configured": false, 00:32:26.660 "data_offset": 2048, 00:32:26.660 "data_size": 63488 00:32:26.660 } 00:32:26.660 ] 00:32:26.660 }' 00:32:26.660 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:26.660 11:44:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:27.226 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:32:27.226 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:32:27.226 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:32:27.226 11:44:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:27.484 [2024-07-13 11:44:02.131525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:27.485 [2024-07-13 11:44:02.131900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.485 [2024-07-13 11:44:02.132113] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:32:27.485 [2024-07-13 11:44:02.132305] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.485 [2024-07-13 
11:44:02.132904] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.485 [2024-07-13 11:44:02.133144] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:27.485 [2024-07-13 11:44:02.133427] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:27.485 [2024-07-13 11:44:02.133616] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:27.485 [2024-07-13 11:44:02.133916] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b180 00:32:27.485 [2024-07-13 11:44:02.134118] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:27.485 [2024-07-13 11:44:02.134377] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:32:27.485 [2024-07-13 11:44:02.138557] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b180 00:32:27.485 [2024-07-13 11:44:02.138757] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b180 00:32:27.485 [2024-07-13 11:44:02.139264] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:27.485 pt3 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.485 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.743 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:27.743 "name": "raid_bdev1", 00:32:27.743 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:27.743 "strip_size_kb": 64, 00:32:27.743 "state": "online", 00:32:27.743 "raid_level": "raid5f", 00:32:27.743 "superblock": true, 00:32:27.743 "num_base_bdevs": 3, 00:32:27.743 "num_base_bdevs_discovered": 2, 00:32:27.743 "num_base_bdevs_operational": 2, 00:32:27.743 "base_bdevs_list": [ 00:32:27.743 { 00:32:27.743 "name": null, 00:32:27.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.743 "is_configured": false, 00:32:27.743 "data_offset": 2048, 00:32:27.743 "data_size": 63488 00:32:27.743 }, 00:32:27.743 { 00:32:27.743 "name": "pt2", 00:32:27.743 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:27.744 "is_configured": true, 
00:32:27.744 "data_offset": 2048, 00:32:27.744 "data_size": 63488 00:32:27.744 }, 00:32:27.744 { 00:32:27.744 "name": "pt3", 00:32:27.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:27.744 "is_configured": true, 00:32:27.744 "data_offset": 2048, 00:32:27.744 "data_size": 63488 00:32:27.744 } 00:32:27.744 ] 00:32:27.744 }' 00:32:27.744 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:27.744 11:44:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.316 11:44:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:28.588 [2024-07-13 11:44:03.244595] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:28.588 [2024-07-13 11:44:03.244805] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:28.588 [2024-07-13 11:44:03.245093] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:28.588 [2024-07-13 11:44:03.245320] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:28.588 [2024-07-13 11:44:03.245491] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state offline 00:32:28.588 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.588 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:32:28.859 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:32:28.859 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:32:28.859 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:32:28.859 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:32:28.859 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:32:29.117 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:29.374 [2024-07-13 11:44:03.903229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:29.374 [2024-07-13 11:44:03.903499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:29.374 [2024-07-13 11:44:03.903691] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:32:29.374 [2024-07-13 11:44:03.903849] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:29.374 [2024-07-13 11:44:03.907120] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:29.374 [2024-07-13 11:44:03.907330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:29.374 [2024-07-13 11:44:03.907611] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:29.374 [2024-07-13 11:44:03.907826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:29.374 [2024-07-13 11:44:03.908217] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number 
on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:29.374 pt1 00:32:29.374 [2024-07-13 11:44:03.908382] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:29.374 [2024-07-13 11:44:03.908523] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:32:29.374 [2024-07-13 11:44:03.908747] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.374 11:44:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.374 11:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.374 "name": "raid_bdev1", 00:32:29.374 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:29.374 "strip_size_kb": 64, 00:32:29.374 "state": "configuring", 00:32:29.374 "raid_level": "raid5f", 00:32:29.374 "superblock": true, 00:32:29.374 "num_base_bdevs": 3, 00:32:29.374 "num_base_bdevs_discovered": 1, 00:32:29.374 "num_base_bdevs_operational": 2, 00:32:29.374 "base_bdevs_list": [ 00:32:29.374 { 00:32:29.374 "name": null, 00:32:29.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.374 "is_configured": false, 00:32:29.374 "data_offset": 2048, 00:32:29.374 "data_size": 63488 00:32:29.374 }, 00:32:29.374 { 00:32:29.374 "name": "pt2", 00:32:29.374 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:29.374 "is_configured": true, 00:32:29.374 "data_offset": 2048, 00:32:29.374 "data_size": 63488 00:32:29.374 }, 00:32:29.374 { 00:32:29.374 "name": null, 00:32:29.374 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:29.374 "is_configured": false, 00:32:29.374 "data_offset": 2048, 00:32:29.374 "data_size": 63488 00:32:29.374 } 00:32:29.374 ] 00:32:29.374 }' 00:32:29.375 11:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.375 11:44:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.307 11:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:32:30.307 11:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:30.307 11:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:32:30.308 11:44:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:32:30.567 [2024-07-13 11:44:05.148418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:32:30.567 [2024-07-13 11:44:05.148654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.567 [2024-07-13 11:44:05.148826] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:32:30.567 [2024-07-13 11:44:05.148966] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.567 [2024-07-13 11:44:05.149550] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.567 [2024-07-13 11:44:05.149727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:32:30.567 [2024-07-13 11:44:05.149943] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:32:30.567 [2024-07-13 11:44:05.150082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:32:30.567 [2024-07-13 11:44:05.150348] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:32:30.567 [2024-07-13 11:44:05.150466] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:30.567 [2024-07-13 11:44:05.150605] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:32:30.567 [2024-07-13 11:44:05.154762] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:32:30.567 [2024-07-13 11:44:05.154934] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:32:30.567 [2024-07-13 11:44:05.155301] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:30.567 pt3 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.567 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.825 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:30.825 "name": "raid_bdev1", 00:32:30.825 "uuid": "9b886b18-0049-4943-8472-e722a57a3a5b", 00:32:30.825 "strip_size_kb": 64, 00:32:30.825 "state": "online", 00:32:30.825 "raid_level": "raid5f", 00:32:30.825 "superblock": true, 00:32:30.825 "num_base_bdevs": 3, 00:32:30.826 "num_base_bdevs_discovered": 2, 00:32:30.826 "num_base_bdevs_operational": 2, 00:32:30.826 "base_bdevs_list": [ 00:32:30.826 { 00:32:30.826 "name": null, 00:32:30.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.826 "is_configured": false, 00:32:30.826 "data_offset": 2048, 00:32:30.826 "data_size": 63488 00:32:30.826 }, 00:32:30.826 { 00:32:30.826 "name": "pt2", 00:32:30.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:30.826 "is_configured": true, 00:32:30.826 "data_offset": 2048, 00:32:30.826 "data_size": 63488 00:32:30.826 }, 00:32:30.826 { 00:32:30.826 "name": "pt3", 00:32:30.826 "uuid": "00000000-0000-0000-0000-000000000003", 00:32:30.826 "is_configured": true, 00:32:30.826 "data_offset": 2048, 00:32:30.826 "data_size": 63488 00:32:30.826 } 00:32:30.826 ] 00:32:30.826 }' 00:32:30.826 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:30.826 11:44:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.392 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:32:31.392 11:44:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:31.650 11:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:32:31.651 11:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:31.651 11:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:32:31.909 [2024-07-13 11:44:06.516448] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 9b886b18-0049-4943-8472-e722a57a3a5b '!=' 9b886b18-0049-4943-8472-e722a57a3a5b ']' 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 152981 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 152981 ']' 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 152981 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 152981 00:32:31.909 killing process with pid 152981 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:31.909 11:44:06 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 152981' 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 152981 00:32:31.909 11:44:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 152981 00:32:31.909 [2024-07-13 11:44:06.554590] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:31.909 [2024-07-13 11:44:06.554660] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:31.909 [2024-07-13 11:44:06.554722] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:31.909 [2024-07-13 11:44:06.554773] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:32:32.167 [2024-07-13 11:44:06.744281] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:33.100 ************************************ 00:32:33.100 END TEST raid5f_superblock_test 00:32:33.100 ************************************ 00:32:33.100 11:44:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:32:33.100 00:32:33.100 real 0m22.845s 00:32:33.100 user 0m42.737s 00:32:33.100 sys 0m2.571s 00:32:33.100 11:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:33.100 11:44:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.100 11:44:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:33.100 11:44:07 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:32:33.100 11:44:07 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:32:33.100 11:44:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:33.100 11:44:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:33.100 11:44:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:33.100 ************************************ 00:32:33.100 START TEST raid5f_rebuild_test 00:32:33.100 ************************************ 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 false false true 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:33.100 11:44:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=153750 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 153750 /var/tmp/spdk-raid.sock 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 153750 ']' 00:32:33.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:33.100 11:44:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.100 [2024-07-13 11:44:07.807621] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:33.100 [2024-07-13 11:44:07.808041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153750 ] 00:32:33.100 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:32:33.100 Zero copy mechanism will not be used. 00:32:33.358 [2024-07-13 11:44:07.978292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:33.617 [2024-07-13 11:44:08.226276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:33.875 [2024-07-13 11:44:08.417141] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:34.134 11:44:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:34.134 11:44:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:32:34.134 11:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:34.134 11:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:34.392 BaseBdev1_malloc 00:32:34.392 11:44:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:34.649 [2024-07-13 11:44:09.210391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:34.649 [2024-07-13 11:44:09.210723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.649 [2024-07-13 11:44:09.210800] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:32:34.650 [2024-07-13 11:44:09.211113] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.650 [2024-07-13 11:44:09.213283] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.650 [2024-07-13 11:44:09.213483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:34.650 BaseBdev1 00:32:34.650 11:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:34.650 11:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:34.907 BaseBdev2_malloc 00:32:34.907 11:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:35.165 [2024-07-13 11:44:09.710009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:35.165 [2024-07-13 11:44:09.710275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.165 [2024-07-13 11:44:09.710417] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:32:35.165 [2024-07-13 11:44:09.710539] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.165 [2024-07-13 11:44:09.712458] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.165 [2024-07-13 11:44:09.712622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:35.165 BaseBdev2 00:32:35.165 11:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:35.165 11:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:35.423 BaseBdev3_malloc 00:32:35.423 
11:44:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:35.423 [2024-07-13 11:44:10.156102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:35.423 [2024-07-13 11:44:10.156453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.423 [2024-07-13 11:44:10.156527] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:32:35.423 [2024-07-13 11:44:10.156806] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.423 [2024-07-13 11:44:10.159044] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.423 [2024-07-13 11:44:10.159217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:35.423 BaseBdev3 00:32:35.423 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:35.682 spare_malloc 00:32:35.682 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:35.941 spare_delay 00:32:35.941 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:36.200 [2024-07-13 11:44:10.732593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:36.200 [2024-07-13 11:44:10.732811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:36.200 [2024-07-13 11:44:10.732880] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:36.200 [2024-07-13 11:44:10.733118] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:36.200 [2024-07-13 11:44:10.735419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:36.200 [2024-07-13 11:44:10.735580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:36.200 spare 00:32:36.200 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:32:36.459 [2024-07-13 11:44:10.972692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:36.459 [2024-07-13 11:44:10.974725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:36.459 [2024-07-13 11:44:10.974925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:36.459 [2024-07-13 11:44:10.975142] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:32:36.459 [2024-07-13 11:44:10.975253] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:32:36.459 [2024-07-13 11:44:10.975417] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:32:36.459 [2024-07-13 11:44:10.979706] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:32:36.459 [2024-07-13 11:44:10.979836] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:32:36.459 [2024-07-13 11:44:10.980113] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:36.459 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:36.459 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.460 11:44:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:36.460 11:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:36.460 "name": "raid_bdev1", 00:32:36.460 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:36.460 "strip_size_kb": 64, 00:32:36.460 "state": "online", 00:32:36.460 "raid_level": "raid5f", 00:32:36.460 "superblock": false, 00:32:36.460 "num_base_bdevs": 3, 00:32:36.460 "num_base_bdevs_discovered": 3, 00:32:36.460 "num_base_bdevs_operational": 3, 00:32:36.460 "base_bdevs_list": [ 00:32:36.460 { 00:32:36.460 "name": "BaseBdev1", 00:32:36.460 "uuid": "89c4138b-60da-58a7-a2e4-a786cabea44b", 00:32:36.460 "is_configured": true, 00:32:36.460 "data_offset": 0, 00:32:36.460 "data_size": 65536 00:32:36.460 }, 00:32:36.460 { 00:32:36.460 "name": "BaseBdev2", 00:32:36.460 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:36.460 "is_configured": true, 00:32:36.460 "data_offset": 0, 00:32:36.460 "data_size": 65536 00:32:36.460 }, 00:32:36.460 { 00:32:36.460 "name": "BaseBdev3", 00:32:36.460 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:36.460 "is_configured": true, 00:32:36.460 "data_offset": 0, 00:32:36.460 "data_size": 65536 00:32:36.460 } 00:32:36.460 ] 00:32:36.460 }' 00:32:36.460 11:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:36.460 11:44:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:37.396 11:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:37.396 11:44:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:37.396 [2024-07-13 11:44:11.993766] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:37.396 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=131072 00:32:37.396 11:44:12 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.396 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:37.654 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:37.654 [2024-07-13 11:44:12.377736] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:37.914 /dev/nbd0 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:37.914 1+0 records in 00:32:37.914 1+0 records out 00:32:37.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327339 s, 12.5 MB/s 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # 
size=4096 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 128 00:32:37.914 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:32:38.173 512+0 records in 00:32:38.173 512+0 records out 00:32:38.173 67108864 bytes (67 MB, 64 MiB) copied, 0.401535 s, 167 MB/s 00:32:38.173 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:38.173 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:38.173 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:32:38.173 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:38.173 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:38.173 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:38.173 11:44:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:32:38.430 [2024-07-13 11:44:13.051595] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:38.430 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:38.687 [2024-07-13 11:44:13.420948] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.687 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.946 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:38.946 "name": "raid_bdev1", 00:32:38.946 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:38.946 "strip_size_kb": 64, 00:32:38.946 "state": "online", 00:32:38.946 "raid_level": "raid5f", 00:32:38.946 "superblock": false, 00:32:38.946 "num_base_bdevs": 3, 00:32:38.946 "num_base_bdevs_discovered": 2, 00:32:38.946 "num_base_bdevs_operational": 2, 00:32:38.946 "base_bdevs_list": [ 00:32:38.946 { 00:32:38.946 "name": null, 00:32:38.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.946 "is_configured": false, 00:32:38.946 "data_offset": 0, 00:32:38.946 "data_size": 65536 00:32:38.946 }, 00:32:38.946 { 00:32:38.946 "name": "BaseBdev2", 00:32:38.946 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:38.946 "is_configured": true, 00:32:38.946 "data_offset": 0, 00:32:38.946 "data_size": 65536 00:32:38.946 }, 00:32:38.946 { 00:32:38.946 "name": "BaseBdev3", 00:32:38.946 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:38.946 "is_configured": true, 00:32:38.946 "data_offset": 0, 00:32:38.946 "data_size": 65536 00:32:38.946 } 00:32:38.946 ] 00:32:38.946 }' 00:32:38.946 11:44:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:38.946 11:44:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.881 11:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:39.881 [2024-07-13 11:44:14.633206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:40.140 [2024-07-13 11:44:14.645786] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c930 00:32:40.140 [2024-07-13 11:44:14.651702] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:40.140 11:44:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:32:41.075 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:41.075 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:41.075 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:41.075 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:41.075 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:41.075 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.075 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.334 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:41.334 "name": "raid_bdev1", 00:32:41.334 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:41.334 "strip_size_kb": 64, 00:32:41.334 "state": "online", 00:32:41.334 "raid_level": "raid5f", 00:32:41.334 "superblock": false, 00:32:41.334 "num_base_bdevs": 3, 00:32:41.334 "num_base_bdevs_discovered": 3, 00:32:41.334 "num_base_bdevs_operational": 3, 00:32:41.334 "process": { 00:32:41.334 "type": "rebuild", 00:32:41.334 "target": "spare", 00:32:41.334 "progress": { 00:32:41.334 "blocks": 22528, 00:32:41.334 "percent": 17 00:32:41.334 } 00:32:41.334 }, 00:32:41.334 "base_bdevs_list": [ 00:32:41.334 { 00:32:41.334 "name": "spare", 00:32:41.334 "uuid": "9fb34943-926c-5846-bc41-f006a7355642", 00:32:41.334 "is_configured": true, 00:32:41.334 "data_offset": 0, 00:32:41.334 "data_size": 65536 00:32:41.334 }, 00:32:41.334 { 00:32:41.334 "name": "BaseBdev2", 00:32:41.334 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:41.334 "is_configured": true, 00:32:41.334 "data_offset": 0, 00:32:41.334 "data_size": 65536 00:32:41.334 }, 00:32:41.334 { 00:32:41.334 "name": "BaseBdev3", 00:32:41.334 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:41.334 "is_configured": true, 00:32:41.334 "data_offset": 0, 00:32:41.334 "data_size": 65536 00:32:41.334 } 00:32:41.334 ] 00:32:41.334 }' 00:32:41.334 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:41.334 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:41.334 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:41.334 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:41.335 11:44:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:41.593 [2024-07-13 11:44:16.189501] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:41.594 [2024-07-13 11:44:16.266611] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:41.594 [2024-07-13 11:44:16.266805] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:41.594 [2024-07-13 11:44:16.266883] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:41.594 [2024-07-13 11:44:16.266995] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state 
raid_bdev1 online raid5f 64 2 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:41.594 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.853 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:41.853 "name": "raid_bdev1", 00:32:41.853 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:41.853 "strip_size_kb": 64, 00:32:41.853 "state": "online", 00:32:41.853 "raid_level": "raid5f", 00:32:41.853 "superblock": false, 00:32:41.853 "num_base_bdevs": 3, 00:32:41.853 "num_base_bdevs_discovered": 2, 00:32:41.853 "num_base_bdevs_operational": 2, 00:32:41.853 "base_bdevs_list": [ 00:32:41.853 { 00:32:41.853 "name": null, 00:32:41.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.853 "is_configured": false, 00:32:41.853 "data_offset": 0, 00:32:41.853 "data_size": 65536 00:32:41.853 }, 00:32:41.853 { 00:32:41.853 "name": "BaseBdev2", 00:32:41.853 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:41.853 "is_configured": true, 00:32:41.853 "data_offset": 0, 00:32:41.853 "data_size": 65536 00:32:41.853 }, 00:32:41.853 { 00:32:41.853 "name": "BaseBdev3", 00:32:41.853 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:41.853 "is_configured": true, 00:32:41.853 "data_offset": 0, 00:32:41.853 "data_size": 65536 00:32:41.853 } 00:32:41.853 ] 00:32:41.853 }' 00:32:41.853 11:44:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:41.853 11:44:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.788 11:44:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:42.788 "name": "raid_bdev1", 00:32:42.788 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:42.788 "strip_size_kb": 64, 00:32:42.788 "state": "online", 00:32:42.788 "raid_level": "raid5f", 00:32:42.788 "superblock": false, 00:32:42.788 "num_base_bdevs": 3, 00:32:42.788 "num_base_bdevs_discovered": 2, 00:32:42.788 "num_base_bdevs_operational": 2, 00:32:42.788 "base_bdevs_list": [ 00:32:42.788 { 00:32:42.788 "name": null, 00:32:42.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:42.788 "is_configured": false, 00:32:42.788 "data_offset": 0, 00:32:42.788 "data_size": 65536 00:32:42.788 }, 00:32:42.788 { 00:32:42.788 "name": "BaseBdev2", 00:32:42.788 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:42.788 "is_configured": true, 00:32:42.788 "data_offset": 0, 00:32:42.788 "data_size": 65536 00:32:42.788 }, 00:32:42.788 { 00:32:42.788 "name": "BaseBdev3", 00:32:42.788 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:42.788 "is_configured": true, 00:32:42.788 "data_offset": 0, 00:32:42.788 "data_size": 65536 00:32:42.788 } 00:32:42.788 ] 00:32:42.788 }' 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:42.788 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:43.045 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:43.045 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:43.302 [2024-07-13 11:44:17.812869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:43.302 [2024-07-13 11:44:17.823052] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cad0 00:32:43.302 [2024-07-13 11:44:17.828848] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:43.302 11:44:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:44.237 11:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:44.237 11:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:44.237 11:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:44.237 11:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:44.237 11:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:44.237 11:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.237 11:44:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:44.496 "name": "raid_bdev1", 00:32:44.496 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:44.496 "strip_size_kb": 64, 00:32:44.496 "state": "online", 00:32:44.496 "raid_level": "raid5f", 00:32:44.496 "superblock": false, 00:32:44.496 "num_base_bdevs": 3, 00:32:44.496 "num_base_bdevs_discovered": 3, 
00:32:44.496 "num_base_bdevs_operational": 3, 00:32:44.496 "process": { 00:32:44.496 "type": "rebuild", 00:32:44.496 "target": "spare", 00:32:44.496 "progress": { 00:32:44.496 "blocks": 24576, 00:32:44.496 "percent": 18 00:32:44.496 } 00:32:44.496 }, 00:32:44.496 "base_bdevs_list": [ 00:32:44.496 { 00:32:44.496 "name": "spare", 00:32:44.496 "uuid": "9fb34943-926c-5846-bc41-f006a7355642", 00:32:44.496 "is_configured": true, 00:32:44.496 "data_offset": 0, 00:32:44.496 "data_size": 65536 00:32:44.496 }, 00:32:44.496 { 00:32:44.496 "name": "BaseBdev2", 00:32:44.496 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:44.496 "is_configured": true, 00:32:44.496 "data_offset": 0, 00:32:44.496 "data_size": 65536 00:32:44.496 }, 00:32:44.496 { 00:32:44.496 "name": "BaseBdev3", 00:32:44.496 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:44.496 "is_configured": true, 00:32:44.496 "data_offset": 0, 00:32:44.496 "data_size": 65536 00:32:44.496 } 00:32:44.496 ] 00:32:44.496 }' 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1110 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.496 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.754 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:44.754 "name": "raid_bdev1", 00:32:44.754 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:44.754 "strip_size_kb": 64, 00:32:44.754 "state": "online", 00:32:44.754 "raid_level": "raid5f", 00:32:44.754 "superblock": false, 00:32:44.754 "num_base_bdevs": 3, 00:32:44.754 "num_base_bdevs_discovered": 3, 00:32:44.754 "num_base_bdevs_operational": 3, 00:32:44.754 "process": { 00:32:44.754 "type": "rebuild", 00:32:44.754 "target": "spare", 00:32:44.754 "progress": { 00:32:44.754 "blocks": 30720, 00:32:44.754 "percent": 23 00:32:44.754 } 00:32:44.754 }, 00:32:44.754 "base_bdevs_list": [ 00:32:44.754 { 
00:32:44.754 "name": "spare", 00:32:44.754 "uuid": "9fb34943-926c-5846-bc41-f006a7355642", 00:32:44.754 "is_configured": true, 00:32:44.754 "data_offset": 0, 00:32:44.754 "data_size": 65536 00:32:44.754 }, 00:32:44.754 { 00:32:44.754 "name": "BaseBdev2", 00:32:44.755 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:44.755 "is_configured": true, 00:32:44.755 "data_offset": 0, 00:32:44.755 "data_size": 65536 00:32:44.755 }, 00:32:44.755 { 00:32:44.755 "name": "BaseBdev3", 00:32:44.755 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:44.755 "is_configured": true, 00:32:44.755 "data_offset": 0, 00:32:44.755 "data_size": 65536 00:32:44.755 } 00:32:44.755 ] 00:32:44.755 }' 00:32:44.755 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:44.755 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:44.755 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:45.013 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:45.013 11:44:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:45.949 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:45.949 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:45.949 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:45.949 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:45.949 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:45.949 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:45.949 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.949 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:46.207 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:46.208 "name": "raid_bdev1", 00:32:46.208 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:46.208 "strip_size_kb": 64, 00:32:46.208 "state": "online", 00:32:46.208 "raid_level": "raid5f", 00:32:46.208 "superblock": false, 00:32:46.208 "num_base_bdevs": 3, 00:32:46.208 "num_base_bdevs_discovered": 3, 00:32:46.208 "num_base_bdevs_operational": 3, 00:32:46.208 "process": { 00:32:46.208 "type": "rebuild", 00:32:46.208 "target": "spare", 00:32:46.208 "progress": { 00:32:46.208 "blocks": 57344, 00:32:46.208 "percent": 43 00:32:46.208 } 00:32:46.208 }, 00:32:46.208 "base_bdevs_list": [ 00:32:46.208 { 00:32:46.208 "name": "spare", 00:32:46.208 "uuid": "9fb34943-926c-5846-bc41-f006a7355642", 00:32:46.208 "is_configured": true, 00:32:46.208 "data_offset": 0, 00:32:46.208 "data_size": 65536 00:32:46.208 }, 00:32:46.208 { 00:32:46.208 "name": "BaseBdev2", 00:32:46.208 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:46.208 "is_configured": true, 00:32:46.208 "data_offset": 0, 00:32:46.208 "data_size": 65536 00:32:46.208 }, 00:32:46.208 { 00:32:46.208 "name": "BaseBdev3", 00:32:46.208 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:46.208 "is_configured": true, 00:32:46.208 "data_offset": 0, 00:32:46.208 "data_size": 65536 
00:32:46.208 } 00:32:46.208 ] 00:32:46.208 }' 00:32:46.208 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:46.208 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:46.208 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:46.208 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:46.208 11:44:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:47.145 11:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:47.145 11:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:47.145 11:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:47.145 11:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:47.145 11:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:47.145 11:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:47.145 11:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.145 11:44:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.404 11:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:47.404 "name": "raid_bdev1", 00:32:47.404 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:47.404 "strip_size_kb": 64, 00:32:47.404 "state": "online", 00:32:47.404 "raid_level": "raid5f", 00:32:47.404 "superblock": false, 00:32:47.404 "num_base_bdevs": 3, 00:32:47.404 "num_base_bdevs_discovered": 3, 00:32:47.404 "num_base_bdevs_operational": 3, 00:32:47.404 "process": { 00:32:47.404 "type": "rebuild", 00:32:47.404 "target": "spare", 00:32:47.404 "progress": { 00:32:47.404 "blocks": 86016, 00:32:47.404 "percent": 65 00:32:47.404 } 00:32:47.404 }, 00:32:47.404 "base_bdevs_list": [ 00:32:47.404 { 00:32:47.404 "name": "spare", 00:32:47.404 "uuid": "9fb34943-926c-5846-bc41-f006a7355642", 00:32:47.404 "is_configured": true, 00:32:47.404 "data_offset": 0, 00:32:47.404 "data_size": 65536 00:32:47.404 }, 00:32:47.404 { 00:32:47.404 "name": "BaseBdev2", 00:32:47.404 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:47.404 "is_configured": true, 00:32:47.404 "data_offset": 0, 00:32:47.404 "data_size": 65536 00:32:47.404 }, 00:32:47.404 { 00:32:47.404 "name": "BaseBdev3", 00:32:47.404 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:47.404 "is_configured": true, 00:32:47.404 "data_offset": 0, 00:32:47.404 "data_size": 65536 00:32:47.404 } 00:32:47.404 ] 00:32:47.404 }' 00:32:47.404 11:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:47.663 11:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:47.663 11:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:47.663 11:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:47.663 11:44:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:48.611 11:44:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:48.611 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:48.611 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:48.611 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:48.611 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:48.611 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:48.611 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.611 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.869 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:48.869 "name": "raid_bdev1", 00:32:48.869 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:48.869 "strip_size_kb": 64, 00:32:48.869 "state": "online", 00:32:48.869 "raid_level": "raid5f", 00:32:48.869 "superblock": false, 00:32:48.869 "num_base_bdevs": 3, 00:32:48.869 "num_base_bdevs_discovered": 3, 00:32:48.869 "num_base_bdevs_operational": 3, 00:32:48.869 "process": { 00:32:48.869 "type": "rebuild", 00:32:48.869 "target": "spare", 00:32:48.869 "progress": { 00:32:48.869 "blocks": 112640, 00:32:48.869 "percent": 85 00:32:48.869 } 00:32:48.869 }, 00:32:48.869 "base_bdevs_list": [ 00:32:48.869 { 00:32:48.869 "name": "spare", 00:32:48.869 "uuid": "9fb34943-926c-5846-bc41-f006a7355642", 00:32:48.869 "is_configured": true, 00:32:48.869 "data_offset": 0, 00:32:48.869 "data_size": 65536 00:32:48.869 }, 00:32:48.869 { 00:32:48.869 "name": "BaseBdev2", 00:32:48.869 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:48.869 "is_configured": true, 00:32:48.869 "data_offset": 0, 00:32:48.869 "data_size": 65536 00:32:48.869 }, 00:32:48.869 { 00:32:48.869 "name": "BaseBdev3", 00:32:48.869 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:48.869 "is_configured": true, 00:32:48.869 "data_offset": 0, 00:32:48.869 "data_size": 65536 00:32:48.869 } 00:32:48.869 ] 00:32:48.869 }' 00:32:48.869 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:48.869 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:48.869 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:48.869 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:48.869 11:44:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:49.805 [2024-07-13 11:44:24.286128] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:49.805 [2024-07-13 11:44:24.286457] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:49.805 [2024-07-13 11:44:24.286632] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:50.064 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:50.064 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:50.064 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:32:50.064 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:50.064 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:50.064 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:50.064 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.064 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.065 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:50.065 "name": "raid_bdev1", 00:32:50.065 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:50.065 "strip_size_kb": 64, 00:32:50.065 "state": "online", 00:32:50.065 "raid_level": "raid5f", 00:32:50.065 "superblock": false, 00:32:50.065 "num_base_bdevs": 3, 00:32:50.065 "num_base_bdevs_discovered": 3, 00:32:50.065 "num_base_bdevs_operational": 3, 00:32:50.065 "base_bdevs_list": [ 00:32:50.065 { 00:32:50.065 "name": "spare", 00:32:50.065 "uuid": "9fb34943-926c-5846-bc41-f006a7355642", 00:32:50.065 "is_configured": true, 00:32:50.065 "data_offset": 0, 00:32:50.065 "data_size": 65536 00:32:50.065 }, 00:32:50.065 { 00:32:50.065 "name": "BaseBdev2", 00:32:50.065 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:50.065 "is_configured": true, 00:32:50.065 "data_offset": 0, 00:32:50.065 "data_size": 65536 00:32:50.065 }, 00:32:50.065 { 00:32:50.065 "name": "BaseBdev3", 00:32:50.065 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:50.065 "is_configured": true, 00:32:50.065 "data_offset": 0, 00:32:50.065 "data_size": 65536 00:32:50.065 } 00:32:50.065 ] 00:32:50.065 }' 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.324 11:44:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.583 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:50.583 "name": "raid_bdev1", 00:32:50.583 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:50.583 "strip_size_kb": 64, 00:32:50.583 "state": "online", 00:32:50.583 "raid_level": "raid5f", 00:32:50.583 
"superblock": false, 00:32:50.583 "num_base_bdevs": 3, 00:32:50.583 "num_base_bdevs_discovered": 3, 00:32:50.583 "num_base_bdevs_operational": 3, 00:32:50.583 "base_bdevs_list": [ 00:32:50.583 { 00:32:50.583 "name": "spare", 00:32:50.583 "uuid": "9fb34943-926c-5846-bc41-f006a7355642", 00:32:50.583 "is_configured": true, 00:32:50.583 "data_offset": 0, 00:32:50.583 "data_size": 65536 00:32:50.583 }, 00:32:50.583 { 00:32:50.583 "name": "BaseBdev2", 00:32:50.583 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 00:32:50.583 "is_configured": true, 00:32:50.583 "data_offset": 0, 00:32:50.583 "data_size": 65536 00:32:50.583 }, 00:32:50.583 { 00:32:50.583 "name": "BaseBdev3", 00:32:50.583 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:50.583 "is_configured": true, 00:32:50.583 "data_offset": 0, 00:32:50.583 "data_size": 65536 00:32:50.583 } 00:32:50.583 ] 00:32:50.583 }' 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.584 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.843 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:50.843 "name": "raid_bdev1", 00:32:50.843 "uuid": "2827c9b1-2f97-4cbf-a131-dc94e4367829", 00:32:50.843 "strip_size_kb": 64, 00:32:50.843 "state": "online", 00:32:50.843 "raid_level": "raid5f", 00:32:50.844 "superblock": false, 00:32:50.844 "num_base_bdevs": 3, 00:32:50.844 "num_base_bdevs_discovered": 3, 00:32:50.844 "num_base_bdevs_operational": 3, 00:32:50.844 "base_bdevs_list": [ 00:32:50.844 { 00:32:50.844 "name": "spare", 00:32:50.844 "uuid": "9fb34943-926c-5846-bc41-f006a7355642", 00:32:50.844 "is_configured": true, 00:32:50.844 "data_offset": 0, 00:32:50.844 "data_size": 65536 00:32:50.844 }, 00:32:50.844 { 00:32:50.844 "name": "BaseBdev2", 00:32:50.844 "uuid": "a3775700-47bb-57f9-8449-b095bf2f18e5", 
00:32:50.844 "is_configured": true, 00:32:50.844 "data_offset": 0, 00:32:50.844 "data_size": 65536 00:32:50.844 }, 00:32:50.844 { 00:32:50.844 "name": "BaseBdev3", 00:32:50.844 "uuid": "9f7cfe3b-38ca-5433-89ce-33874bce6d14", 00:32:50.844 "is_configured": true, 00:32:50.844 "data_offset": 0, 00:32:50.844 "data_size": 65536 00:32:50.844 } 00:32:50.844 ] 00:32:50.844 }' 00:32:50.844 11:44:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:50.844 11:44:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.781 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:51.781 [2024-07-13 11:44:26.528674] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:51.781 [2024-07-13 11:44:26.528826] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:51.781 [2024-07-13 11:44:26.529005] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:51.781 [2024-07-13 11:44:26.529229] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:51.781 [2024-07-13 11:44:26.529339] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:32:52.040 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.040 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:52.298 11:44:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:52.557 /dev/nbd0 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 
00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:52.557 1+0 records in 00:32:52.557 1+0 records out 00:32:52.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320358 s, 12.8 MB/s 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:52.557 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:52.816 /dev/nbd1 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:52.816 1+0 records in 00:32:52.816 1+0 records out 00:32:52.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549976 s, 7.4 MB/s 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 
00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:32:52.816 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:52.817 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:52.817 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:32:52.817 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:52.817 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:52.817 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:52.817 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:53.076 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:53.076 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:53.076 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:53.076 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:53.076 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:53.076 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:53.076 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:32:53.335 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:32:53.335 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:53.335 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:53.335 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:53.335 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:53.335 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:53.336 11:44:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 153750 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 153750 ']' 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 153750 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 153750 00:32:53.595 killing process with pid 153750 00:32:53.595 Received shutdown signal, test time was about 60.000000 seconds 00:32:53.595 00:32:53.595 Latency(us) 00:32:53.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.595 =================================================================================================================== 00:32:53.595 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 153750' 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 153750 00:32:53.595 11:44:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 153750 00:32:53.595 [2024-07-13 11:44:28.256059] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:53.854 [2024-07-13 11:44:28.521263] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:54.790 ************************************ 00:32:54.790 END TEST raid5f_rebuild_test 00:32:54.790 ************************************ 00:32:54.790 11:44:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:32:54.790 00:32:54.790 real 0m21.808s 00:32:54.790 user 0m33.180s 00:32:54.790 sys 0m2.386s 00:32:54.790 11:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:54.790 11:44:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:55.049 11:44:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:55.049 11:44:29 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:32:55.049 11:44:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:55.049 11:44:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:55.049 11:44:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:32:55.049 ************************************ 00:32:55.049 START TEST raid5f_rebuild_test_sb 00:32:55.049 ************************************ 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 3 true false true 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=154418 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@597 -- # waitforlisten 154418 /var/tmp/spdk-raid.sock 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:55.049 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 154418 ']' 00:32:55.050 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:55.050 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:55.050 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:55.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:55.050 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:55.050 11:44:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:55.050 [2024-07-13 11:44:29.680461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:32:55.050 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:55.050 Zero copy mechanism will not be used. 00:32:55.050 [2024-07-13 11:44:29.680688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154418 ] 00:32:55.309 [2024-07-13 11:44:29.847745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:55.309 [2024-07-13 11:44:30.026675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.581 [2024-07-13 11:44:30.216548] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:55.857 11:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:55.857 11:44:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:32:55.857 11:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:55.857 11:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:56.124 BaseBdev1_malloc 00:32:56.124 11:44:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:56.382 [2024-07-13 11:44:31.086279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:56.382 [2024-07-13 11:44:31.086385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:56.382 [2024-07-13 11:44:31.086428] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:32:56.382 [2024-07-13 11:44:31.086449] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:56.382 [2024-07-13 11:44:31.088694] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:56.382 [2024-07-13 11:44:31.088743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:32:56.382 BaseBdev1 00:32:56.382 11:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:56.382 11:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:56.640 BaseBdev2_malloc 00:32:56.640 11:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:56.898 [2024-07-13 11:44:31.506062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:56.898 [2024-07-13 11:44:31.506156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:56.898 [2024-07-13 11:44:31.506192] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:32:56.898 [2024-07-13 11:44:31.506211] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:56.898 [2024-07-13 11:44:31.508415] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:56.898 [2024-07-13 11:44:31.508463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:56.898 BaseBdev2 00:32:56.898 11:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:56.898 11:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:57.157 BaseBdev3_malloc 00:32:57.157 11:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:57.157 [2024-07-13 11:44:31.906770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:57.157 [2024-07-13 11:44:31.906864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:57.157 [2024-07-13 11:44:31.906900] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:32:57.157 [2024-07-13 11:44:31.906926] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:57.157 [2024-07-13 11:44:31.909014] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:57.157 [2024-07-13 11:44:31.909083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:57.157 BaseBdev3 00:32:57.416 11:44:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:32:57.416 spare_malloc 00:32:57.416 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:57.674 spare_delay 00:32:57.674 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:57.934 [2024-07-13 11:44:32.555335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:57.934 [2024-07-13 11:44:32.555413] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:57.934 [2024-07-13 11:44:32.555450] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:32:57.934 [2024-07-13 11:44:32.555475] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:57.934 [2024-07-13 11:44:32.557669] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:57.934 [2024-07-13 11:44:32.557722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:57.934 spare 00:32:57.934 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:32:58.192 [2024-07-13 11:44:32.739425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:58.192 [2024-07-13 11:44:32.741285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:58.192 [2024-07-13 11:44:32.741354] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:58.192 [2024-07-13 11:44:32.741561] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:32:58.192 [2024-07-13 11:44:32.741584] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:58.192 [2024-07-13 11:44:32.741700] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:32:58.192 [2024-07-13 11:44:32.745914] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:32:58.192 [2024-07-13 11:44:32.745938] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:32:58.192 [2024-07-13 11:44:32.746091] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:58.192 "name": "raid_bdev1", 00:32:58.192 
"uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:32:58.192 "strip_size_kb": 64, 00:32:58.192 "state": "online", 00:32:58.192 "raid_level": "raid5f", 00:32:58.192 "superblock": true, 00:32:58.192 "num_base_bdevs": 3, 00:32:58.192 "num_base_bdevs_discovered": 3, 00:32:58.192 "num_base_bdevs_operational": 3, 00:32:58.192 "base_bdevs_list": [ 00:32:58.192 { 00:32:58.192 "name": "BaseBdev1", 00:32:58.192 "uuid": "123cf5af-8854-5123-b326-330d6d74bdef", 00:32:58.192 "is_configured": true, 00:32:58.192 "data_offset": 2048, 00:32:58.192 "data_size": 63488 00:32:58.192 }, 00:32:58.192 { 00:32:58.192 "name": "BaseBdev2", 00:32:58.192 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:32:58.192 "is_configured": true, 00:32:58.192 "data_offset": 2048, 00:32:58.192 "data_size": 63488 00:32:58.192 }, 00:32:58.192 { 00:32:58.192 "name": "BaseBdev3", 00:32:58.192 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:32:58.192 "is_configured": true, 00:32:58.192 "data_offset": 2048, 00:32:58.192 "data_size": 63488 00:32:58.192 } 00:32:58.192 ] 00:32:58.192 }' 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:58.192 11:44:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:59.139 11:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:59.139 11:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:59.139 [2024-07-13 11:44:33.867311] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:59.139 11:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=126976 00:32:59.139 11:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.139 11:44:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:59.398 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:59.656 [2024-07-13 11:44:34.359304] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:59.656 /dev/nbd0 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:59.656 1+0 records in 00:32:59.656 1+0 records out 00:32:59.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302329 s, 13.5 MB/s 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:32:59.656 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:59.915 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:59.915 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:32:59.915 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:59.915 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:59.915 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:32:59.915 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:32:59.915 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 128 00:32:59.915 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:33:00.174 496+0 records in 00:33:00.174 496+0 records out 00:33:00.174 65011712 bytes (65 MB, 62 MiB) copied, 0.389311 s, 167 MB/s 00:33:00.174 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:00.174 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:00.174 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:00.174 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:00.174 11:44:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:00.174 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:00.174 11:44:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:00.433 [2024-07-13 11:44:35.018178] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:00.433 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:00.692 [2024-07-13 11:44:35.371381] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.692 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.951 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:00.951 "name": 
"raid_bdev1", 00:33:00.951 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:00.951 "strip_size_kb": 64, 00:33:00.951 "state": "online", 00:33:00.951 "raid_level": "raid5f", 00:33:00.951 "superblock": true, 00:33:00.951 "num_base_bdevs": 3, 00:33:00.951 "num_base_bdevs_discovered": 2, 00:33:00.951 "num_base_bdevs_operational": 2, 00:33:00.951 "base_bdevs_list": [ 00:33:00.951 { 00:33:00.951 "name": null, 00:33:00.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.951 "is_configured": false, 00:33:00.951 "data_offset": 2048, 00:33:00.951 "data_size": 63488 00:33:00.951 }, 00:33:00.951 { 00:33:00.951 "name": "BaseBdev2", 00:33:00.951 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:00.951 "is_configured": true, 00:33:00.951 "data_offset": 2048, 00:33:00.951 "data_size": 63488 00:33:00.951 }, 00:33:00.951 { 00:33:00.951 "name": "BaseBdev3", 00:33:00.951 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:00.951 "is_configured": true, 00:33:00.951 "data_offset": 2048, 00:33:00.951 "data_size": 63488 00:33:00.951 } 00:33:00.951 ] 00:33:00.951 }' 00:33:00.951 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:00.951 11:44:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:01.887 11:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:01.887 [2024-07-13 11:44:36.579628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:01.887 [2024-07-13 11:44:36.590842] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a980 00:33:01.887 [2024-07-13 11:44:36.596579] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:01.887 11:44:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:03.263 "name": "raid_bdev1", 00:33:03.263 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:03.263 "strip_size_kb": 64, 00:33:03.263 "state": "online", 00:33:03.263 "raid_level": "raid5f", 00:33:03.263 "superblock": true, 00:33:03.263 "num_base_bdevs": 3, 00:33:03.263 "num_base_bdevs_discovered": 3, 00:33:03.263 "num_base_bdevs_operational": 3, 00:33:03.263 "process": { 00:33:03.263 "type": "rebuild", 00:33:03.263 "target": "spare", 00:33:03.263 "progress": { 00:33:03.263 "blocks": 24576, 00:33:03.263 "percent": 19 00:33:03.263 } 00:33:03.263 }, 00:33:03.263 "base_bdevs_list": [ 00:33:03.263 { 
00:33:03.263 "name": "spare", 00:33:03.263 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:03.263 "is_configured": true, 00:33:03.263 "data_offset": 2048, 00:33:03.263 "data_size": 63488 00:33:03.263 }, 00:33:03.263 { 00:33:03.263 "name": "BaseBdev2", 00:33:03.263 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:03.263 "is_configured": true, 00:33:03.263 "data_offset": 2048, 00:33:03.263 "data_size": 63488 00:33:03.263 }, 00:33:03.263 { 00:33:03.263 "name": "BaseBdev3", 00:33:03.263 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:03.263 "is_configured": true, 00:33:03.263 "data_offset": 2048, 00:33:03.263 "data_size": 63488 00:33:03.263 } 00:33:03.263 ] 00:33:03.263 }' 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:03.263 11:44:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:03.522 [2024-07-13 11:44:38.210129] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:03.522 [2024-07-13 11:44:38.211219] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:03.522 [2024-07-13 11:44:38.211287] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:03.522 [2024-07-13 11:44:38.211306] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:03.522 [2024-07-13 11:44:38.211314] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.522 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.780 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:03.780 "name": "raid_bdev1", 
00:33:03.780 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:03.780 "strip_size_kb": 64, 00:33:03.780 "state": "online", 00:33:03.780 "raid_level": "raid5f", 00:33:03.780 "superblock": true, 00:33:03.780 "num_base_bdevs": 3, 00:33:03.780 "num_base_bdevs_discovered": 2, 00:33:03.780 "num_base_bdevs_operational": 2, 00:33:03.780 "base_bdevs_list": [ 00:33:03.780 { 00:33:03.780 "name": null, 00:33:03.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.780 "is_configured": false, 00:33:03.780 "data_offset": 2048, 00:33:03.780 "data_size": 63488 00:33:03.780 }, 00:33:03.780 { 00:33:03.780 "name": "BaseBdev2", 00:33:03.780 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:03.781 "is_configured": true, 00:33:03.781 "data_offset": 2048, 00:33:03.781 "data_size": 63488 00:33:03.781 }, 00:33:03.781 { 00:33:03.781 "name": "BaseBdev3", 00:33:03.781 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:03.781 "is_configured": true, 00:33:03.781 "data_offset": 2048, 00:33:03.781 "data_size": 63488 00:33:03.781 } 00:33:03.781 ] 00:33:03.781 }' 00:33:03.781 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:03.781 11:44:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:04.348 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:04.348 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:04.348 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:04.348 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:04.348 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:04.348 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.348 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.606 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:04.606 "name": "raid_bdev1", 00:33:04.606 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:04.606 "strip_size_kb": 64, 00:33:04.606 "state": "online", 00:33:04.606 "raid_level": "raid5f", 00:33:04.606 "superblock": true, 00:33:04.606 "num_base_bdevs": 3, 00:33:04.606 "num_base_bdevs_discovered": 2, 00:33:04.606 "num_base_bdevs_operational": 2, 00:33:04.606 "base_bdevs_list": [ 00:33:04.606 { 00:33:04.606 "name": null, 00:33:04.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.606 "is_configured": false, 00:33:04.606 "data_offset": 2048, 00:33:04.606 "data_size": 63488 00:33:04.606 }, 00:33:04.606 { 00:33:04.606 "name": "BaseBdev2", 00:33:04.606 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:04.606 "is_configured": true, 00:33:04.606 "data_offset": 2048, 00:33:04.606 "data_size": 63488 00:33:04.606 }, 00:33:04.606 { 00:33:04.606 "name": "BaseBdev3", 00:33:04.606 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:04.606 "is_configured": true, 00:33:04.606 "data_offset": 2048, 00:33:04.606 "data_size": 63488 00:33:04.606 } 00:33:04.606 ] 00:33:04.606 }' 00:33:04.606 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:04.865 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == 
\n\o\n\e ]] 00:33:04.865 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:04.865 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:04.865 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:05.123 [2024-07-13 11:44:39.679051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:05.123 [2024-07-13 11:44:39.689016] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:33:05.123 [2024-07-13 11:44:39.694385] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:05.123 11:44:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:06.058 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:06.058 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:06.058 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:06.058 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:06.058 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:06.058 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.058 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.316 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:06.316 "name": "raid_bdev1", 00:33:06.316 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:06.316 "strip_size_kb": 64, 00:33:06.316 "state": "online", 00:33:06.316 "raid_level": "raid5f", 00:33:06.316 "superblock": true, 00:33:06.316 "num_base_bdevs": 3, 00:33:06.316 "num_base_bdevs_discovered": 3, 00:33:06.316 "num_base_bdevs_operational": 3, 00:33:06.316 "process": { 00:33:06.316 "type": "rebuild", 00:33:06.316 "target": "spare", 00:33:06.316 "progress": { 00:33:06.316 "blocks": 24576, 00:33:06.316 "percent": 19 00:33:06.316 } 00:33:06.316 }, 00:33:06.316 "base_bdevs_list": [ 00:33:06.316 { 00:33:06.316 "name": "spare", 00:33:06.316 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:06.316 "is_configured": true, 00:33:06.316 "data_offset": 2048, 00:33:06.316 "data_size": 63488 00:33:06.316 }, 00:33:06.316 { 00:33:06.316 "name": "BaseBdev2", 00:33:06.316 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:06.316 "is_configured": true, 00:33:06.316 "data_offset": 2048, 00:33:06.316 "data_size": 63488 00:33:06.316 }, 00:33:06.316 { 00:33:06.316 "name": "BaseBdev3", 00:33:06.316 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:06.316 "is_configured": true, 00:33:06.316 "data_offset": 2048, 00:33:06.316 "data_size": 63488 00:33:06.316 } 00:33:06.316 ] 00:33:06.316 }' 00:33:06.316 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:06.316 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:06.316 11:44:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:33:06.316 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:06.316 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:33:06.316 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:33:06.316 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1132 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.317 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.574 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:06.574 "name": "raid_bdev1", 00:33:06.574 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:06.574 "strip_size_kb": 64, 00:33:06.574 "state": "online", 00:33:06.574 "raid_level": "raid5f", 00:33:06.574 "superblock": true, 00:33:06.574 "num_base_bdevs": 3, 00:33:06.574 "num_base_bdevs_discovered": 3, 00:33:06.574 "num_base_bdevs_operational": 3, 00:33:06.574 "process": { 00:33:06.574 "type": "rebuild", 00:33:06.574 "target": "spare", 00:33:06.574 "progress": { 00:33:06.574 "blocks": 30720, 00:33:06.574 "percent": 24 00:33:06.574 } 00:33:06.574 }, 00:33:06.574 "base_bdevs_list": [ 00:33:06.574 { 00:33:06.574 "name": "spare", 00:33:06.574 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:06.574 "is_configured": true, 00:33:06.574 "data_offset": 2048, 00:33:06.574 "data_size": 63488 00:33:06.574 }, 00:33:06.574 { 00:33:06.574 "name": "BaseBdev2", 00:33:06.574 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:06.574 "is_configured": true, 00:33:06.574 "data_offset": 2048, 00:33:06.574 "data_size": 63488 00:33:06.574 }, 00:33:06.574 { 00:33:06.574 "name": "BaseBdev3", 00:33:06.574 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:06.574 "is_configured": true, 00:33:06.574 "data_offset": 2048, 00:33:06.574 "data_size": 63488 00:33:06.574 } 00:33:06.574 ] 00:33:06.574 }' 00:33:06.574 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:06.831 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:06.832 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:06.832 11:44:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:06.832 11:44:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:07.767 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:07.767 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:07.767 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:07.767 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:07.767 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:07.767 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:07.767 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.767 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:08.025 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:08.025 "name": "raid_bdev1", 00:33:08.025 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:08.025 "strip_size_kb": 64, 00:33:08.025 "state": "online", 00:33:08.025 "raid_level": "raid5f", 00:33:08.025 "superblock": true, 00:33:08.025 "num_base_bdevs": 3, 00:33:08.025 "num_base_bdevs_discovered": 3, 00:33:08.025 "num_base_bdevs_operational": 3, 00:33:08.025 "process": { 00:33:08.025 "type": "rebuild", 00:33:08.025 "target": "spare", 00:33:08.025 "progress": { 00:33:08.025 "blocks": 59392, 00:33:08.025 "percent": 46 00:33:08.025 } 00:33:08.025 }, 00:33:08.025 "base_bdevs_list": [ 00:33:08.025 { 00:33:08.025 "name": "spare", 00:33:08.025 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:08.025 "is_configured": true, 00:33:08.025 "data_offset": 2048, 00:33:08.025 "data_size": 63488 00:33:08.025 }, 00:33:08.025 { 00:33:08.025 "name": "BaseBdev2", 00:33:08.025 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:08.025 "is_configured": true, 00:33:08.025 "data_offset": 2048, 00:33:08.025 "data_size": 63488 00:33:08.025 }, 00:33:08.025 { 00:33:08.025 "name": "BaseBdev3", 00:33:08.025 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:08.025 "is_configured": true, 00:33:08.025 "data_offset": 2048, 00:33:08.025 "data_size": 63488 00:33:08.025 } 00:33:08.025 ] 00:33:08.025 }' 00:33:08.025 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:08.025 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:08.025 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:08.025 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:08.025 11:44:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:09.398 11:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:09.398 11:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:09.398 11:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:09.398 11:44:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:09.398 11:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:09.399 11:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:09.399 11:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.399 11:44:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.399 11:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:09.399 "name": "raid_bdev1", 00:33:09.399 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:09.399 "strip_size_kb": 64, 00:33:09.399 "state": "online", 00:33:09.399 "raid_level": "raid5f", 00:33:09.399 "superblock": true, 00:33:09.399 "num_base_bdevs": 3, 00:33:09.399 "num_base_bdevs_discovered": 3, 00:33:09.399 "num_base_bdevs_operational": 3, 00:33:09.399 "process": { 00:33:09.399 "type": "rebuild", 00:33:09.399 "target": "spare", 00:33:09.399 "progress": { 00:33:09.399 "blocks": 86016, 00:33:09.399 "percent": 67 00:33:09.399 } 00:33:09.399 }, 00:33:09.399 "base_bdevs_list": [ 00:33:09.399 { 00:33:09.399 "name": "spare", 00:33:09.399 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:09.399 "is_configured": true, 00:33:09.399 "data_offset": 2048, 00:33:09.399 "data_size": 63488 00:33:09.399 }, 00:33:09.399 { 00:33:09.399 "name": "BaseBdev2", 00:33:09.399 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:09.399 "is_configured": true, 00:33:09.399 "data_offset": 2048, 00:33:09.399 "data_size": 63488 00:33:09.399 }, 00:33:09.399 { 00:33:09.399 "name": "BaseBdev3", 00:33:09.399 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:09.399 "is_configured": true, 00:33:09.399 "data_offset": 2048, 00:33:09.399 "data_size": 63488 00:33:09.399 } 00:33:09.399 ] 00:33:09.399 }' 00:33:09.399 11:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:09.399 11:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:09.399 11:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:09.399 11:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:09.399 11:44:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:10.774 "name": "raid_bdev1", 00:33:10.774 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:10.774 "strip_size_kb": 64, 00:33:10.774 "state": "online", 00:33:10.774 "raid_level": "raid5f", 00:33:10.774 "superblock": true, 00:33:10.774 "num_base_bdevs": 3, 00:33:10.774 "num_base_bdevs_discovered": 3, 00:33:10.774 "num_base_bdevs_operational": 3, 00:33:10.774 "process": { 00:33:10.774 "type": "rebuild", 00:33:10.774 "target": "spare", 00:33:10.774 "progress": { 00:33:10.774 "blocks": 114688, 00:33:10.774 "percent": 90 00:33:10.774 } 00:33:10.774 }, 00:33:10.774 "base_bdevs_list": [ 00:33:10.774 { 00:33:10.774 "name": "spare", 00:33:10.774 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:10.774 "is_configured": true, 00:33:10.774 "data_offset": 2048, 00:33:10.774 "data_size": 63488 00:33:10.774 }, 00:33:10.774 { 00:33:10.774 "name": "BaseBdev2", 00:33:10.774 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:10.774 "is_configured": true, 00:33:10.774 "data_offset": 2048, 00:33:10.774 "data_size": 63488 00:33:10.774 }, 00:33:10.774 { 00:33:10.774 "name": "BaseBdev3", 00:33:10.774 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:10.774 "is_configured": true, 00:33:10.774 "data_offset": 2048, 00:33:10.774 "data_size": 63488 00:33:10.774 } 00:33:10.774 ] 00:33:10.774 }' 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:10.774 11:44:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:11.341 [2024-07-13 11:44:45.949535] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:11.341 [2024-07-13 11:44:45.949609] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:11.341 [2024-07-13 11:44:45.949750] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:11.907 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:11.907 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:11.907 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:11.907 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:11.907 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:11.907 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:11.907 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.907 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:12.166 "name": "raid_bdev1", 00:33:12.166 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 
00:33:12.166 "strip_size_kb": 64, 00:33:12.166 "state": "online", 00:33:12.166 "raid_level": "raid5f", 00:33:12.166 "superblock": true, 00:33:12.166 "num_base_bdevs": 3, 00:33:12.166 "num_base_bdevs_discovered": 3, 00:33:12.166 "num_base_bdevs_operational": 3, 00:33:12.166 "base_bdevs_list": [ 00:33:12.166 { 00:33:12.166 "name": "spare", 00:33:12.166 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:12.166 "is_configured": true, 00:33:12.166 "data_offset": 2048, 00:33:12.166 "data_size": 63488 00:33:12.166 }, 00:33:12.166 { 00:33:12.166 "name": "BaseBdev2", 00:33:12.166 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:12.166 "is_configured": true, 00:33:12.166 "data_offset": 2048, 00:33:12.166 "data_size": 63488 00:33:12.166 }, 00:33:12.166 { 00:33:12.166 "name": "BaseBdev3", 00:33:12.166 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:12.166 "is_configured": true, 00:33:12.166 "data_offset": 2048, 00:33:12.166 "data_size": 63488 00:33:12.166 } 00:33:12.166 ] 00:33:12.166 }' 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.166 11:44:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:12.425 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:12.425 "name": "raid_bdev1", 00:33:12.425 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:12.425 "strip_size_kb": 64, 00:33:12.425 "state": "online", 00:33:12.425 "raid_level": "raid5f", 00:33:12.425 "superblock": true, 00:33:12.425 "num_base_bdevs": 3, 00:33:12.425 "num_base_bdevs_discovered": 3, 00:33:12.425 "num_base_bdevs_operational": 3, 00:33:12.425 "base_bdevs_list": [ 00:33:12.425 { 00:33:12.425 "name": "spare", 00:33:12.425 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:12.425 "is_configured": true, 00:33:12.425 "data_offset": 2048, 00:33:12.425 "data_size": 63488 00:33:12.425 }, 00:33:12.425 { 00:33:12.425 "name": "BaseBdev2", 00:33:12.425 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:12.425 "is_configured": true, 00:33:12.425 "data_offset": 2048, 00:33:12.425 "data_size": 63488 00:33:12.425 }, 00:33:12.425 { 00:33:12.425 "name": "BaseBdev3", 00:33:12.425 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:12.425 "is_configured": true, 00:33:12.425 "data_offset": 
2048, 00:33:12.425 "data_size": 63488 00:33:12.425 } 00:33:12.425 ] 00:33:12.425 }' 00:33:12.425 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:12.683 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:12.683 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:12.683 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:12.683 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:12.683 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:12.683 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:12.683 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:12.683 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:12.684 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:12.684 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:12.684 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:12.684 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:12.684 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:12.684 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.684 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:12.942 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:12.942 "name": "raid_bdev1", 00:33:12.942 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:12.942 "strip_size_kb": 64, 00:33:12.942 "state": "online", 00:33:12.942 "raid_level": "raid5f", 00:33:12.942 "superblock": true, 00:33:12.942 "num_base_bdevs": 3, 00:33:12.942 "num_base_bdevs_discovered": 3, 00:33:12.942 "num_base_bdevs_operational": 3, 00:33:12.942 "base_bdevs_list": [ 00:33:12.942 { 00:33:12.942 "name": "spare", 00:33:12.942 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:12.942 "is_configured": true, 00:33:12.942 "data_offset": 2048, 00:33:12.942 "data_size": 63488 00:33:12.942 }, 00:33:12.942 { 00:33:12.942 "name": "BaseBdev2", 00:33:12.942 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:12.942 "is_configured": true, 00:33:12.942 "data_offset": 2048, 00:33:12.942 "data_size": 63488 00:33:12.942 }, 00:33:12.942 { 00:33:12.942 "name": "BaseBdev3", 00:33:12.942 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:12.942 "is_configured": true, 00:33:12.942 "data_offset": 2048, 00:33:12.942 "data_size": 63488 00:33:12.942 } 00:33:12.942 ] 00:33:12.942 }' 00:33:12.942 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:12.942 11:44:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.510 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
raid_bdev1 00:33:13.769 [2024-07-13 11:44:48.407623] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:13.769 [2024-07-13 11:44:48.407651] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:13.769 [2024-07-13 11:44:48.407744] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:13.769 [2024-07-13 11:44:48.407822] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:13.769 [2024-07-13 11:44:48.407833] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:33:13.769 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.769 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:14.028 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:14.287 /dev/nbd0 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:14.287 11:44:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:14.287 1+0 records in 00:33:14.287 1+0 records out 00:33:14.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284256 s, 14.4 MB/s 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:14.287 11:44:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:33:14.546 /dev/nbd1 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:14.546 1+0 records in 00:33:14.546 1+0 records out 00:33:14.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003523 s, 11.6 MB/s 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:14.546 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:14.546 11:44:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:14.805 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:33:14.805 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:14.805 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:33:14.805 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:14.805 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:14.805 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:14.805 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:15.064 11:44:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:15.323 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:15.323 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:15.323 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:15.323 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:15.323 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:15.323 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:15.323 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:33:15.581 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:33:15.581 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:15.581 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:15.581 11:44:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:15.581 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:15.581 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:33:15.582 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:15.840 [2024-07-13 11:44:50.576697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:15.840 [2024-07-13 11:44:50.576781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:15.840 [2024-07-13 11:44:50.576837] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:15.840 [2024-07-13 11:44:50.576868] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:15.840 [2024-07-13 11:44:50.579139] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:15.840 [2024-07-13 11:44:50.579188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:15.840 [2024-07-13 11:44:50.579300] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:15.840 [2024-07-13 11:44:50.579366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:15.840 [2024-07-13 11:44:50.579516] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:15.840 [2024-07-13 11:44:50.579627] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:15.840 spare 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:15.840 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.099 [2024-07-13 11:44:50.679713] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b480 00:33:16.099 [2024-07-13 11:44:50.679733] bdev_raid.c:1695:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 126976, blocklen 512 00:33:16.099 [2024-07-13 11:44:50.679844] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004aa30 00:33:16.099 [2024-07-13 11:44:50.683879] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b480 00:33:16.099 [2024-07-13 11:44:50.683902] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b480 00:33:16.099 [2024-07-13 11:44:50.684048] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.099 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:16.099 "name": "raid_bdev1", 00:33:16.099 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:16.099 "strip_size_kb": 64, 00:33:16.099 "state": "online", 00:33:16.099 "raid_level": "raid5f", 00:33:16.099 "superblock": true, 00:33:16.099 "num_base_bdevs": 3, 00:33:16.099 "num_base_bdevs_discovered": 3, 00:33:16.099 "num_base_bdevs_operational": 3, 00:33:16.099 "base_bdevs_list": [ 00:33:16.099 { 00:33:16.099 "name": "spare", 00:33:16.099 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:16.099 "is_configured": true, 00:33:16.099 "data_offset": 2048, 00:33:16.099 "data_size": 63488 00:33:16.099 }, 00:33:16.099 { 00:33:16.099 "name": "BaseBdev2", 00:33:16.099 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:16.099 "is_configured": true, 00:33:16.099 "data_offset": 2048, 00:33:16.099 "data_size": 63488 00:33:16.099 }, 00:33:16.099 { 00:33:16.099 "name": "BaseBdev3", 00:33:16.099 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:16.099 "is_configured": true, 00:33:16.099 "data_offset": 2048, 00:33:16.099 "data_size": 63488 00:33:16.099 } 00:33:16.099 ] 00:33:16.099 }' 00:33:16.099 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:16.099 11:44:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:17.036 "name": "raid_bdev1", 00:33:17.036 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:17.036 "strip_size_kb": 64, 00:33:17.036 "state": "online", 00:33:17.036 "raid_level": "raid5f", 00:33:17.036 "superblock": true, 00:33:17.036 "num_base_bdevs": 3, 00:33:17.036 "num_base_bdevs_discovered": 3, 00:33:17.036 "num_base_bdevs_operational": 3, 00:33:17.036 "base_bdevs_list": [ 00:33:17.036 { 00:33:17.036 "name": "spare", 00:33:17.036 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:17.036 "is_configured": true, 00:33:17.036 "data_offset": 2048, 00:33:17.036 "data_size": 63488 
00:33:17.036 }, 00:33:17.036 { 00:33:17.036 "name": "BaseBdev2", 00:33:17.036 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:17.036 "is_configured": true, 00:33:17.036 "data_offset": 2048, 00:33:17.036 "data_size": 63488 00:33:17.036 }, 00:33:17.036 { 00:33:17.036 "name": "BaseBdev3", 00:33:17.036 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:17.036 "is_configured": true, 00:33:17.036 "data_offset": 2048, 00:33:17.036 "data_size": 63488 00:33:17.036 } 00:33:17.036 ] 00:33:17.036 }' 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.036 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:17.295 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:33:17.295 11:44:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:17.555 [2024-07-13 11:44:52.149252] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.555 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.814 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:17.814 "name": "raid_bdev1", 00:33:17.814 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:17.814 "strip_size_kb": 64, 00:33:17.814 "state": "online", 00:33:17.814 "raid_level": "raid5f", 00:33:17.814 "superblock": true, 00:33:17.814 "num_base_bdevs": 3, 00:33:17.814 "num_base_bdevs_discovered": 2, 00:33:17.814 "num_base_bdevs_operational": 2, 
00:33:17.814 "base_bdevs_list": [ 00:33:17.814 { 00:33:17.814 "name": null, 00:33:17.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.814 "is_configured": false, 00:33:17.814 "data_offset": 2048, 00:33:17.814 "data_size": 63488 00:33:17.814 }, 00:33:17.814 { 00:33:17.814 "name": "BaseBdev2", 00:33:17.814 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:17.814 "is_configured": true, 00:33:17.814 "data_offset": 2048, 00:33:17.814 "data_size": 63488 00:33:17.814 }, 00:33:17.814 { 00:33:17.814 "name": "BaseBdev3", 00:33:17.814 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:17.814 "is_configured": true, 00:33:17.814 "data_offset": 2048, 00:33:17.814 "data_size": 63488 00:33:17.814 } 00:33:17.814 ] 00:33:17.814 }' 00:33:17.814 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:17.814 11:44:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.378 11:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:18.636 [2024-07-13 11:44:53.333526] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:18.636 [2024-07-13 11:44:53.333660] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:18.636 [2024-07-13 11:44:53.333676] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:18.636 [2024-07-13 11:44:53.333741] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:18.636 [2024-07-13 11:44:53.344414] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004abd0 00:33:18.636 [2024-07-13 11:44:53.350106] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:18.636 11:44:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:20.013 "name": "raid_bdev1", 00:33:20.013 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:20.013 "strip_size_kb": 64, 00:33:20.013 "state": "online", 00:33:20.013 "raid_level": "raid5f", 00:33:20.013 "superblock": true, 00:33:20.013 "num_base_bdevs": 3, 00:33:20.013 "num_base_bdevs_discovered": 3, 00:33:20.013 "num_base_bdevs_operational": 3, 00:33:20.013 "process": { 00:33:20.013 "type": "rebuild", 00:33:20.013 "target": "spare", 00:33:20.013 "progress": { 00:33:20.013 "blocks": 22528, 
00:33:20.013 "percent": 17 00:33:20.013 } 00:33:20.013 }, 00:33:20.013 "base_bdevs_list": [ 00:33:20.013 { 00:33:20.013 "name": "spare", 00:33:20.013 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:20.013 "is_configured": true, 00:33:20.013 "data_offset": 2048, 00:33:20.013 "data_size": 63488 00:33:20.013 }, 00:33:20.013 { 00:33:20.013 "name": "BaseBdev2", 00:33:20.013 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:20.013 "is_configured": true, 00:33:20.013 "data_offset": 2048, 00:33:20.013 "data_size": 63488 00:33:20.013 }, 00:33:20.013 { 00:33:20.013 "name": "BaseBdev3", 00:33:20.013 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:20.013 "is_configured": true, 00:33:20.013 "data_offset": 2048, 00:33:20.013 "data_size": 63488 00:33:20.013 } 00:33:20.013 ] 00:33:20.013 }' 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.013 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:20.271 [2024-07-13 11:44:54.899498] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:20.271 [2024-07-13 11:44:54.964557] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:20.271 [2024-07-13 11:44:54.964627] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:20.271 [2024-07-13 11:44:54.964647] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:20.271 [2024-07-13 11:44:54.964655] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.271 11:44:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.529 11:44:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:20.529 "name": "raid_bdev1", 00:33:20.529 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:20.529 "strip_size_kb": 64, 00:33:20.529 "state": "online", 00:33:20.529 "raid_level": "raid5f", 00:33:20.529 "superblock": true, 00:33:20.529 "num_base_bdevs": 3, 00:33:20.529 "num_base_bdevs_discovered": 2, 00:33:20.529 "num_base_bdevs_operational": 2, 00:33:20.529 "base_bdevs_list": [ 00:33:20.529 { 00:33:20.529 "name": null, 00:33:20.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.529 "is_configured": false, 00:33:20.529 "data_offset": 2048, 00:33:20.529 "data_size": 63488 00:33:20.529 }, 00:33:20.529 { 00:33:20.529 "name": "BaseBdev2", 00:33:20.529 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:20.529 "is_configured": true, 00:33:20.529 "data_offset": 2048, 00:33:20.529 "data_size": 63488 00:33:20.529 }, 00:33:20.529 { 00:33:20.530 "name": "BaseBdev3", 00:33:20.530 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:20.530 "is_configured": true, 00:33:20.530 "data_offset": 2048, 00:33:20.530 "data_size": 63488 00:33:20.530 } 00:33:20.530 ] 00:33:20.530 }' 00:33:20.530 11:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:20.530 11:44:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:21.466 11:44:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:21.466 [2024-07-13 11:44:56.130116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:21.466 [2024-07-13 11:44:56.130200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:21.466 [2024-07-13 11:44:56.130236] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:21.466 [2024-07-13 11:44:56.130270] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:21.466 [2024-07-13 11:44:56.130795] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:21.466 [2024-07-13 11:44:56.130833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:21.466 [2024-07-13 11:44:56.130963] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:21.466 [2024-07-13 11:44:56.130981] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:21.466 [2024-07-13 11:44:56.130990] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:21.466 [2024-07-13 11:44:56.131036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:21.466 [2024-07-13 11:44:56.140594] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004af10 00:33:21.466 spare 00:33:21.466 [2024-07-13 11:44:56.146228] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:21.466 11:44:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:33:22.403 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:22.403 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:22.403 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:22.403 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:22.403 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:22.403 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.403 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.662 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:22.662 "name": "raid_bdev1", 00:33:22.662 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:22.662 "strip_size_kb": 64, 00:33:22.662 "state": "online", 00:33:22.662 "raid_level": "raid5f", 00:33:22.663 "superblock": true, 00:33:22.663 "num_base_bdevs": 3, 00:33:22.663 "num_base_bdevs_discovered": 3, 00:33:22.663 "num_base_bdevs_operational": 3, 00:33:22.663 "process": { 00:33:22.663 "type": "rebuild", 00:33:22.663 "target": "spare", 00:33:22.663 "progress": { 00:33:22.663 "blocks": 24576, 00:33:22.663 "percent": 19 00:33:22.663 } 00:33:22.663 }, 00:33:22.663 "base_bdevs_list": [ 00:33:22.663 { 00:33:22.663 "name": "spare", 00:33:22.663 "uuid": "b5639b53-8ffc-5472-9cea-5310e472079c", 00:33:22.663 "is_configured": true, 00:33:22.663 "data_offset": 2048, 00:33:22.663 "data_size": 63488 00:33:22.663 }, 00:33:22.663 { 00:33:22.663 "name": "BaseBdev2", 00:33:22.663 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:22.663 "is_configured": true, 00:33:22.663 "data_offset": 2048, 00:33:22.663 "data_size": 63488 00:33:22.663 }, 00:33:22.663 { 00:33:22.663 "name": "BaseBdev3", 00:33:22.663 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:22.663 "is_configured": true, 00:33:22.663 "data_offset": 2048, 00:33:22.663 "data_size": 63488 00:33:22.663 } 00:33:22.663 ] 00:33:22.663 }' 00:33:22.663 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:22.921 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:22.921 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:22.921 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:22.921 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:23.181 [2024-07-13 11:44:57.743586] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:23.181 
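Note: verify_raid_bdev_process above boils down to two jq filters over bdev_raid_get_bdevs output, '.process.type // "none"' and '.process.target // "none"', plus the progress counters while a rebuild is in flight. A small hand-rolled poll of the same fields, under the same socket and bdev name as this run; the progress formatting is mine:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  while true; do
      info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      type=$(jq -r '.process.type // "none"' <<< "$info")
      [ "$type" = "rebuild" ] || break           # the process block disappears once the rebuild ends or is aborted
      jq -r '.process.progress | "\(.blocks) blocks (\(.percent)%)"' <<< "$info"
      sleep 1
  done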
[2024-07-13 11:44:57.760602] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:23.181 [2024-07-13 11:44:57.760668] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:23.181 [2024-07-13 11:44:57.760687] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:23.181 [2024-07-13 11:44:57.760695] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.181 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.451 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:23.451 "name": "raid_bdev1", 00:33:23.451 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:23.451 "strip_size_kb": 64, 00:33:23.451 "state": "online", 00:33:23.451 "raid_level": "raid5f", 00:33:23.451 "superblock": true, 00:33:23.451 "num_base_bdevs": 3, 00:33:23.451 "num_base_bdevs_discovered": 2, 00:33:23.451 "num_base_bdevs_operational": 2, 00:33:23.451 "base_bdevs_list": [ 00:33:23.451 { 00:33:23.451 "name": null, 00:33:23.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.451 "is_configured": false, 00:33:23.451 "data_offset": 2048, 00:33:23.451 "data_size": 63488 00:33:23.451 }, 00:33:23.451 { 00:33:23.451 "name": "BaseBdev2", 00:33:23.451 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:23.451 "is_configured": true, 00:33:23.451 "data_offset": 2048, 00:33:23.451 "data_size": 63488 00:33:23.451 }, 00:33:23.451 { 00:33:23.451 "name": "BaseBdev3", 00:33:23.451 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:23.451 "is_configured": true, 00:33:23.451 "data_offset": 2048, 00:33:23.451 "data_size": 63488 00:33:23.451 } 00:33:23.451 ] 00:33:23.451 }' 00:33:23.451 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:23.451 11:44:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:24.057 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:24.057 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:24.057 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:24.057 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:24.057 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:24.057 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.057 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.315 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:24.315 "name": "raid_bdev1", 00:33:24.315 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:24.315 "strip_size_kb": 64, 00:33:24.315 "state": "online", 00:33:24.315 "raid_level": "raid5f", 00:33:24.315 "superblock": true, 00:33:24.315 "num_base_bdevs": 3, 00:33:24.315 "num_base_bdevs_discovered": 2, 00:33:24.315 "num_base_bdevs_operational": 2, 00:33:24.315 "base_bdevs_list": [ 00:33:24.315 { 00:33:24.315 "name": null, 00:33:24.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:24.315 "is_configured": false, 00:33:24.315 "data_offset": 2048, 00:33:24.315 "data_size": 63488 00:33:24.315 }, 00:33:24.315 { 00:33:24.315 "name": "BaseBdev2", 00:33:24.315 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:24.315 "is_configured": true, 00:33:24.315 "data_offset": 2048, 00:33:24.315 "data_size": 63488 00:33:24.315 }, 00:33:24.315 { 00:33:24.315 "name": "BaseBdev3", 00:33:24.315 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:24.315 "is_configured": true, 00:33:24.315 "data_offset": 2048, 00:33:24.315 "data_size": 63488 00:33:24.315 } 00:33:24.315 ] 00:33:24.315 }' 00:33:24.315 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:24.315 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:24.315 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:24.315 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:24.315 11:44:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:24.573 11:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:24.831 [2024-07-13 11:44:59.501800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:24.832 [2024-07-13 11:44:59.501882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:24.832 [2024-07-13 11:44:59.501924] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:33:24.832 [2024-07-13 11:44:59.501949] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:24.832 [2024-07-13 11:44:59.502469] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:24.832 [2024-07-13 11:44:59.502507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:24.832 [2024-07-13 11:44:59.502620] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:24.832 [2024-07-13 11:44:59.502638] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:24.832 [2024-07-13 11:44:59.502646] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:24.832 BaseBdev1 00:33:24.832 11:44:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:25.765 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:26.023 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.023 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.023 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:26.023 "name": "raid_bdev1", 00:33:26.023 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:26.023 "strip_size_kb": 64, 00:33:26.023 "state": "online", 00:33:26.023 "raid_level": "raid5f", 00:33:26.023 "superblock": true, 00:33:26.023 "num_base_bdevs": 3, 00:33:26.023 "num_base_bdevs_discovered": 2, 00:33:26.023 "num_base_bdevs_operational": 2, 00:33:26.023 "base_bdevs_list": [ 00:33:26.023 { 00:33:26.023 "name": null, 00:33:26.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:26.023 "is_configured": false, 00:33:26.023 "data_offset": 2048, 00:33:26.023 "data_size": 63488 00:33:26.023 }, 00:33:26.023 { 00:33:26.023 "name": "BaseBdev2", 00:33:26.023 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:26.023 "is_configured": true, 00:33:26.023 "data_offset": 2048, 00:33:26.023 "data_size": 63488 00:33:26.023 }, 00:33:26.023 { 00:33:26.023 "name": "BaseBdev3", 00:33:26.023 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:26.023 "is_configured": true, 00:33:26.023 "data_offset": 2048, 00:33:26.023 "data_size": 63488 00:33:26.023 } 00:33:26.023 ] 00:33:26.023 }' 00:33:26.023 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:26.023 11:45:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:26.956 "name": "raid_bdev1", 00:33:26.956 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:26.956 "strip_size_kb": 64, 00:33:26.956 "state": "online", 00:33:26.956 "raid_level": "raid5f", 00:33:26.956 "superblock": true, 00:33:26.956 "num_base_bdevs": 3, 00:33:26.956 "num_base_bdevs_discovered": 2, 00:33:26.956 "num_base_bdevs_operational": 2, 00:33:26.956 "base_bdevs_list": [ 00:33:26.956 { 00:33:26.956 "name": null, 00:33:26.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:26.956 "is_configured": false, 00:33:26.956 "data_offset": 2048, 00:33:26.956 "data_size": 63488 00:33:26.956 }, 00:33:26.956 { 00:33:26.956 "name": "BaseBdev2", 00:33:26.956 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:26.956 "is_configured": true, 00:33:26.956 "data_offset": 2048, 00:33:26.956 "data_size": 63488 00:33:26.956 }, 00:33:26.956 { 00:33:26.956 "name": "BaseBdev3", 00:33:26.956 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:26.956 "is_configured": true, 00:33:26.956 "data_offset": 2048, 00:33:26.956 "data_size": 63488 00:33:26.956 } 00:33:26.956 ] 00:33:26.956 }' 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:26.956 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:27.213 [2024-07-13 11:45:01.912379] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:27.213 [2024-07-13 11:45:01.912486] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:27.213 [2024-07-13 11:45:01.912500] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:27.213 request: 00:33:27.213 { 00:33:27.213 "base_bdev": "BaseBdev1", 00:33:27.213 "raid_bdev": "raid_bdev1", 00:33:27.213 "method": "bdev_raid_add_base_bdev", 00:33:27.213 "req_id": 1 00:33:27.213 } 00:33:27.213 Got JSON-RPC error response 00:33:27.213 response: 00:33:27.213 { 00:33:27.213 "code": -22, 00:33:27.213 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:27.213 } 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:27.213 11:45:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.585 11:45:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:28.585 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:28.585 "name": 
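Note: this is the negative path: BaseBdev1 was re-created from BaseBdev1_malloc, so its raid superblock no longer matches raid_bdev1's uuid, and an explicit re-add is expected to be rejected (this run returns JSON-RPC error -22, "Failed to add base bdev to RAID bdev: Invalid argument"). The harness asserts the failure through its NOT wrapper; a rough hand-run equivalent, relying on rpc.py exiting non-zero on an RPC error as it does above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  if $RPC bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
      echo "unexpected success: stale BaseBdev1 was accepted" >&2
      exit 1
  fi
  # the raid bdev should still be online with 2 of 3 base bdevs
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'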
"raid_bdev1", 00:33:28.585 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:28.585 "strip_size_kb": 64, 00:33:28.585 "state": "online", 00:33:28.585 "raid_level": "raid5f", 00:33:28.585 "superblock": true, 00:33:28.585 "num_base_bdevs": 3, 00:33:28.585 "num_base_bdevs_discovered": 2, 00:33:28.585 "num_base_bdevs_operational": 2, 00:33:28.585 "base_bdevs_list": [ 00:33:28.585 { 00:33:28.585 "name": null, 00:33:28.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:28.585 "is_configured": false, 00:33:28.585 "data_offset": 2048, 00:33:28.585 "data_size": 63488 00:33:28.585 }, 00:33:28.585 { 00:33:28.585 "name": "BaseBdev2", 00:33:28.585 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:28.585 "is_configured": true, 00:33:28.585 "data_offset": 2048, 00:33:28.585 "data_size": 63488 00:33:28.585 }, 00:33:28.585 { 00:33:28.585 "name": "BaseBdev3", 00:33:28.585 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:28.585 "is_configured": true, 00:33:28.585 "data_offset": 2048, 00:33:28.585 "data_size": 63488 00:33:28.585 } 00:33:28.585 ] 00:33:28.585 }' 00:33:28.585 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:28.585 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:29.150 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:29.150 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:29.150 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:29.150 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:29.150 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:29.150 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.150 11:45:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.409 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:29.409 "name": "raid_bdev1", 00:33:29.409 "uuid": "3f423c9f-5897-4bca-93ea-476c549cb290", 00:33:29.409 "strip_size_kb": 64, 00:33:29.409 "state": "online", 00:33:29.409 "raid_level": "raid5f", 00:33:29.409 "superblock": true, 00:33:29.409 "num_base_bdevs": 3, 00:33:29.409 "num_base_bdevs_discovered": 2, 00:33:29.409 "num_base_bdevs_operational": 2, 00:33:29.409 "base_bdevs_list": [ 00:33:29.409 { 00:33:29.409 "name": null, 00:33:29.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:29.409 "is_configured": false, 00:33:29.409 "data_offset": 2048, 00:33:29.409 "data_size": 63488 00:33:29.409 }, 00:33:29.409 { 00:33:29.409 "name": "BaseBdev2", 00:33:29.409 "uuid": "5cb4053c-05dd-5d5b-9410-f85b253d51ba", 00:33:29.409 "is_configured": true, 00:33:29.409 "data_offset": 2048, 00:33:29.409 "data_size": 63488 00:33:29.409 }, 00:33:29.409 { 00:33:29.409 "name": "BaseBdev3", 00:33:29.409 "uuid": "1abf0d35-7cb6-5d55-837c-d8ebc862e98a", 00:33:29.409 "is_configured": true, 00:33:29.409 "data_offset": 2048, 00:33:29.409 "data_size": 63488 00:33:29.409 } 00:33:29.409 ] 00:33:29.409 }' 00:33:29.409 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:29.409 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- 
# [[ none == \n\o\n\e ]] 00:33:29.409 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 154418 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 154418 ']' 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 154418 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 154418 00:33:29.667 killing process with pid 154418 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 154418' 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 154418 00:33:29.667 11:45:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 154418 00:33:29.667 Received shutdown signal, test time was about 60.000000 seconds 00:33:29.667 00:33:29.667 Latency(us) 00:33:29.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.667 =================================================================================================================== 00:33:29.667 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:29.667 [2024-07-13 11:45:04.191995] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:29.667 [2024-07-13 11:45:04.192096] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:29.667 [2024-07-13 11:45:04.192157] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:29.667 [2024-07-13 11:45:04.192168] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b480 name raid_bdev1, state offline 00:33:29.925 [2024-07-13 11:45:04.457189] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:30.861 ************************************ 00:33:30.861 END TEST raid5f_rebuild_test_sb 00:33:30.861 ************************************ 00:33:30.861 11:45:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:33:30.861 00:33:30.861 real 0m35.870s 00:33:30.861 user 0m57.104s 00:33:30.861 sys 0m3.512s 00:33:30.861 11:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:30.861 11:45:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:30.861 11:45:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:30.861 11:45:05 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:33:30.861 11:45:05 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:33:30.861 11:45:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:33:30.861 11:45:05 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:33:30.861 11:45:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:30.861 ************************************ 00:33:30.861 START TEST raid5f_state_function_test 00:33:30.861 ************************************ 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 false 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:33:30.861 11:45:05 
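Note: the state-function test starting here drives a 4-disk raid5f configuration through its "configuring" state: Existed_Raid is created before any base bdevs exist, then malloc base bdevs are registered one at a time and the reported state is re-checked after each step. A condensed sketch of the first two steps against the bdev_svc RPC socket used in this run; the jq filters are mine, the RPC commands and arguments are the ones the harness issues below:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # create the raid bdev first; all four base bdevs are still missing
  $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'      # expect "configuring"
  # register the first base bdev (32 MiB malloc disk, 512-byte blocks) and re-check
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'   # expect 1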
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=155411 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 155411' 00:33:30.861 Process raid pid: 155411 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 155411 /var/tmp/spdk-raid.sock 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 155411 ']' 00:33:30.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:30.861 11:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:30.861 [2024-07-13 11:45:05.605751] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:33:30.861 [2024-07-13 11:45:05.605949] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.119 [2024-07-13 11:45:05.775452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.377 [2024-07-13 11:45:05.958258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.636 [2024-07-13 11:45:06.147677] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:31.894 11:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:31.894 11:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:33:31.894 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:32.152 [2024-07-13 11:45:06.771034] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:32.152 [2024-07-13 11:45:06.771127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:32.152 [2024-07-13 11:45:06.771141] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:32.152 [2024-07-13 11:45:06.771164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:32.152 [2024-07-13 11:45:06.771173] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:32.152 [2024-07-13 11:45:06.771188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist 
now 00:33:32.152 [2024-07-13 11:45:06.771195] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:32.152 [2024-07-13 11:45:06.771218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.152 11:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:32.411 11:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.411 "name": "Existed_Raid", 00:33:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.411 "strip_size_kb": 64, 00:33:32.411 "state": "configuring", 00:33:32.411 "raid_level": "raid5f", 00:33:32.411 "superblock": false, 00:33:32.411 "num_base_bdevs": 4, 00:33:32.411 "num_base_bdevs_discovered": 0, 00:33:32.411 "num_base_bdevs_operational": 4, 00:33:32.411 "base_bdevs_list": [ 00:33:32.411 { 00:33:32.411 "name": "BaseBdev1", 00:33:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.411 "is_configured": false, 00:33:32.411 "data_offset": 0, 00:33:32.411 "data_size": 0 00:33:32.411 }, 00:33:32.411 { 00:33:32.411 "name": "BaseBdev2", 00:33:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.411 "is_configured": false, 00:33:32.411 "data_offset": 0, 00:33:32.411 "data_size": 0 00:33:32.411 }, 00:33:32.411 { 00:33:32.411 "name": "BaseBdev3", 00:33:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.411 "is_configured": false, 00:33:32.411 "data_offset": 0, 00:33:32.411 "data_size": 0 00:33:32.411 }, 00:33:32.411 { 00:33:32.411 "name": "BaseBdev4", 00:33:32.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.411 "is_configured": false, 00:33:32.411 "data_offset": 0, 00:33:32.411 "data_size": 0 00:33:32.411 } 00:33:32.411 ] 00:33:32.411 }' 00:33:32.411 11:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.411 11:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:32.979 11:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:33.237 [2024-07-13 11:45:07.907148] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:33.237 [2024-07-13 11:45:07.907178] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:33:33.237 11:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:33.495 [2024-07-13 11:45:08.091187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:33.495 [2024-07-13 11:45:08.091233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:33.495 [2024-07-13 11:45:08.091243] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:33.495 [2024-07-13 11:45:08.091283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:33.495 [2024-07-13 11:45:08.091293] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:33.495 [2024-07-13 11:45:08.091323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:33.495 [2024-07-13 11:45:08.091331] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:33.495 [2024-07-13 11:45:08.091352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:33.495 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:33.754 [2024-07-13 11:45:08.316529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:33.754 BaseBdev1 00:33:33.754 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:33:33.754 11:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:33:33.754 11:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:33.754 11:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:33.754 11:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:33.754 11:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:33.754 11:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:33.754 11:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:34.013 [ 00:33:34.013 { 00:33:34.013 "name": "BaseBdev1", 00:33:34.013 "aliases": [ 00:33:34.013 "e7ce9fa9-672b-467b-9fc5-4c6aefd08073" 00:33:34.013 ], 00:33:34.013 "product_name": "Malloc disk", 00:33:34.013 "block_size": 512, 00:33:34.013 "num_blocks": 65536, 00:33:34.013 "uuid": "e7ce9fa9-672b-467b-9fc5-4c6aefd08073", 00:33:34.013 "assigned_rate_limits": { 00:33:34.013 "rw_ios_per_sec": 0, 00:33:34.013 "rw_mbytes_per_sec": 0, 00:33:34.013 "r_mbytes_per_sec": 0, 00:33:34.013 
"w_mbytes_per_sec": 0 00:33:34.013 }, 00:33:34.013 "claimed": true, 00:33:34.013 "claim_type": "exclusive_write", 00:33:34.013 "zoned": false, 00:33:34.013 "supported_io_types": { 00:33:34.013 "read": true, 00:33:34.013 "write": true, 00:33:34.013 "unmap": true, 00:33:34.013 "flush": true, 00:33:34.013 "reset": true, 00:33:34.013 "nvme_admin": false, 00:33:34.013 "nvme_io": false, 00:33:34.013 "nvme_io_md": false, 00:33:34.013 "write_zeroes": true, 00:33:34.013 "zcopy": true, 00:33:34.013 "get_zone_info": false, 00:33:34.013 "zone_management": false, 00:33:34.013 "zone_append": false, 00:33:34.013 "compare": false, 00:33:34.013 "compare_and_write": false, 00:33:34.013 "abort": true, 00:33:34.013 "seek_hole": false, 00:33:34.013 "seek_data": false, 00:33:34.013 "copy": true, 00:33:34.013 "nvme_iov_md": false 00:33:34.013 }, 00:33:34.013 "memory_domains": [ 00:33:34.013 { 00:33:34.013 "dma_device_id": "system", 00:33:34.013 "dma_device_type": 1 00:33:34.013 }, 00:33:34.013 { 00:33:34.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:34.013 "dma_device_type": 2 00:33:34.013 } 00:33:34.013 ], 00:33:34.013 "driver_specific": {} 00:33:34.013 } 00:33:34.013 ] 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.013 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:34.272 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.272 "name": "Existed_Raid", 00:33:34.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.272 "strip_size_kb": 64, 00:33:34.272 "state": "configuring", 00:33:34.272 "raid_level": "raid5f", 00:33:34.272 "superblock": false, 00:33:34.272 "num_base_bdevs": 4, 00:33:34.272 "num_base_bdevs_discovered": 1, 00:33:34.272 "num_base_bdevs_operational": 4, 00:33:34.272 "base_bdevs_list": [ 00:33:34.272 { 00:33:34.272 "name": "BaseBdev1", 00:33:34.272 "uuid": "e7ce9fa9-672b-467b-9fc5-4c6aefd08073", 00:33:34.272 "is_configured": true, 00:33:34.272 "data_offset": 0, 00:33:34.272 "data_size": 65536 00:33:34.272 }, 00:33:34.272 { 00:33:34.272 "name": 
"BaseBdev2", 00:33:34.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.272 "is_configured": false, 00:33:34.272 "data_offset": 0, 00:33:34.272 "data_size": 0 00:33:34.272 }, 00:33:34.272 { 00:33:34.272 "name": "BaseBdev3", 00:33:34.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.272 "is_configured": false, 00:33:34.272 "data_offset": 0, 00:33:34.272 "data_size": 0 00:33:34.272 }, 00:33:34.272 { 00:33:34.272 "name": "BaseBdev4", 00:33:34.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.272 "is_configured": false, 00:33:34.272 "data_offset": 0, 00:33:34.272 "data_size": 0 00:33:34.272 } 00:33:34.272 ] 00:33:34.272 }' 00:33:34.272 11:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.272 11:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.840 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:35.098 [2024-07-13 11:45:09.744826] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:35.098 [2024-07-13 11:45:09.744860] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:33:35.098 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:35.357 [2024-07-13 11:45:09.928887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:35.357 [2024-07-13 11:45:09.930755] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:35.357 [2024-07-13 11:45:09.930807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:35.357 [2024-07-13 11:45:09.930818] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:35.357 [2024-07-13 11:45:09.930843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:35.357 [2024-07-13 11:45:09.930866] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:33:35.357 [2024-07-13 11:45:09.930897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:35.357 11:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:35.615 11:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:35.615 "name": "Existed_Raid", 00:33:35.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.615 "strip_size_kb": 64, 00:33:35.615 "state": "configuring", 00:33:35.615 "raid_level": "raid5f", 00:33:35.615 "superblock": false, 00:33:35.615 "num_base_bdevs": 4, 00:33:35.615 "num_base_bdevs_discovered": 1, 00:33:35.615 "num_base_bdevs_operational": 4, 00:33:35.615 "base_bdevs_list": [ 00:33:35.616 { 00:33:35.616 "name": "BaseBdev1", 00:33:35.616 "uuid": "e7ce9fa9-672b-467b-9fc5-4c6aefd08073", 00:33:35.616 "is_configured": true, 00:33:35.616 "data_offset": 0, 00:33:35.616 "data_size": 65536 00:33:35.616 }, 00:33:35.616 { 00:33:35.616 "name": "BaseBdev2", 00:33:35.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.616 "is_configured": false, 00:33:35.616 "data_offset": 0, 00:33:35.616 "data_size": 0 00:33:35.616 }, 00:33:35.616 { 00:33:35.616 "name": "BaseBdev3", 00:33:35.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.616 "is_configured": false, 00:33:35.616 "data_offset": 0, 00:33:35.616 "data_size": 0 00:33:35.616 }, 00:33:35.616 { 00:33:35.616 "name": "BaseBdev4", 00:33:35.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.616 "is_configured": false, 00:33:35.616 "data_offset": 0, 00:33:35.616 "data_size": 0 00:33:35.616 } 00:33:35.616 ] 00:33:35.616 }' 00:33:35.616 11:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:35.616 11:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:36.182 11:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:36.441 [2024-07-13 11:45:11.067752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:36.441 BaseBdev2 00:33:36.441 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:36.441 11:45:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:33:36.441 11:45:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:36.441 11:45:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:36.441 11:45:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:36.441 11:45:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:36.441 11:45:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:36.699 11:45:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:36.699 [ 00:33:36.699 { 00:33:36.699 "name": "BaseBdev2", 00:33:36.699 "aliases": [ 00:33:36.699 "b387d1b8-6b06-4dd8-8a08-aae8d75eebd9" 00:33:36.699 ], 00:33:36.699 "product_name": "Malloc disk", 00:33:36.699 "block_size": 512, 00:33:36.699 "num_blocks": 65536, 00:33:36.699 "uuid": "b387d1b8-6b06-4dd8-8a08-aae8d75eebd9", 00:33:36.699 "assigned_rate_limits": { 00:33:36.699 "rw_ios_per_sec": 0, 00:33:36.699 "rw_mbytes_per_sec": 0, 00:33:36.699 "r_mbytes_per_sec": 0, 00:33:36.699 "w_mbytes_per_sec": 0 00:33:36.699 }, 00:33:36.699 "claimed": true, 00:33:36.699 "claim_type": "exclusive_write", 00:33:36.700 "zoned": false, 00:33:36.700 "supported_io_types": { 00:33:36.700 "read": true, 00:33:36.700 "write": true, 00:33:36.700 "unmap": true, 00:33:36.700 "flush": true, 00:33:36.700 "reset": true, 00:33:36.700 "nvme_admin": false, 00:33:36.700 "nvme_io": false, 00:33:36.700 "nvme_io_md": false, 00:33:36.700 "write_zeroes": true, 00:33:36.700 "zcopy": true, 00:33:36.700 "get_zone_info": false, 00:33:36.700 "zone_management": false, 00:33:36.700 "zone_append": false, 00:33:36.700 "compare": false, 00:33:36.700 "compare_and_write": false, 00:33:36.700 "abort": true, 00:33:36.700 "seek_hole": false, 00:33:36.700 "seek_data": false, 00:33:36.700 "copy": true, 00:33:36.700 "nvme_iov_md": false 00:33:36.700 }, 00:33:36.700 "memory_domains": [ 00:33:36.700 { 00:33:36.700 "dma_device_id": "system", 00:33:36.700 "dma_device_type": 1 00:33:36.700 }, 00:33:36.700 { 00:33:36.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:36.700 "dma_device_type": 2 00:33:36.700 } 00:33:36.700 ], 00:33:36.700 "driver_specific": {} 00:33:36.700 } 00:33:36.700 ] 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:36.700 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:36.958 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.958 11:45:11 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:37.217 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:37.217 "name": "Existed_Raid", 00:33:37.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.217 "strip_size_kb": 64, 00:33:37.217 "state": "configuring", 00:33:37.217 "raid_level": "raid5f", 00:33:37.217 "superblock": false, 00:33:37.217 "num_base_bdevs": 4, 00:33:37.217 "num_base_bdevs_discovered": 2, 00:33:37.217 "num_base_bdevs_operational": 4, 00:33:37.217 "base_bdevs_list": [ 00:33:37.217 { 00:33:37.217 "name": "BaseBdev1", 00:33:37.217 "uuid": "e7ce9fa9-672b-467b-9fc5-4c6aefd08073", 00:33:37.217 "is_configured": true, 00:33:37.217 "data_offset": 0, 00:33:37.217 "data_size": 65536 00:33:37.217 }, 00:33:37.217 { 00:33:37.217 "name": "BaseBdev2", 00:33:37.217 "uuid": "b387d1b8-6b06-4dd8-8a08-aae8d75eebd9", 00:33:37.217 "is_configured": true, 00:33:37.217 "data_offset": 0, 00:33:37.217 "data_size": 65536 00:33:37.217 }, 00:33:37.217 { 00:33:37.217 "name": "BaseBdev3", 00:33:37.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.217 "is_configured": false, 00:33:37.217 "data_offset": 0, 00:33:37.217 "data_size": 0 00:33:37.217 }, 00:33:37.217 { 00:33:37.217 "name": "BaseBdev4", 00:33:37.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.217 "is_configured": false, 00:33:37.217 "data_offset": 0, 00:33:37.217 "data_size": 0 00:33:37.217 } 00:33:37.217 ] 00:33:37.217 }' 00:33:37.217 11:45:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:37.217 11:45:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:37.783 11:45:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:38.042 [2024-07-13 11:45:12.607050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:38.042 BaseBdev3 00:33:38.042 11:45:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:33:38.042 11:45:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:33:38.042 11:45:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:38.042 11:45:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:38.042 11:45:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:38.042 11:45:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:38.042 11:45:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:38.299 11:45:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:38.299 [ 00:33:38.299 { 00:33:38.299 "name": "BaseBdev3", 00:33:38.299 "aliases": [ 00:33:38.299 "cfa6c760-ae44-472e-a362-61763a43ff26" 00:33:38.299 ], 00:33:38.299 "product_name": "Malloc disk", 00:33:38.299 "block_size": 512, 00:33:38.299 "num_blocks": 65536, 00:33:38.299 "uuid": "cfa6c760-ae44-472e-a362-61763a43ff26", 00:33:38.299 "assigned_rate_limits": { 00:33:38.299 
"rw_ios_per_sec": 0, 00:33:38.299 "rw_mbytes_per_sec": 0, 00:33:38.299 "r_mbytes_per_sec": 0, 00:33:38.299 "w_mbytes_per_sec": 0 00:33:38.299 }, 00:33:38.299 "claimed": true, 00:33:38.299 "claim_type": "exclusive_write", 00:33:38.299 "zoned": false, 00:33:38.299 "supported_io_types": { 00:33:38.299 "read": true, 00:33:38.299 "write": true, 00:33:38.299 "unmap": true, 00:33:38.299 "flush": true, 00:33:38.299 "reset": true, 00:33:38.299 "nvme_admin": false, 00:33:38.299 "nvme_io": false, 00:33:38.299 "nvme_io_md": false, 00:33:38.299 "write_zeroes": true, 00:33:38.299 "zcopy": true, 00:33:38.299 "get_zone_info": false, 00:33:38.299 "zone_management": false, 00:33:38.299 "zone_append": false, 00:33:38.299 "compare": false, 00:33:38.299 "compare_and_write": false, 00:33:38.299 "abort": true, 00:33:38.299 "seek_hole": false, 00:33:38.299 "seek_data": false, 00:33:38.299 "copy": true, 00:33:38.299 "nvme_iov_md": false 00:33:38.299 }, 00:33:38.299 "memory_domains": [ 00:33:38.299 { 00:33:38.299 "dma_device_id": "system", 00:33:38.299 "dma_device_type": 1 00:33:38.299 }, 00:33:38.299 { 00:33:38.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:38.299 "dma_device_type": 2 00:33:38.299 } 00:33:38.299 ], 00:33:38.299 "driver_specific": {} 00:33:38.299 } 00:33:38.299 ] 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:38.556 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:38.814 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:38.814 "name": "Existed_Raid", 00:33:38.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:38.814 "strip_size_kb": 64, 00:33:38.814 "state": "configuring", 00:33:38.814 "raid_level": "raid5f", 00:33:38.814 "superblock": false, 00:33:38.814 "num_base_bdevs": 4, 00:33:38.814 "num_base_bdevs_discovered": 3, 00:33:38.814 
"num_base_bdevs_operational": 4, 00:33:38.814 "base_bdevs_list": [ 00:33:38.814 { 00:33:38.814 "name": "BaseBdev1", 00:33:38.814 "uuid": "e7ce9fa9-672b-467b-9fc5-4c6aefd08073", 00:33:38.814 "is_configured": true, 00:33:38.814 "data_offset": 0, 00:33:38.814 "data_size": 65536 00:33:38.814 }, 00:33:38.814 { 00:33:38.814 "name": "BaseBdev2", 00:33:38.814 "uuid": "b387d1b8-6b06-4dd8-8a08-aae8d75eebd9", 00:33:38.814 "is_configured": true, 00:33:38.814 "data_offset": 0, 00:33:38.814 "data_size": 65536 00:33:38.814 }, 00:33:38.814 { 00:33:38.814 "name": "BaseBdev3", 00:33:38.814 "uuid": "cfa6c760-ae44-472e-a362-61763a43ff26", 00:33:38.814 "is_configured": true, 00:33:38.814 "data_offset": 0, 00:33:38.814 "data_size": 65536 00:33:38.814 }, 00:33:38.814 { 00:33:38.814 "name": "BaseBdev4", 00:33:38.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:38.814 "is_configured": false, 00:33:38.814 "data_offset": 0, 00:33:38.814 "data_size": 0 00:33:38.814 } 00:33:38.814 ] 00:33:38.814 }' 00:33:38.814 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:38.814 11:45:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:39.381 11:45:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:39.639 [2024-07-13 11:45:14.238882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:39.639 [2024-07-13 11:45:14.238964] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:33:39.639 [2024-07-13 11:45:14.238976] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:39.639 [2024-07-13 11:45:14.239089] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:33:39.639 [2024-07-13 11:45:14.244660] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:33:39.639 [2024-07-13 11:45:14.244684] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:33:39.639 [2024-07-13 11:45:14.244944] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:39.639 BaseBdev4 00:33:39.639 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:33:39.639 11:45:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:33:39.639 11:45:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:39.639 11:45:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:39.639 11:45:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:39.639 11:45:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:39.639 11:45:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:39.897 11:45:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:40.155 [ 00:33:40.155 { 00:33:40.155 "name": "BaseBdev4", 00:33:40.155 "aliases": [ 00:33:40.155 "0cf69328-aaeb-496a-9898-174234318d48" 
00:33:40.155 ], 00:33:40.155 "product_name": "Malloc disk", 00:33:40.155 "block_size": 512, 00:33:40.155 "num_blocks": 65536, 00:33:40.155 "uuid": "0cf69328-aaeb-496a-9898-174234318d48", 00:33:40.155 "assigned_rate_limits": { 00:33:40.155 "rw_ios_per_sec": 0, 00:33:40.155 "rw_mbytes_per_sec": 0, 00:33:40.155 "r_mbytes_per_sec": 0, 00:33:40.155 "w_mbytes_per_sec": 0 00:33:40.155 }, 00:33:40.155 "claimed": true, 00:33:40.155 "claim_type": "exclusive_write", 00:33:40.155 "zoned": false, 00:33:40.155 "supported_io_types": { 00:33:40.155 "read": true, 00:33:40.155 "write": true, 00:33:40.155 "unmap": true, 00:33:40.155 "flush": true, 00:33:40.155 "reset": true, 00:33:40.155 "nvme_admin": false, 00:33:40.155 "nvme_io": false, 00:33:40.155 "nvme_io_md": false, 00:33:40.155 "write_zeroes": true, 00:33:40.155 "zcopy": true, 00:33:40.155 "get_zone_info": false, 00:33:40.155 "zone_management": false, 00:33:40.155 "zone_append": false, 00:33:40.155 "compare": false, 00:33:40.155 "compare_and_write": false, 00:33:40.155 "abort": true, 00:33:40.155 "seek_hole": false, 00:33:40.155 "seek_data": false, 00:33:40.155 "copy": true, 00:33:40.155 "nvme_iov_md": false 00:33:40.155 }, 00:33:40.155 "memory_domains": [ 00:33:40.155 { 00:33:40.155 "dma_device_id": "system", 00:33:40.155 "dma_device_type": 1 00:33:40.155 }, 00:33:40.155 { 00:33:40.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:40.155 "dma_device_type": 2 00:33:40.155 } 00:33:40.155 ], 00:33:40.155 "driver_specific": {} 00:33:40.155 } 00:33:40.155 ] 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.156 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:40.414 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:40.414 "name": "Existed_Raid", 00:33:40.414 "uuid": "92f6a045-2ea9-419e-9b98-753aa2e924c7", 00:33:40.414 
"strip_size_kb": 64, 00:33:40.414 "state": "online", 00:33:40.414 "raid_level": "raid5f", 00:33:40.414 "superblock": false, 00:33:40.414 "num_base_bdevs": 4, 00:33:40.414 "num_base_bdevs_discovered": 4, 00:33:40.414 "num_base_bdevs_operational": 4, 00:33:40.414 "base_bdevs_list": [ 00:33:40.414 { 00:33:40.414 "name": "BaseBdev1", 00:33:40.414 "uuid": "e7ce9fa9-672b-467b-9fc5-4c6aefd08073", 00:33:40.414 "is_configured": true, 00:33:40.414 "data_offset": 0, 00:33:40.414 "data_size": 65536 00:33:40.414 }, 00:33:40.414 { 00:33:40.414 "name": "BaseBdev2", 00:33:40.414 "uuid": "b387d1b8-6b06-4dd8-8a08-aae8d75eebd9", 00:33:40.414 "is_configured": true, 00:33:40.414 "data_offset": 0, 00:33:40.414 "data_size": 65536 00:33:40.414 }, 00:33:40.414 { 00:33:40.414 "name": "BaseBdev3", 00:33:40.414 "uuid": "cfa6c760-ae44-472e-a362-61763a43ff26", 00:33:40.414 "is_configured": true, 00:33:40.414 "data_offset": 0, 00:33:40.414 "data_size": 65536 00:33:40.414 }, 00:33:40.414 { 00:33:40.414 "name": "BaseBdev4", 00:33:40.414 "uuid": "0cf69328-aaeb-496a-9898-174234318d48", 00:33:40.414 "is_configured": true, 00:33:40.414 "data_offset": 0, 00:33:40.414 "data_size": 65536 00:33:40.414 } 00:33:40.414 ] 00:33:40.414 }' 00:33:40.414 11:45:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:40.414 11:45:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:40.981 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:40.981 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:40.981 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:40.981 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:40.981 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:40.981 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:40.981 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:40.981 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:41.239 [2024-07-13 11:45:15.759666] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:41.239 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:41.239 "name": "Existed_Raid", 00:33:41.239 "aliases": [ 00:33:41.239 "92f6a045-2ea9-419e-9b98-753aa2e924c7" 00:33:41.239 ], 00:33:41.239 "product_name": "Raid Volume", 00:33:41.239 "block_size": 512, 00:33:41.239 "num_blocks": 196608, 00:33:41.239 "uuid": "92f6a045-2ea9-419e-9b98-753aa2e924c7", 00:33:41.239 "assigned_rate_limits": { 00:33:41.239 "rw_ios_per_sec": 0, 00:33:41.239 "rw_mbytes_per_sec": 0, 00:33:41.239 "r_mbytes_per_sec": 0, 00:33:41.239 "w_mbytes_per_sec": 0 00:33:41.239 }, 00:33:41.239 "claimed": false, 00:33:41.239 "zoned": false, 00:33:41.239 "supported_io_types": { 00:33:41.239 "read": true, 00:33:41.239 "write": true, 00:33:41.239 "unmap": false, 00:33:41.239 "flush": false, 00:33:41.239 "reset": true, 00:33:41.239 "nvme_admin": false, 00:33:41.239 "nvme_io": false, 00:33:41.239 "nvme_io_md": false, 00:33:41.239 "write_zeroes": true, 00:33:41.239 "zcopy": false, 00:33:41.239 "get_zone_info": 
false, 00:33:41.239 "zone_management": false, 00:33:41.239 "zone_append": false, 00:33:41.239 "compare": false, 00:33:41.239 "compare_and_write": false, 00:33:41.239 "abort": false, 00:33:41.239 "seek_hole": false, 00:33:41.239 "seek_data": false, 00:33:41.239 "copy": false, 00:33:41.239 "nvme_iov_md": false 00:33:41.239 }, 00:33:41.239 "driver_specific": { 00:33:41.239 "raid": { 00:33:41.239 "uuid": "92f6a045-2ea9-419e-9b98-753aa2e924c7", 00:33:41.239 "strip_size_kb": 64, 00:33:41.239 "state": "online", 00:33:41.239 "raid_level": "raid5f", 00:33:41.239 "superblock": false, 00:33:41.239 "num_base_bdevs": 4, 00:33:41.239 "num_base_bdevs_discovered": 4, 00:33:41.239 "num_base_bdevs_operational": 4, 00:33:41.239 "base_bdevs_list": [ 00:33:41.239 { 00:33:41.239 "name": "BaseBdev1", 00:33:41.239 "uuid": "e7ce9fa9-672b-467b-9fc5-4c6aefd08073", 00:33:41.239 "is_configured": true, 00:33:41.239 "data_offset": 0, 00:33:41.239 "data_size": 65536 00:33:41.239 }, 00:33:41.239 { 00:33:41.239 "name": "BaseBdev2", 00:33:41.239 "uuid": "b387d1b8-6b06-4dd8-8a08-aae8d75eebd9", 00:33:41.239 "is_configured": true, 00:33:41.239 "data_offset": 0, 00:33:41.239 "data_size": 65536 00:33:41.239 }, 00:33:41.239 { 00:33:41.239 "name": "BaseBdev3", 00:33:41.239 "uuid": "cfa6c760-ae44-472e-a362-61763a43ff26", 00:33:41.239 "is_configured": true, 00:33:41.239 "data_offset": 0, 00:33:41.239 "data_size": 65536 00:33:41.239 }, 00:33:41.239 { 00:33:41.239 "name": "BaseBdev4", 00:33:41.239 "uuid": "0cf69328-aaeb-496a-9898-174234318d48", 00:33:41.239 "is_configured": true, 00:33:41.239 "data_offset": 0, 00:33:41.239 "data_size": 65536 00:33:41.239 } 00:33:41.239 ] 00:33:41.239 } 00:33:41.239 } 00:33:41.239 }' 00:33:41.239 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:41.239 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:33:41.239 BaseBdev2 00:33:41.239 BaseBdev3 00:33:41.239 BaseBdev4' 00:33:41.239 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:41.239 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:41.239 11:45:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:41.497 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:41.497 "name": "BaseBdev1", 00:33:41.497 "aliases": [ 00:33:41.497 "e7ce9fa9-672b-467b-9fc5-4c6aefd08073" 00:33:41.497 ], 00:33:41.497 "product_name": "Malloc disk", 00:33:41.497 "block_size": 512, 00:33:41.497 "num_blocks": 65536, 00:33:41.497 "uuid": "e7ce9fa9-672b-467b-9fc5-4c6aefd08073", 00:33:41.497 "assigned_rate_limits": { 00:33:41.497 "rw_ios_per_sec": 0, 00:33:41.497 "rw_mbytes_per_sec": 0, 00:33:41.497 "r_mbytes_per_sec": 0, 00:33:41.497 "w_mbytes_per_sec": 0 00:33:41.497 }, 00:33:41.497 "claimed": true, 00:33:41.497 "claim_type": "exclusive_write", 00:33:41.497 "zoned": false, 00:33:41.497 "supported_io_types": { 00:33:41.497 "read": true, 00:33:41.497 "write": true, 00:33:41.497 "unmap": true, 00:33:41.497 "flush": true, 00:33:41.497 "reset": true, 00:33:41.497 "nvme_admin": false, 00:33:41.497 "nvme_io": false, 00:33:41.497 "nvme_io_md": false, 00:33:41.497 "write_zeroes": true, 00:33:41.497 "zcopy": true, 00:33:41.497 "get_zone_info": false, 
00:33:41.497 "zone_management": false, 00:33:41.497 "zone_append": false, 00:33:41.497 "compare": false, 00:33:41.497 "compare_and_write": false, 00:33:41.497 "abort": true, 00:33:41.497 "seek_hole": false, 00:33:41.497 "seek_data": false, 00:33:41.497 "copy": true, 00:33:41.497 "nvme_iov_md": false 00:33:41.497 }, 00:33:41.497 "memory_domains": [ 00:33:41.497 { 00:33:41.497 "dma_device_id": "system", 00:33:41.497 "dma_device_type": 1 00:33:41.497 }, 00:33:41.497 { 00:33:41.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:41.497 "dma_device_type": 2 00:33:41.497 } 00:33:41.497 ], 00:33:41.497 "driver_specific": {} 00:33:41.497 }' 00:33:41.498 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:41.498 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:41.498 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:41.498 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:41.498 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:41.756 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:41.756 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:41.756 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:41.756 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:41.756 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:41.756 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:42.015 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:42.015 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:42.015 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:42.015 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:42.274 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:42.274 "name": "BaseBdev2", 00:33:42.274 "aliases": [ 00:33:42.274 "b387d1b8-6b06-4dd8-8a08-aae8d75eebd9" 00:33:42.274 ], 00:33:42.274 "product_name": "Malloc disk", 00:33:42.274 "block_size": 512, 00:33:42.274 "num_blocks": 65536, 00:33:42.274 "uuid": "b387d1b8-6b06-4dd8-8a08-aae8d75eebd9", 00:33:42.274 "assigned_rate_limits": { 00:33:42.274 "rw_ios_per_sec": 0, 00:33:42.274 "rw_mbytes_per_sec": 0, 00:33:42.274 "r_mbytes_per_sec": 0, 00:33:42.274 "w_mbytes_per_sec": 0 00:33:42.274 }, 00:33:42.274 "claimed": true, 00:33:42.274 "claim_type": "exclusive_write", 00:33:42.274 "zoned": false, 00:33:42.274 "supported_io_types": { 00:33:42.274 "read": true, 00:33:42.274 "write": true, 00:33:42.274 "unmap": true, 00:33:42.274 "flush": true, 00:33:42.274 "reset": true, 00:33:42.274 "nvme_admin": false, 00:33:42.274 "nvme_io": false, 00:33:42.274 "nvme_io_md": false, 00:33:42.274 "write_zeroes": true, 00:33:42.274 "zcopy": true, 00:33:42.274 "get_zone_info": false, 00:33:42.274 "zone_management": false, 00:33:42.274 "zone_append": false, 00:33:42.274 "compare": false, 00:33:42.274 "compare_and_write": false, 00:33:42.274 "abort": 
true, 00:33:42.274 "seek_hole": false, 00:33:42.274 "seek_data": false, 00:33:42.274 "copy": true, 00:33:42.274 "nvme_iov_md": false 00:33:42.274 }, 00:33:42.274 "memory_domains": [ 00:33:42.274 { 00:33:42.274 "dma_device_id": "system", 00:33:42.274 "dma_device_type": 1 00:33:42.274 }, 00:33:42.274 { 00:33:42.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.274 "dma_device_type": 2 00:33:42.274 } 00:33:42.274 ], 00:33:42.274 "driver_specific": {} 00:33:42.274 }' 00:33:42.274 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:42.274 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:42.274 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:42.274 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:42.274 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:42.274 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:42.274 11:45:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:42.274 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:42.533 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:42.533 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:42.533 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:42.533 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:42.533 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:42.533 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:42.533 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:42.792 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:42.792 "name": "BaseBdev3", 00:33:42.792 "aliases": [ 00:33:42.792 "cfa6c760-ae44-472e-a362-61763a43ff26" 00:33:42.792 ], 00:33:42.792 "product_name": "Malloc disk", 00:33:42.792 "block_size": 512, 00:33:42.792 "num_blocks": 65536, 00:33:42.792 "uuid": "cfa6c760-ae44-472e-a362-61763a43ff26", 00:33:42.792 "assigned_rate_limits": { 00:33:42.792 "rw_ios_per_sec": 0, 00:33:42.792 "rw_mbytes_per_sec": 0, 00:33:42.792 "r_mbytes_per_sec": 0, 00:33:42.792 "w_mbytes_per_sec": 0 00:33:42.792 }, 00:33:42.792 "claimed": true, 00:33:42.792 "claim_type": "exclusive_write", 00:33:42.792 "zoned": false, 00:33:42.792 "supported_io_types": { 00:33:42.792 "read": true, 00:33:42.792 "write": true, 00:33:42.792 "unmap": true, 00:33:42.792 "flush": true, 00:33:42.792 "reset": true, 00:33:42.792 "nvme_admin": false, 00:33:42.792 "nvme_io": false, 00:33:42.792 "nvme_io_md": false, 00:33:42.792 "write_zeroes": true, 00:33:42.792 "zcopy": true, 00:33:42.792 "get_zone_info": false, 00:33:42.792 "zone_management": false, 00:33:42.792 "zone_append": false, 00:33:42.792 "compare": false, 00:33:42.792 "compare_and_write": false, 00:33:42.792 "abort": true, 00:33:42.792 "seek_hole": false, 00:33:42.792 "seek_data": false, 00:33:42.792 "copy": true, 00:33:42.792 "nvme_iov_md": false 00:33:42.792 }, 00:33:42.792 
"memory_domains": [ 00:33:42.792 { 00:33:42.792 "dma_device_id": "system", 00:33:42.792 "dma_device_type": 1 00:33:42.792 }, 00:33:42.792 { 00:33:42.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:42.792 "dma_device_type": 2 00:33:42.792 } 00:33:42.793 ], 00:33:42.793 "driver_specific": {} 00:33:42.793 }' 00:33:42.793 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:42.793 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:42.793 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:42.793 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:43.052 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:43.052 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:43.052 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:43.052 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:43.052 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:43.052 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:43.052 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:43.311 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:43.311 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:43.311 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:43.311 11:45:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:43.311 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:43.311 "name": "BaseBdev4", 00:33:43.311 "aliases": [ 00:33:43.311 "0cf69328-aaeb-496a-9898-174234318d48" 00:33:43.311 ], 00:33:43.311 "product_name": "Malloc disk", 00:33:43.311 "block_size": 512, 00:33:43.311 "num_blocks": 65536, 00:33:43.311 "uuid": "0cf69328-aaeb-496a-9898-174234318d48", 00:33:43.311 "assigned_rate_limits": { 00:33:43.311 "rw_ios_per_sec": 0, 00:33:43.311 "rw_mbytes_per_sec": 0, 00:33:43.311 "r_mbytes_per_sec": 0, 00:33:43.311 "w_mbytes_per_sec": 0 00:33:43.311 }, 00:33:43.311 "claimed": true, 00:33:43.311 "claim_type": "exclusive_write", 00:33:43.311 "zoned": false, 00:33:43.311 "supported_io_types": { 00:33:43.311 "read": true, 00:33:43.311 "write": true, 00:33:43.311 "unmap": true, 00:33:43.311 "flush": true, 00:33:43.311 "reset": true, 00:33:43.311 "nvme_admin": false, 00:33:43.311 "nvme_io": false, 00:33:43.311 "nvme_io_md": false, 00:33:43.311 "write_zeroes": true, 00:33:43.311 "zcopy": true, 00:33:43.311 "get_zone_info": false, 00:33:43.311 "zone_management": false, 00:33:43.311 "zone_append": false, 00:33:43.311 "compare": false, 00:33:43.311 "compare_and_write": false, 00:33:43.311 "abort": true, 00:33:43.311 "seek_hole": false, 00:33:43.311 "seek_data": false, 00:33:43.311 "copy": true, 00:33:43.311 "nvme_iov_md": false 00:33:43.311 }, 00:33:43.311 "memory_domains": [ 00:33:43.311 { 00:33:43.311 "dma_device_id": "system", 00:33:43.311 "dma_device_type": 1 00:33:43.311 }, 00:33:43.311 { 00:33:43.311 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:43.311 "dma_device_type": 2 00:33:43.311 } 00:33:43.311 ], 00:33:43.311 "driver_specific": {} 00:33:43.312 }' 00:33:43.312 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:43.570 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:43.570 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:43.570 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:43.570 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:43.570 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:43.570 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:43.570 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:43.830 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:43.830 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:43.830 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:43.830 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:43.830 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:44.089 [2024-07-13 11:45:18.608383] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.089 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:44.347 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:44.347 "name": "Existed_Raid", 00:33:44.347 "uuid": "92f6a045-2ea9-419e-9b98-753aa2e924c7", 00:33:44.347 "strip_size_kb": 64, 00:33:44.347 "state": "online", 00:33:44.347 "raid_level": "raid5f", 00:33:44.347 "superblock": false, 00:33:44.347 "num_base_bdevs": 4, 00:33:44.347 "num_base_bdevs_discovered": 3, 00:33:44.347 "num_base_bdevs_operational": 3, 00:33:44.347 "base_bdevs_list": [ 00:33:44.347 { 00:33:44.347 "name": null, 00:33:44.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.347 "is_configured": false, 00:33:44.347 "data_offset": 0, 00:33:44.347 "data_size": 65536 00:33:44.347 }, 00:33:44.347 { 00:33:44.347 "name": "BaseBdev2", 00:33:44.347 "uuid": "b387d1b8-6b06-4dd8-8a08-aae8d75eebd9", 00:33:44.347 "is_configured": true, 00:33:44.347 "data_offset": 0, 00:33:44.347 "data_size": 65536 00:33:44.347 }, 00:33:44.347 { 00:33:44.347 "name": "BaseBdev3", 00:33:44.347 "uuid": "cfa6c760-ae44-472e-a362-61763a43ff26", 00:33:44.347 "is_configured": true, 00:33:44.347 "data_offset": 0, 00:33:44.347 "data_size": 65536 00:33:44.347 }, 00:33:44.347 { 00:33:44.347 "name": "BaseBdev4", 00:33:44.347 "uuid": "0cf69328-aaeb-496a-9898-174234318d48", 00:33:44.347 "is_configured": true, 00:33:44.347 "data_offset": 0, 00:33:44.347 "data_size": 65536 00:33:44.347 } 00:33:44.347 ] 00:33:44.347 }' 00:33:44.347 11:45:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:44.347 11:45:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.914 11:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:44.914 11:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:44.914 11:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.914 11:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:45.173 11:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:45.173 11:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:45.173 11:45:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:45.431 [2024-07-13 11:45:20.062877] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:45.431 [2024-07-13 11:45:20.062987] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:45.431 [2024-07-13 11:45:20.126250] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:45.431 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:45.431 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:45.431 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.431 11:45:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:45.690 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:45.690 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:45.690 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:45.948 [2024-07-13 11:45:20.610413] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:45.948 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:45.948 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:45.948 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.948 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:46.207 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:46.207 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:46.207 11:45:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:33:46.465 [2024-07-13 11:45:21.038330] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:33:46.465 [2024-07-13 11:45:21.038382] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:33:46.465 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:46.465 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:46.465 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:46.465 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:46.724 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:46.724 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:46.724 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:33:46.724 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:33:46.724 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:46.724 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:46.982 BaseBdev2 00:33:46.982 11:45:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:33:46.982 11:45:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:33:46.982 11:45:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:46.982 11:45:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:46.982 
11:45:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:46.982 11:45:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:46.982 11:45:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:47.241 11:45:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:47.500 [ 00:33:47.500 { 00:33:47.500 "name": "BaseBdev2", 00:33:47.500 "aliases": [ 00:33:47.500 "b537517b-5d51-461e-b800-2dff60bac482" 00:33:47.500 ], 00:33:47.500 "product_name": "Malloc disk", 00:33:47.500 "block_size": 512, 00:33:47.500 "num_blocks": 65536, 00:33:47.500 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:33:47.500 "assigned_rate_limits": { 00:33:47.500 "rw_ios_per_sec": 0, 00:33:47.500 "rw_mbytes_per_sec": 0, 00:33:47.500 "r_mbytes_per_sec": 0, 00:33:47.500 "w_mbytes_per_sec": 0 00:33:47.500 }, 00:33:47.500 "claimed": false, 00:33:47.500 "zoned": false, 00:33:47.500 "supported_io_types": { 00:33:47.500 "read": true, 00:33:47.500 "write": true, 00:33:47.500 "unmap": true, 00:33:47.500 "flush": true, 00:33:47.500 "reset": true, 00:33:47.500 "nvme_admin": false, 00:33:47.500 "nvme_io": false, 00:33:47.500 "nvme_io_md": false, 00:33:47.500 "write_zeroes": true, 00:33:47.500 "zcopy": true, 00:33:47.500 "get_zone_info": false, 00:33:47.500 "zone_management": false, 00:33:47.500 "zone_append": false, 00:33:47.500 "compare": false, 00:33:47.500 "compare_and_write": false, 00:33:47.500 "abort": true, 00:33:47.500 "seek_hole": false, 00:33:47.500 "seek_data": false, 00:33:47.500 "copy": true, 00:33:47.500 "nvme_iov_md": false 00:33:47.500 }, 00:33:47.500 "memory_domains": [ 00:33:47.500 { 00:33:47.500 "dma_device_id": "system", 00:33:47.500 "dma_device_type": 1 00:33:47.500 }, 00:33:47.500 { 00:33:47.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.500 "dma_device_type": 2 00:33:47.500 } 00:33:47.500 ], 00:33:47.500 "driver_specific": {} 00:33:47.500 } 00:33:47.500 ] 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:47.500 BaseBdev3 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:47.500 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:47.758 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:48.017 [ 00:33:48.017 { 00:33:48.017 "name": "BaseBdev3", 00:33:48.017 "aliases": [ 00:33:48.017 "2784d210-1157-459b-bfa0-f29d55bce37f" 00:33:48.017 ], 00:33:48.017 "product_name": "Malloc disk", 00:33:48.017 "block_size": 512, 00:33:48.017 "num_blocks": 65536, 00:33:48.017 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:33:48.017 "assigned_rate_limits": { 00:33:48.017 "rw_ios_per_sec": 0, 00:33:48.017 "rw_mbytes_per_sec": 0, 00:33:48.017 "r_mbytes_per_sec": 0, 00:33:48.017 "w_mbytes_per_sec": 0 00:33:48.017 }, 00:33:48.017 "claimed": false, 00:33:48.017 "zoned": false, 00:33:48.017 "supported_io_types": { 00:33:48.017 "read": true, 00:33:48.017 "write": true, 00:33:48.017 "unmap": true, 00:33:48.017 "flush": true, 00:33:48.017 "reset": true, 00:33:48.017 "nvme_admin": false, 00:33:48.017 "nvme_io": false, 00:33:48.017 "nvme_io_md": false, 00:33:48.017 "write_zeroes": true, 00:33:48.017 "zcopy": true, 00:33:48.017 "get_zone_info": false, 00:33:48.017 "zone_management": false, 00:33:48.017 "zone_append": false, 00:33:48.017 "compare": false, 00:33:48.017 "compare_and_write": false, 00:33:48.017 "abort": true, 00:33:48.017 "seek_hole": false, 00:33:48.017 "seek_data": false, 00:33:48.017 "copy": true, 00:33:48.017 "nvme_iov_md": false 00:33:48.017 }, 00:33:48.017 "memory_domains": [ 00:33:48.017 { 00:33:48.017 "dma_device_id": "system", 00:33:48.017 "dma_device_type": 1 00:33:48.017 }, 00:33:48.017 { 00:33:48.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:48.017 "dma_device_type": 2 00:33:48.017 } 00:33:48.017 ], 00:33:48.017 "driver_specific": {} 00:33:48.017 } 00:33:48.017 ] 00:33:48.017 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:48.017 11:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:48.017 11:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:48.017 11:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:48.275 BaseBdev4 00:33:48.276 11:45:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:33:48.276 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:33:48.276 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:48.276 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:48.276 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:48.276 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:48.276 11:45:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:48.533 11:45:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:48.791 [ 
00:33:48.791 { 00:33:48.791 "name": "BaseBdev4", 00:33:48.791 "aliases": [ 00:33:48.791 "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3" 00:33:48.791 ], 00:33:48.791 "product_name": "Malloc disk", 00:33:48.791 "block_size": 512, 00:33:48.791 "num_blocks": 65536, 00:33:48.791 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:33:48.791 "assigned_rate_limits": { 00:33:48.791 "rw_ios_per_sec": 0, 00:33:48.791 "rw_mbytes_per_sec": 0, 00:33:48.791 "r_mbytes_per_sec": 0, 00:33:48.791 "w_mbytes_per_sec": 0 00:33:48.791 }, 00:33:48.791 "claimed": false, 00:33:48.791 "zoned": false, 00:33:48.791 "supported_io_types": { 00:33:48.791 "read": true, 00:33:48.791 "write": true, 00:33:48.791 "unmap": true, 00:33:48.791 "flush": true, 00:33:48.791 "reset": true, 00:33:48.791 "nvme_admin": false, 00:33:48.791 "nvme_io": false, 00:33:48.791 "nvme_io_md": false, 00:33:48.791 "write_zeroes": true, 00:33:48.791 "zcopy": true, 00:33:48.791 "get_zone_info": false, 00:33:48.791 "zone_management": false, 00:33:48.791 "zone_append": false, 00:33:48.791 "compare": false, 00:33:48.791 "compare_and_write": false, 00:33:48.791 "abort": true, 00:33:48.791 "seek_hole": false, 00:33:48.791 "seek_data": false, 00:33:48.791 "copy": true, 00:33:48.791 "nvme_iov_md": false 00:33:48.791 }, 00:33:48.791 "memory_domains": [ 00:33:48.791 { 00:33:48.791 "dma_device_id": "system", 00:33:48.791 "dma_device_type": 1 00:33:48.791 }, 00:33:48.791 { 00:33:48.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:48.791 "dma_device_type": 2 00:33:48.791 } 00:33:48.791 ], 00:33:48.791 "driver_specific": {} 00:33:48.791 } 00:33:48.791 ] 00:33:48.791 11:45:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:48.791 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:48.791 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:48.791 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:48.791 [2024-07-13 11:45:23.478205] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:48.791 [2024-07-13 11:45:23.478277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:48.791 [2024-07-13 11:45:23.478299] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:48.791 [2024-07-13 11:45:23.480116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:48.791 [2024-07-13 11:45:23.480181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:48.791 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.792 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:49.050 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:49.050 "name": "Existed_Raid", 00:33:49.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.050 "strip_size_kb": 64, 00:33:49.050 "state": "configuring", 00:33:49.050 "raid_level": "raid5f", 00:33:49.050 "superblock": false, 00:33:49.050 "num_base_bdevs": 4, 00:33:49.050 "num_base_bdevs_discovered": 3, 00:33:49.050 "num_base_bdevs_operational": 4, 00:33:49.050 "base_bdevs_list": [ 00:33:49.050 { 00:33:49.050 "name": "BaseBdev1", 00:33:49.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.050 "is_configured": false, 00:33:49.050 "data_offset": 0, 00:33:49.050 "data_size": 0 00:33:49.050 }, 00:33:49.050 { 00:33:49.050 "name": "BaseBdev2", 00:33:49.050 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:33:49.050 "is_configured": true, 00:33:49.050 "data_offset": 0, 00:33:49.050 "data_size": 65536 00:33:49.050 }, 00:33:49.050 { 00:33:49.050 "name": "BaseBdev3", 00:33:49.050 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:33:49.050 "is_configured": true, 00:33:49.050 "data_offset": 0, 00:33:49.050 "data_size": 65536 00:33:49.050 }, 00:33:49.050 { 00:33:49.050 "name": "BaseBdev4", 00:33:49.050 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:33:49.050 "is_configured": true, 00:33:49.050 "data_offset": 0, 00:33:49.050 "data_size": 65536 00:33:49.050 } 00:33:49.050 ] 00:33:49.050 }' 00:33:49.050 11:45:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:49.050 11:45:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:49.998 [2024-07-13 11:45:24.634489] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:49.998 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.280 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:50.280 "name": "Existed_Raid", 00:33:50.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.280 "strip_size_kb": 64, 00:33:50.280 "state": "configuring", 00:33:50.280 "raid_level": "raid5f", 00:33:50.280 "superblock": false, 00:33:50.280 "num_base_bdevs": 4, 00:33:50.280 "num_base_bdevs_discovered": 2, 00:33:50.280 "num_base_bdevs_operational": 4, 00:33:50.280 "base_bdevs_list": [ 00:33:50.280 { 00:33:50.280 "name": "BaseBdev1", 00:33:50.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.280 "is_configured": false, 00:33:50.280 "data_offset": 0, 00:33:50.280 "data_size": 0 00:33:50.280 }, 00:33:50.280 { 00:33:50.280 "name": null, 00:33:50.280 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:33:50.280 "is_configured": false, 00:33:50.280 "data_offset": 0, 00:33:50.280 "data_size": 65536 00:33:50.280 }, 00:33:50.280 { 00:33:50.280 "name": "BaseBdev3", 00:33:50.280 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:33:50.280 "is_configured": true, 00:33:50.280 "data_offset": 0, 00:33:50.280 "data_size": 65536 00:33:50.280 }, 00:33:50.280 { 00:33:50.280 "name": "BaseBdev4", 00:33:50.280 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:33:50.280 "is_configured": true, 00:33:50.280 "data_offset": 0, 00:33:50.280 "data_size": 65536 00:33:50.280 } 00:33:50.280 ] 00:33:50.280 }' 00:33:50.280 11:45:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:50.280 11:45:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:50.854 11:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.854 11:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:51.113 11:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:33:51.113 11:45:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:51.371 [2024-07-13 11:45:26.112285] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:51.371 BaseBdev1 00:33:51.371 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:33:51.371 11:45:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:33:51.371 11:45:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:51.371 11:45:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:51.371 11:45:26 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:51.371 11:45:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:51.371 11:45:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:51.629 11:45:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:51.887 [ 00:33:51.887 { 00:33:51.887 "name": "BaseBdev1", 00:33:51.887 "aliases": [ 00:33:51.887 "1ed379b7-d25e-4981-9542-1daa198658be" 00:33:51.887 ], 00:33:51.887 "product_name": "Malloc disk", 00:33:51.887 "block_size": 512, 00:33:51.887 "num_blocks": 65536, 00:33:51.887 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:33:51.887 "assigned_rate_limits": { 00:33:51.887 "rw_ios_per_sec": 0, 00:33:51.887 "rw_mbytes_per_sec": 0, 00:33:51.887 "r_mbytes_per_sec": 0, 00:33:51.887 "w_mbytes_per_sec": 0 00:33:51.887 }, 00:33:51.887 "claimed": true, 00:33:51.887 "claim_type": "exclusive_write", 00:33:51.887 "zoned": false, 00:33:51.887 "supported_io_types": { 00:33:51.887 "read": true, 00:33:51.887 "write": true, 00:33:51.887 "unmap": true, 00:33:51.887 "flush": true, 00:33:51.887 "reset": true, 00:33:51.887 "nvme_admin": false, 00:33:51.887 "nvme_io": false, 00:33:51.887 "nvme_io_md": false, 00:33:51.887 "write_zeroes": true, 00:33:51.887 "zcopy": true, 00:33:51.887 "get_zone_info": false, 00:33:51.887 "zone_management": false, 00:33:51.887 "zone_append": false, 00:33:51.887 "compare": false, 00:33:51.887 "compare_and_write": false, 00:33:51.887 "abort": true, 00:33:51.887 "seek_hole": false, 00:33:51.887 "seek_data": false, 00:33:51.887 "copy": true, 00:33:51.887 "nvme_iov_md": false 00:33:51.887 }, 00:33:51.887 "memory_domains": [ 00:33:51.887 { 00:33:51.887 "dma_device_id": "system", 00:33:51.887 "dma_device_type": 1 00:33:51.887 }, 00:33:51.887 { 00:33:51.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:51.887 "dma_device_type": 2 00:33:51.887 } 00:33:51.887 ], 00:33:51.887 "driver_specific": {} 00:33:51.887 } 00:33:51.887 ] 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.887 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:52.145 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:52.145 "name": "Existed_Raid", 00:33:52.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:52.145 "strip_size_kb": 64, 00:33:52.145 "state": "configuring", 00:33:52.145 "raid_level": "raid5f", 00:33:52.145 "superblock": false, 00:33:52.145 "num_base_bdevs": 4, 00:33:52.145 "num_base_bdevs_discovered": 3, 00:33:52.145 "num_base_bdevs_operational": 4, 00:33:52.145 "base_bdevs_list": [ 00:33:52.145 { 00:33:52.145 "name": "BaseBdev1", 00:33:52.145 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:33:52.145 "is_configured": true, 00:33:52.145 "data_offset": 0, 00:33:52.145 "data_size": 65536 00:33:52.145 }, 00:33:52.145 { 00:33:52.145 "name": null, 00:33:52.145 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:33:52.145 "is_configured": false, 00:33:52.145 "data_offset": 0, 00:33:52.145 "data_size": 65536 00:33:52.145 }, 00:33:52.145 { 00:33:52.145 "name": "BaseBdev3", 00:33:52.145 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:33:52.145 "is_configured": true, 00:33:52.145 "data_offset": 0, 00:33:52.145 "data_size": 65536 00:33:52.145 }, 00:33:52.145 { 00:33:52.145 "name": "BaseBdev4", 00:33:52.145 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:33:52.145 "is_configured": true, 00:33:52.145 "data_offset": 0, 00:33:52.145 "data_size": 65536 00:33:52.145 } 00:33:52.145 ] 00:33:52.145 }' 00:33:52.145 11:45:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:52.145 11:45:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.079 11:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.079 11:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:53.079 11:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:33:53.079 11:45:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:33:53.337 [2024-07-13 11:45:27.992739] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:53.337 
11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.337 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:53.595 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:53.595 "name": "Existed_Raid", 00:33:53.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:53.595 "strip_size_kb": 64, 00:33:53.595 "state": "configuring", 00:33:53.595 "raid_level": "raid5f", 00:33:53.595 "superblock": false, 00:33:53.595 "num_base_bdevs": 4, 00:33:53.595 "num_base_bdevs_discovered": 2, 00:33:53.595 "num_base_bdevs_operational": 4, 00:33:53.595 "base_bdevs_list": [ 00:33:53.595 { 00:33:53.595 "name": "BaseBdev1", 00:33:53.595 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:33:53.595 "is_configured": true, 00:33:53.595 "data_offset": 0, 00:33:53.595 "data_size": 65536 00:33:53.595 }, 00:33:53.595 { 00:33:53.595 "name": null, 00:33:53.595 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:33:53.595 "is_configured": false, 00:33:53.595 "data_offset": 0, 00:33:53.595 "data_size": 65536 00:33:53.595 }, 00:33:53.595 { 00:33:53.595 "name": null, 00:33:53.595 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:33:53.595 "is_configured": false, 00:33:53.595 "data_offset": 0, 00:33:53.595 "data_size": 65536 00:33:53.595 }, 00:33:53.595 { 00:33:53.595 "name": "BaseBdev4", 00:33:53.595 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:33:53.595 "is_configured": true, 00:33:53.595 "data_offset": 0, 00:33:53.595 "data_size": 65536 00:33:53.595 } 00:33:53.595 ] 00:33:53.595 }' 00:33:53.595 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:53.595 11:45:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.160 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.160 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:54.419 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:33:54.419 11:45:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:54.677 [2024-07-13 11:45:29.221016] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:54.677 
11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:54.677 "name": "Existed_Raid", 00:33:54.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.677 "strip_size_kb": 64, 00:33:54.677 "state": "configuring", 00:33:54.677 "raid_level": "raid5f", 00:33:54.677 "superblock": false, 00:33:54.677 "num_base_bdevs": 4, 00:33:54.677 "num_base_bdevs_discovered": 3, 00:33:54.677 "num_base_bdevs_operational": 4, 00:33:54.677 "base_bdevs_list": [ 00:33:54.677 { 00:33:54.677 "name": "BaseBdev1", 00:33:54.677 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:33:54.677 "is_configured": true, 00:33:54.677 "data_offset": 0, 00:33:54.677 "data_size": 65536 00:33:54.677 }, 00:33:54.677 { 00:33:54.677 "name": null, 00:33:54.677 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:33:54.677 "is_configured": false, 00:33:54.677 "data_offset": 0, 00:33:54.677 "data_size": 65536 00:33:54.677 }, 00:33:54.677 { 00:33:54.677 "name": "BaseBdev3", 00:33:54.677 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:33:54.677 "is_configured": true, 00:33:54.677 "data_offset": 0, 00:33:54.677 "data_size": 65536 00:33:54.677 }, 00:33:54.677 { 00:33:54.677 "name": "BaseBdev4", 00:33:54.677 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:33:54.677 "is_configured": true, 00:33:54.677 "data_offset": 0, 00:33:54.677 "data_size": 65536 00:33:54.677 } 00:33:54.677 ] 00:33:54.677 }' 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:54.677 11:45:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.612 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:55.612 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.612 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:33:55.612 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:55.870 [2024-07-13 11:45:30.501264] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.870 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.128 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:56.128 "name": "Existed_Raid", 00:33:56.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.128 "strip_size_kb": 64, 00:33:56.128 "state": "configuring", 00:33:56.128 "raid_level": "raid5f", 00:33:56.128 "superblock": false, 00:33:56.128 "num_base_bdevs": 4, 00:33:56.128 "num_base_bdevs_discovered": 2, 00:33:56.128 "num_base_bdevs_operational": 4, 00:33:56.128 "base_bdevs_list": [ 00:33:56.128 { 00:33:56.128 "name": null, 00:33:56.128 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:33:56.128 "is_configured": false, 00:33:56.128 "data_offset": 0, 00:33:56.128 "data_size": 65536 00:33:56.128 }, 00:33:56.128 { 00:33:56.128 "name": null, 00:33:56.128 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:33:56.128 "is_configured": false, 00:33:56.128 "data_offset": 0, 00:33:56.128 "data_size": 65536 00:33:56.128 }, 00:33:56.128 { 00:33:56.128 "name": "BaseBdev3", 00:33:56.128 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:33:56.128 "is_configured": true, 00:33:56.128 "data_offset": 0, 00:33:56.128 "data_size": 65536 00:33:56.128 }, 00:33:56.128 { 00:33:56.128 "name": "BaseBdev4", 00:33:56.128 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:33:56.128 "is_configured": true, 00:33:56.128 "data_offset": 0, 00:33:56.128 "data_size": 65536 00:33:56.128 } 00:33:56.128 ] 00:33:56.128 }' 00:33:56.128 11:45:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:56.128 11:45:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.693 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.693 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:56.952 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:33:56.952 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid 
BaseBdev2 00:33:57.211 [2024-07-13 11:45:31.833772] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.211 11:45:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:57.469 11:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:57.469 "name": "Existed_Raid", 00:33:57.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:57.469 "strip_size_kb": 64, 00:33:57.469 "state": "configuring", 00:33:57.469 "raid_level": "raid5f", 00:33:57.469 "superblock": false, 00:33:57.469 "num_base_bdevs": 4, 00:33:57.469 "num_base_bdevs_discovered": 3, 00:33:57.469 "num_base_bdevs_operational": 4, 00:33:57.469 "base_bdevs_list": [ 00:33:57.469 { 00:33:57.469 "name": null, 00:33:57.469 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:33:57.469 "is_configured": false, 00:33:57.469 "data_offset": 0, 00:33:57.469 "data_size": 65536 00:33:57.469 }, 00:33:57.469 { 00:33:57.469 "name": "BaseBdev2", 00:33:57.470 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:33:57.470 "is_configured": true, 00:33:57.470 "data_offset": 0, 00:33:57.470 "data_size": 65536 00:33:57.470 }, 00:33:57.470 { 00:33:57.470 "name": "BaseBdev3", 00:33:57.470 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:33:57.470 "is_configured": true, 00:33:57.470 "data_offset": 0, 00:33:57.470 "data_size": 65536 00:33:57.470 }, 00:33:57.470 { 00:33:57.470 "name": "BaseBdev4", 00:33:57.470 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:33:57.470 "is_configured": true, 00:33:57.470 "data_offset": 0, 00:33:57.470 "data_size": 65536 00:33:57.470 } 00:33:57.470 ] 00:33:57.470 }' 00:33:57.470 11:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:57.470 11:45:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.036 11:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.036 11:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:33:58.301 11:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:33:58.301 11:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:58.302 11:45:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.302 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1ed379b7-d25e-4981-9542-1daa198658be 00:33:58.560 [2024-07-13 11:45:33.293776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:58.560 [2024-07-13 11:45:33.293828] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:33:58.560 [2024-07-13 11:45:33.293838] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:58.560 [2024-07-13 11:45:33.293951] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:58.560 [2024-07-13 11:45:33.299155] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:33:58.560 [2024-07-13 11:45:33.299181] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:33:58.560 [2024-07-13 11:45:33.299444] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:58.560 NewBaseBdev 00:33:58.818 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:33:58.818 11:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:33:58.818 11:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:58.818 11:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local i 00:33:58.818 11:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:58.818 11:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:58.818 11:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:58.818 11:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:59.076 [ 00:33:59.076 { 00:33:59.076 "name": "NewBaseBdev", 00:33:59.076 "aliases": [ 00:33:59.076 "1ed379b7-d25e-4981-9542-1daa198658be" 00:33:59.076 ], 00:33:59.076 "product_name": "Malloc disk", 00:33:59.076 "block_size": 512, 00:33:59.076 "num_blocks": 65536, 00:33:59.076 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:33:59.076 "assigned_rate_limits": { 00:33:59.076 "rw_ios_per_sec": 0, 00:33:59.076 "rw_mbytes_per_sec": 0, 00:33:59.076 "r_mbytes_per_sec": 0, 00:33:59.076 "w_mbytes_per_sec": 0 00:33:59.076 }, 00:33:59.076 "claimed": true, 00:33:59.076 "claim_type": "exclusive_write", 00:33:59.076 "zoned": false, 00:33:59.076 "supported_io_types": { 00:33:59.076 "read": true, 00:33:59.076 "write": true, 00:33:59.076 "unmap": true, 00:33:59.076 "flush": true, 00:33:59.076 "reset": true, 00:33:59.076 "nvme_admin": 
false, 00:33:59.076 "nvme_io": false, 00:33:59.076 "nvme_io_md": false, 00:33:59.076 "write_zeroes": true, 00:33:59.076 "zcopy": true, 00:33:59.076 "get_zone_info": false, 00:33:59.076 "zone_management": false, 00:33:59.076 "zone_append": false, 00:33:59.076 "compare": false, 00:33:59.076 "compare_and_write": false, 00:33:59.076 "abort": true, 00:33:59.076 "seek_hole": false, 00:33:59.076 "seek_data": false, 00:33:59.076 "copy": true, 00:33:59.076 "nvme_iov_md": false 00:33:59.076 }, 00:33:59.076 "memory_domains": [ 00:33:59.076 { 00:33:59.076 "dma_device_id": "system", 00:33:59.076 "dma_device_type": 1 00:33:59.076 }, 00:33:59.076 { 00:33:59.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:59.076 "dma_device_type": 2 00:33:59.076 } 00:33:59.076 ], 00:33:59.076 "driver_specific": {} 00:33:59.076 } 00:33:59.076 ] 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:59.076 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:59.335 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:59.335 "name": "Existed_Raid", 00:33:59.335 "uuid": "127b82d2-3343-48f2-a5f6-ab72179f8227", 00:33:59.335 "strip_size_kb": 64, 00:33:59.335 "state": "online", 00:33:59.335 "raid_level": "raid5f", 00:33:59.335 "superblock": false, 00:33:59.335 "num_base_bdevs": 4, 00:33:59.335 "num_base_bdevs_discovered": 4, 00:33:59.335 "num_base_bdevs_operational": 4, 00:33:59.335 "base_bdevs_list": [ 00:33:59.335 { 00:33:59.335 "name": "NewBaseBdev", 00:33:59.335 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:33:59.335 "is_configured": true, 00:33:59.335 "data_offset": 0, 00:33:59.335 "data_size": 65536 00:33:59.335 }, 00:33:59.335 { 00:33:59.335 "name": "BaseBdev2", 00:33:59.335 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:33:59.335 "is_configured": true, 00:33:59.335 "data_offset": 0, 00:33:59.335 "data_size": 65536 00:33:59.335 }, 00:33:59.335 { 00:33:59.335 "name": "BaseBdev3", 00:33:59.335 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:33:59.335 "is_configured": true, 00:33:59.335 "data_offset": 0, 
00:33:59.335 "data_size": 65536 00:33:59.335 }, 00:33:59.335 { 00:33:59.335 "name": "BaseBdev4", 00:33:59.335 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:33:59.335 "is_configured": true, 00:33:59.335 "data_offset": 0, 00:33:59.335 "data_size": 65536 00:33:59.335 } 00:33:59.335 ] 00:33:59.335 }' 00:33:59.335 11:45:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:59.335 11:45:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:59.901 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:33:59.901 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:59.901 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:59.901 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:59.901 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:59.901 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:59.901 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:59.901 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:00.160 [2024-07-13 11:45:34.663735] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:00.160 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:00.160 "name": "Existed_Raid", 00:34:00.160 "aliases": [ 00:34:00.160 "127b82d2-3343-48f2-a5f6-ab72179f8227" 00:34:00.160 ], 00:34:00.160 "product_name": "Raid Volume", 00:34:00.160 "block_size": 512, 00:34:00.160 "num_blocks": 196608, 00:34:00.160 "uuid": "127b82d2-3343-48f2-a5f6-ab72179f8227", 00:34:00.160 "assigned_rate_limits": { 00:34:00.160 "rw_ios_per_sec": 0, 00:34:00.160 "rw_mbytes_per_sec": 0, 00:34:00.160 "r_mbytes_per_sec": 0, 00:34:00.160 "w_mbytes_per_sec": 0 00:34:00.160 }, 00:34:00.160 "claimed": false, 00:34:00.160 "zoned": false, 00:34:00.160 "supported_io_types": { 00:34:00.160 "read": true, 00:34:00.160 "write": true, 00:34:00.160 "unmap": false, 00:34:00.160 "flush": false, 00:34:00.160 "reset": true, 00:34:00.160 "nvme_admin": false, 00:34:00.160 "nvme_io": false, 00:34:00.160 "nvme_io_md": false, 00:34:00.160 "write_zeroes": true, 00:34:00.160 "zcopy": false, 00:34:00.160 "get_zone_info": false, 00:34:00.160 "zone_management": false, 00:34:00.160 "zone_append": false, 00:34:00.160 "compare": false, 00:34:00.160 "compare_and_write": false, 00:34:00.160 "abort": false, 00:34:00.160 "seek_hole": false, 00:34:00.160 "seek_data": false, 00:34:00.160 "copy": false, 00:34:00.160 "nvme_iov_md": false 00:34:00.160 }, 00:34:00.160 "driver_specific": { 00:34:00.160 "raid": { 00:34:00.160 "uuid": "127b82d2-3343-48f2-a5f6-ab72179f8227", 00:34:00.160 "strip_size_kb": 64, 00:34:00.160 "state": "online", 00:34:00.160 "raid_level": "raid5f", 00:34:00.160 "superblock": false, 00:34:00.160 "num_base_bdevs": 4, 00:34:00.160 "num_base_bdevs_discovered": 4, 00:34:00.160 "num_base_bdevs_operational": 4, 00:34:00.160 "base_bdevs_list": [ 00:34:00.160 { 00:34:00.160 "name": "NewBaseBdev", 00:34:00.160 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:34:00.160 "is_configured": true, 00:34:00.160 
"data_offset": 0, 00:34:00.160 "data_size": 65536 00:34:00.160 }, 00:34:00.160 { 00:34:00.160 "name": "BaseBdev2", 00:34:00.160 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:34:00.160 "is_configured": true, 00:34:00.160 "data_offset": 0, 00:34:00.160 "data_size": 65536 00:34:00.160 }, 00:34:00.160 { 00:34:00.160 "name": "BaseBdev3", 00:34:00.160 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:34:00.160 "is_configured": true, 00:34:00.160 "data_offset": 0, 00:34:00.160 "data_size": 65536 00:34:00.160 }, 00:34:00.160 { 00:34:00.160 "name": "BaseBdev4", 00:34:00.160 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:34:00.160 "is_configured": true, 00:34:00.160 "data_offset": 0, 00:34:00.160 "data_size": 65536 00:34:00.160 } 00:34:00.160 ] 00:34:00.160 } 00:34:00.160 } 00:34:00.160 }' 00:34:00.160 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:00.160 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:34:00.160 BaseBdev2 00:34:00.160 BaseBdev3 00:34:00.160 BaseBdev4' 00:34:00.160 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:00.160 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:34:00.160 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:00.419 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:00.419 "name": "NewBaseBdev", 00:34:00.419 "aliases": [ 00:34:00.419 "1ed379b7-d25e-4981-9542-1daa198658be" 00:34:00.419 ], 00:34:00.419 "product_name": "Malloc disk", 00:34:00.419 "block_size": 512, 00:34:00.419 "num_blocks": 65536, 00:34:00.419 "uuid": "1ed379b7-d25e-4981-9542-1daa198658be", 00:34:00.419 "assigned_rate_limits": { 00:34:00.419 "rw_ios_per_sec": 0, 00:34:00.419 "rw_mbytes_per_sec": 0, 00:34:00.419 "r_mbytes_per_sec": 0, 00:34:00.419 "w_mbytes_per_sec": 0 00:34:00.419 }, 00:34:00.419 "claimed": true, 00:34:00.419 "claim_type": "exclusive_write", 00:34:00.419 "zoned": false, 00:34:00.419 "supported_io_types": { 00:34:00.419 "read": true, 00:34:00.419 "write": true, 00:34:00.419 "unmap": true, 00:34:00.419 "flush": true, 00:34:00.419 "reset": true, 00:34:00.419 "nvme_admin": false, 00:34:00.419 "nvme_io": false, 00:34:00.419 "nvme_io_md": false, 00:34:00.419 "write_zeroes": true, 00:34:00.419 "zcopy": true, 00:34:00.419 "get_zone_info": false, 00:34:00.419 "zone_management": false, 00:34:00.419 "zone_append": false, 00:34:00.419 "compare": false, 00:34:00.419 "compare_and_write": false, 00:34:00.419 "abort": true, 00:34:00.419 "seek_hole": false, 00:34:00.419 "seek_data": false, 00:34:00.419 "copy": true, 00:34:00.419 "nvme_iov_md": false 00:34:00.419 }, 00:34:00.419 "memory_domains": [ 00:34:00.419 { 00:34:00.419 "dma_device_id": "system", 00:34:00.419 "dma_device_type": 1 00:34:00.419 }, 00:34:00.419 { 00:34:00.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.419 "dma_device_type": 2 00:34:00.419 } 00:34:00.419 ], 00:34:00.419 "driver_specific": {} 00:34:00.419 }' 00:34:00.419 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.419 11:45:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.419 11:45:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:00.419 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:00.419 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:00.419 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:00.419 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:00.419 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:00.677 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:00.677 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:00.677 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:00.678 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:00.678 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:00.678 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:00.678 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:00.936 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:00.936 "name": "BaseBdev2", 00:34:00.936 "aliases": [ 00:34:00.936 "b537517b-5d51-461e-b800-2dff60bac482" 00:34:00.936 ], 00:34:00.936 "product_name": "Malloc disk", 00:34:00.936 "block_size": 512, 00:34:00.936 "num_blocks": 65536, 00:34:00.936 "uuid": "b537517b-5d51-461e-b800-2dff60bac482", 00:34:00.936 "assigned_rate_limits": { 00:34:00.936 "rw_ios_per_sec": 0, 00:34:00.936 "rw_mbytes_per_sec": 0, 00:34:00.936 "r_mbytes_per_sec": 0, 00:34:00.936 "w_mbytes_per_sec": 0 00:34:00.936 }, 00:34:00.936 "claimed": true, 00:34:00.936 "claim_type": "exclusive_write", 00:34:00.936 "zoned": false, 00:34:00.936 "supported_io_types": { 00:34:00.936 "read": true, 00:34:00.936 "write": true, 00:34:00.936 "unmap": true, 00:34:00.936 "flush": true, 00:34:00.936 "reset": true, 00:34:00.936 "nvme_admin": false, 00:34:00.936 "nvme_io": false, 00:34:00.936 "nvme_io_md": false, 00:34:00.936 "write_zeroes": true, 00:34:00.936 "zcopy": true, 00:34:00.936 "get_zone_info": false, 00:34:00.936 "zone_management": false, 00:34:00.936 "zone_append": false, 00:34:00.936 "compare": false, 00:34:00.936 "compare_and_write": false, 00:34:00.936 "abort": true, 00:34:00.936 "seek_hole": false, 00:34:00.936 "seek_data": false, 00:34:00.936 "copy": true, 00:34:00.936 "nvme_iov_md": false 00:34:00.936 }, 00:34:00.936 "memory_domains": [ 00:34:00.936 { 00:34:00.936 "dma_device_id": "system", 00:34:00.936 "dma_device_type": 1 00:34:00.936 }, 00:34:00.936 { 00:34:00.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.936 "dma_device_type": 2 00:34:00.936 } 00:34:00.936 ], 00:34:00.936 "driver_specific": {} 00:34:00.936 }' 00:34:00.936 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.936 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.936 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:00.936 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:34:00.936 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:01.195 11:45:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:01.453 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:01.453 "name": "BaseBdev3", 00:34:01.454 "aliases": [ 00:34:01.454 "2784d210-1157-459b-bfa0-f29d55bce37f" 00:34:01.454 ], 00:34:01.454 "product_name": "Malloc disk", 00:34:01.454 "block_size": 512, 00:34:01.454 "num_blocks": 65536, 00:34:01.454 "uuid": "2784d210-1157-459b-bfa0-f29d55bce37f", 00:34:01.454 "assigned_rate_limits": { 00:34:01.454 "rw_ios_per_sec": 0, 00:34:01.454 "rw_mbytes_per_sec": 0, 00:34:01.454 "r_mbytes_per_sec": 0, 00:34:01.454 "w_mbytes_per_sec": 0 00:34:01.454 }, 00:34:01.454 "claimed": true, 00:34:01.454 "claim_type": "exclusive_write", 00:34:01.454 "zoned": false, 00:34:01.454 "supported_io_types": { 00:34:01.454 "read": true, 00:34:01.454 "write": true, 00:34:01.454 "unmap": true, 00:34:01.454 "flush": true, 00:34:01.454 "reset": true, 00:34:01.454 "nvme_admin": false, 00:34:01.454 "nvme_io": false, 00:34:01.454 "nvme_io_md": false, 00:34:01.454 "write_zeroes": true, 00:34:01.454 "zcopy": true, 00:34:01.454 "get_zone_info": false, 00:34:01.454 "zone_management": false, 00:34:01.454 "zone_append": false, 00:34:01.454 "compare": false, 00:34:01.454 "compare_and_write": false, 00:34:01.454 "abort": true, 00:34:01.454 "seek_hole": false, 00:34:01.454 "seek_data": false, 00:34:01.454 "copy": true, 00:34:01.454 "nvme_iov_md": false 00:34:01.454 }, 00:34:01.454 "memory_domains": [ 00:34:01.454 { 00:34:01.454 "dma_device_id": "system", 00:34:01.454 "dma_device_type": 1 00:34:01.454 }, 00:34:01.454 { 00:34:01.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:01.454 "dma_device_type": 2 00:34:01.454 } 00:34:01.454 ], 00:34:01.454 "driver_specific": {} 00:34:01.454 }' 00:34:01.454 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:01.454 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:01.454 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:01.454 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:01.712 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:01.712 11:45:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:01.712 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:01.712 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:01.712 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:01.712 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:01.712 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:01.971 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:01.971 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:01.971 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:34:01.971 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:02.230 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:02.230 "name": "BaseBdev4", 00:34:02.230 "aliases": [ 00:34:02.230 "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3" 00:34:02.230 ], 00:34:02.230 "product_name": "Malloc disk", 00:34:02.230 "block_size": 512, 00:34:02.230 "num_blocks": 65536, 00:34:02.230 "uuid": "e7f46ddd-ae21-4fde-9ee5-a8323143a0f3", 00:34:02.230 "assigned_rate_limits": { 00:34:02.230 "rw_ios_per_sec": 0, 00:34:02.230 "rw_mbytes_per_sec": 0, 00:34:02.230 "r_mbytes_per_sec": 0, 00:34:02.230 "w_mbytes_per_sec": 0 00:34:02.230 }, 00:34:02.230 "claimed": true, 00:34:02.230 "claim_type": "exclusive_write", 00:34:02.230 "zoned": false, 00:34:02.230 "supported_io_types": { 00:34:02.230 "read": true, 00:34:02.230 "write": true, 00:34:02.230 "unmap": true, 00:34:02.230 "flush": true, 00:34:02.230 "reset": true, 00:34:02.230 "nvme_admin": false, 00:34:02.230 "nvme_io": false, 00:34:02.230 "nvme_io_md": false, 00:34:02.230 "write_zeroes": true, 00:34:02.230 "zcopy": true, 00:34:02.230 "get_zone_info": false, 00:34:02.230 "zone_management": false, 00:34:02.230 "zone_append": false, 00:34:02.230 "compare": false, 00:34:02.230 "compare_and_write": false, 00:34:02.230 "abort": true, 00:34:02.230 "seek_hole": false, 00:34:02.230 "seek_data": false, 00:34:02.230 "copy": true, 00:34:02.230 "nvme_iov_md": false 00:34:02.230 }, 00:34:02.230 "memory_domains": [ 00:34:02.230 { 00:34:02.230 "dma_device_id": "system", 00:34:02.230 "dma_device_type": 1 00:34:02.230 }, 00:34:02.230 { 00:34:02.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:02.230 "dma_device_type": 2 00:34:02.230 } 00:34:02.230 ], 00:34:02.230 "driver_specific": {} 00:34:02.230 }' 00:34:02.230 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:02.230 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:02.230 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:02.230 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:02.230 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:02.230 11:45:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:02.230 11:45:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:02.489 11:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:02.489 11:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:02.489 11:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:02.489 11:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:02.489 11:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:02.489 11:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:02.748 [2024-07-13 11:45:37.408170] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:02.748 [2024-07-13 11:45:37.408206] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:02.748 [2024-07-13 11:45:37.408286] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:02.748 [2024-07-13 11:45:37.408591] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:02.748 [2024-07-13 11:45:37.408611] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 155411 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 155411 ']' 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # kill -0 155411 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # uname 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155411 00:34:02.748 killing process with pid 155411 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155411' 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@967 -- # kill 155411 00:34:02.748 11:45:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # wait 155411 00:34:02.748 [2024-07-13 11:45:37.444318] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:03.007 [2024-07-13 11:45:37.713705] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:04.385 ************************************ 00:34:04.385 END TEST raid5f_state_function_test 00:34:04.385 ************************************ 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:34:04.385 00:34:04.385 real 0m33.199s 00:34:04.385 user 1m2.491s 00:34:04.385 sys 0m3.524s 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:04.385 
11:45:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:04.385 11:45:38 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:34:04.385 11:45:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:34:04.385 11:45:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:04.385 11:45:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:04.385 ************************************ 00:34:04.385 START TEST raid5f_state_function_test_sb 00:34:04.385 ************************************ 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid5f 4 true 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=156546 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 156546' 00:34:04.385 Process raid pid: 156546 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 156546 /var/tmp/spdk-raid.sock 00:34:04.385 11:45:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:04.386 11:45:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 156546 ']' 00:34:04.386 11:45:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:04.386 11:45:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:04.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:04.386 11:45:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:04.386 11:45:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:04.386 11:45:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.386 [2024-07-13 11:45:38.871387] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:34:04.386 [2024-07-13 11:45:38.871609] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.386 [2024-07-13 11:45:39.044703] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.645 [2024-07-13 11:45:39.273430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.904 [2024-07-13 11:45:39.461971] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:05.162 11:45:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:05.162 11:45:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:34:05.162 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:05.162 [2024-07-13 11:45:39.859957] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:05.162 [2024-07-13 11:45:39.860043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:05.163 [2024-07-13 11:45:39.860057] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:05.163 [2024-07-13 11:45:39.860081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:05.163 [2024-07-13 11:45:39.860090] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:05.163 [2024-07-13 11:45:39.860105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:05.163 [2024-07-13 11:45:39.860112] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:05.163 [2024-07-13 11:45:39.860133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
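For orientation, the app being driven here is SPDK's standalone bdev_svc helper, started on a private RPC socket with raid debug logging enabled. A minimal by-hand sketch of the same setup, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk and a free socket path (the harness itself uses its waitforlisten/killprocess helpers rather than the sleep and kill below):
SPDK=/home/vagrant/spdk_repo/spdk
# -i 0 sets the shared-memory id; -L bdev_raid enables the *DEBUG* bdev_raid lines seen in this log
$SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
sleep 1   # crude stand-in for the harness's waitforlisten helper
# every rpc.py call in the trace targets this socket via -s
$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
kill $raid_pid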
00:34:05.163 11:45:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:05.421 11:45:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:05.421 "name": "Existed_Raid", 00:34:05.421 "uuid": "69a12de3-bf3a-408b-a201-f36386754b46", 00:34:05.421 "strip_size_kb": 64, 00:34:05.421 "state": "configuring", 00:34:05.421 "raid_level": "raid5f", 00:34:05.421 "superblock": true, 00:34:05.421 "num_base_bdevs": 4, 00:34:05.421 "num_base_bdevs_discovered": 0, 00:34:05.421 "num_base_bdevs_operational": 4, 00:34:05.421 "base_bdevs_list": [ 00:34:05.421 { 00:34:05.421 "name": "BaseBdev1", 00:34:05.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.421 "is_configured": false, 00:34:05.421 "data_offset": 0, 00:34:05.421 "data_size": 0 00:34:05.421 }, 00:34:05.421 { 00:34:05.421 "name": "BaseBdev2", 00:34:05.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.421 "is_configured": false, 00:34:05.421 "data_offset": 0, 00:34:05.421 "data_size": 0 00:34:05.421 }, 00:34:05.421 { 00:34:05.421 "name": "BaseBdev3", 00:34:05.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.421 "is_configured": false, 00:34:05.421 "data_offset": 0, 00:34:05.421 "data_size": 0 00:34:05.421 }, 00:34:05.421 { 00:34:05.421 "name": "BaseBdev4", 00:34:05.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.421 "is_configured": false, 00:34:05.421 "data_offset": 0, 00:34:05.421 "data_size": 0 00:34:05.421 } 00:34:05.421 ] 00:34:05.421 }' 00:34:05.421 11:45:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:05.421 11:45:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.354 11:45:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:06.354 [2024-07-13 11:45:40.943992] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:06.354 [2024-07-13 11:45:40.944021] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:34:06.354 11:45:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:06.612 [2024-07-13 11:45:41.124047] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:06.612 [2024-07-13 11:45:41.124093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:06.612 [2024-07-13 11:45:41.124104] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:06.612 [2024-07-13 11:45:41.124144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:06.612 [2024-07-13 11:45:41.124154] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:06.612 [2024-07-13 11:45:41.124183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:06.612 [2024-07-13 11:45:41.124191] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:06.612 [2024-07-13 11:45:41.124218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 
00:34:06.612 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:06.612 [2024-07-13 11:45:41.337530] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:06.612 BaseBdev1 00:34:06.612 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:06.612 11:45:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:06.612 11:45:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:06.612 11:45:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:06.612 11:45:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:06.612 11:45:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:06.612 11:45:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:06.870 11:45:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:07.128 [ 00:34:07.128 { 00:34:07.128 "name": "BaseBdev1", 00:34:07.128 "aliases": [ 00:34:07.128 "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0" 00:34:07.128 ], 00:34:07.128 "product_name": "Malloc disk", 00:34:07.128 "block_size": 512, 00:34:07.128 "num_blocks": 65536, 00:34:07.128 "uuid": "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0", 00:34:07.128 "assigned_rate_limits": { 00:34:07.128 "rw_ios_per_sec": 0, 00:34:07.128 "rw_mbytes_per_sec": 0, 00:34:07.128 "r_mbytes_per_sec": 0, 00:34:07.128 "w_mbytes_per_sec": 0 00:34:07.128 }, 00:34:07.128 "claimed": true, 00:34:07.128 "claim_type": "exclusive_write", 00:34:07.128 "zoned": false, 00:34:07.128 "supported_io_types": { 00:34:07.128 "read": true, 00:34:07.128 "write": true, 00:34:07.128 "unmap": true, 00:34:07.128 "flush": true, 00:34:07.128 "reset": true, 00:34:07.128 "nvme_admin": false, 00:34:07.128 "nvme_io": false, 00:34:07.128 "nvme_io_md": false, 00:34:07.128 "write_zeroes": true, 00:34:07.128 "zcopy": true, 00:34:07.128 "get_zone_info": false, 00:34:07.128 "zone_management": false, 00:34:07.128 "zone_append": false, 00:34:07.128 "compare": false, 00:34:07.128 "compare_and_write": false, 00:34:07.128 "abort": true, 00:34:07.128 "seek_hole": false, 00:34:07.128 "seek_data": false, 00:34:07.128 "copy": true, 00:34:07.128 "nvme_iov_md": false 00:34:07.128 }, 00:34:07.128 "memory_domains": [ 00:34:07.128 { 00:34:07.129 "dma_device_id": "system", 00:34:07.129 "dma_device_type": 1 00:34:07.129 }, 00:34:07.129 { 00:34:07.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:07.129 "dma_device_type": 2 00:34:07.129 } 00:34:07.129 ], 00:34:07.129 "driver_specific": {} 00:34:07.129 } 00:34:07.129 ] 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:07.129 11:45:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.129 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:07.387 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:07.387 "name": "Existed_Raid", 00:34:07.387 "uuid": "a21c027e-a99e-4869-8b16-ebc6375284cb", 00:34:07.387 "strip_size_kb": 64, 00:34:07.387 "state": "configuring", 00:34:07.387 "raid_level": "raid5f", 00:34:07.387 "superblock": true, 00:34:07.387 "num_base_bdevs": 4, 00:34:07.387 "num_base_bdevs_discovered": 1, 00:34:07.387 "num_base_bdevs_operational": 4, 00:34:07.387 "base_bdevs_list": [ 00:34:07.387 { 00:34:07.387 "name": "BaseBdev1", 00:34:07.387 "uuid": "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0", 00:34:07.387 "is_configured": true, 00:34:07.387 "data_offset": 2048, 00:34:07.387 "data_size": 63488 00:34:07.387 }, 00:34:07.387 { 00:34:07.387 "name": "BaseBdev2", 00:34:07.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.387 "is_configured": false, 00:34:07.387 "data_offset": 0, 00:34:07.387 "data_size": 0 00:34:07.387 }, 00:34:07.387 { 00:34:07.387 "name": "BaseBdev3", 00:34:07.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.387 "is_configured": false, 00:34:07.387 "data_offset": 0, 00:34:07.387 "data_size": 0 00:34:07.387 }, 00:34:07.387 { 00:34:07.387 "name": "BaseBdev4", 00:34:07.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:07.387 "is_configured": false, 00:34:07.387 "data_offset": 0, 00:34:07.387 "data_size": 0 00:34:07.387 } 00:34:07.387 ] 00:34:07.387 }' 00:34:07.387 11:45:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:07.387 11:45:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.953 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:08.210 [2024-07-13 11:45:42.725865] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:08.210 [2024-07-13 11:45:42.725905] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 
'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:08.210 [2024-07-13 11:45:42.905898] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:08.210 [2024-07-13 11:45:42.907753] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:08.210 [2024-07-13 11:45:42.907810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:08.210 [2024-07-13 11:45:42.907822] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:08.210 [2024-07-13 11:45:42.907847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:08.210 [2024-07-13 11:45:42.907856] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:08.210 [2024-07-13 11:45:42.907882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:08.210 11:45:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.468 11:45:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:08.468 "name": "Existed_Raid", 00:34:08.468 "uuid": "4e4fb223-5a0a-4cf1-be7c-7aaf00078936", 00:34:08.468 "strip_size_kb": 64, 00:34:08.468 "state": "configuring", 00:34:08.468 "raid_level": "raid5f", 00:34:08.468 "superblock": true, 00:34:08.468 "num_base_bdevs": 4, 00:34:08.468 "num_base_bdevs_discovered": 1, 00:34:08.468 "num_base_bdevs_operational": 4, 00:34:08.468 "base_bdevs_list": [ 00:34:08.468 { 00:34:08.468 "name": "BaseBdev1", 00:34:08.468 "uuid": "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0", 00:34:08.468 "is_configured": true, 00:34:08.468 "data_offset": 2048, 00:34:08.468 "data_size": 63488 00:34:08.468 }, 00:34:08.468 { 00:34:08.468 "name": "BaseBdev2", 00:34:08.468 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:34:08.468 "is_configured": false, 00:34:08.468 "data_offset": 0, 00:34:08.468 "data_size": 0 00:34:08.468 }, 00:34:08.468 { 00:34:08.468 "name": "BaseBdev3", 00:34:08.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.468 "is_configured": false, 00:34:08.468 "data_offset": 0, 00:34:08.468 "data_size": 0 00:34:08.468 }, 00:34:08.468 { 00:34:08.468 "name": "BaseBdev4", 00:34:08.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.468 "is_configured": false, 00:34:08.468 "data_offset": 0, 00:34:08.468 "data_size": 0 00:34:08.468 } 00:34:08.468 ] 00:34:08.468 }' 00:34:08.468 11:45:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:08.468 11:45:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:09.032 11:45:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:09.289 [2024-07-13 11:45:43.989630] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:09.289 BaseBdev2 00:34:09.289 11:45:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:34:09.289 11:45:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:09.289 11:45:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:09.289 11:45:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:09.289 11:45:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:09.289 11:45:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:09.289 11:45:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:09.546 11:45:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:09.803 [ 00:34:09.803 { 00:34:09.803 "name": "BaseBdev2", 00:34:09.803 "aliases": [ 00:34:09.803 "ee1433b2-4c82-4c71-8485-678079dfeedd" 00:34:09.803 ], 00:34:09.803 "product_name": "Malloc disk", 00:34:09.803 "block_size": 512, 00:34:09.803 "num_blocks": 65536, 00:34:09.803 "uuid": "ee1433b2-4c82-4c71-8485-678079dfeedd", 00:34:09.803 "assigned_rate_limits": { 00:34:09.803 "rw_ios_per_sec": 0, 00:34:09.803 "rw_mbytes_per_sec": 0, 00:34:09.803 "r_mbytes_per_sec": 0, 00:34:09.803 "w_mbytes_per_sec": 0 00:34:09.803 }, 00:34:09.803 "claimed": true, 00:34:09.803 "claim_type": "exclusive_write", 00:34:09.803 "zoned": false, 00:34:09.803 "supported_io_types": { 00:34:09.803 "read": true, 00:34:09.803 "write": true, 00:34:09.803 "unmap": true, 00:34:09.803 "flush": true, 00:34:09.803 "reset": true, 00:34:09.803 "nvme_admin": false, 00:34:09.803 "nvme_io": false, 00:34:09.803 "nvme_io_md": false, 00:34:09.803 "write_zeroes": true, 00:34:09.803 "zcopy": true, 00:34:09.803 "get_zone_info": false, 00:34:09.803 "zone_management": false, 00:34:09.803 "zone_append": false, 00:34:09.803 "compare": false, 00:34:09.803 "compare_and_write": false, 00:34:09.803 "abort": true, 00:34:09.803 "seek_hole": false, 00:34:09.803 "seek_data": false, 00:34:09.803 "copy": 
true, 00:34:09.803 "nvme_iov_md": false 00:34:09.803 }, 00:34:09.803 "memory_domains": [ 00:34:09.803 { 00:34:09.803 "dma_device_id": "system", 00:34:09.803 "dma_device_type": 1 00:34:09.803 }, 00:34:09.803 { 00:34:09.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:09.803 "dma_device_type": 2 00:34:09.803 } 00:34:09.803 ], 00:34:09.803 "driver_specific": {} 00:34:09.803 } 00:34:09.803 ] 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:09.803 "name": "Existed_Raid", 00:34:09.803 "uuid": "4e4fb223-5a0a-4cf1-be7c-7aaf00078936", 00:34:09.803 "strip_size_kb": 64, 00:34:09.803 "state": "configuring", 00:34:09.803 "raid_level": "raid5f", 00:34:09.803 "superblock": true, 00:34:09.803 "num_base_bdevs": 4, 00:34:09.803 "num_base_bdevs_discovered": 2, 00:34:09.803 "num_base_bdevs_operational": 4, 00:34:09.803 "base_bdevs_list": [ 00:34:09.803 { 00:34:09.803 "name": "BaseBdev1", 00:34:09.803 "uuid": "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0", 00:34:09.803 "is_configured": true, 00:34:09.803 "data_offset": 2048, 00:34:09.803 "data_size": 63488 00:34:09.803 }, 00:34:09.803 { 00:34:09.803 "name": "BaseBdev2", 00:34:09.803 "uuid": "ee1433b2-4c82-4c71-8485-678079dfeedd", 00:34:09.803 "is_configured": true, 00:34:09.803 "data_offset": 2048, 00:34:09.803 "data_size": 63488 00:34:09.803 }, 00:34:09.803 { 00:34:09.803 "name": "BaseBdev3", 00:34:09.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.803 "is_configured": false, 00:34:09.803 "data_offset": 0, 00:34:09.803 "data_size": 0 00:34:09.803 }, 00:34:09.803 { 00:34:09.803 "name": "BaseBdev4", 00:34:09.803 "uuid": "00000000-0000-0000-0000-000000000000", 
00:34:09.803 "is_configured": false, 00:34:09.803 "data_offset": 0, 00:34:09.803 "data_size": 0 00:34:09.803 } 00:34:09.803 ] 00:34:09.803 }' 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:09.803 11:45:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:10.735 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:10.735 [2024-07-13 11:45:45.425202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:10.735 BaseBdev3 00:34:10.735 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:34:10.735 11:45:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:10.735 11:45:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:10.735 11:45:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:10.735 11:45:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:10.735 11:45:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:10.735 11:45:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:10.993 11:45:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:11.251 [ 00:34:11.251 { 00:34:11.251 "name": "BaseBdev3", 00:34:11.251 "aliases": [ 00:34:11.251 "b2b380e8-78d3-4a2a-a737-4d5e4dcebb56" 00:34:11.251 ], 00:34:11.251 "product_name": "Malloc disk", 00:34:11.251 "block_size": 512, 00:34:11.251 "num_blocks": 65536, 00:34:11.251 "uuid": "b2b380e8-78d3-4a2a-a737-4d5e4dcebb56", 00:34:11.251 "assigned_rate_limits": { 00:34:11.251 "rw_ios_per_sec": 0, 00:34:11.251 "rw_mbytes_per_sec": 0, 00:34:11.251 "r_mbytes_per_sec": 0, 00:34:11.251 "w_mbytes_per_sec": 0 00:34:11.251 }, 00:34:11.251 "claimed": true, 00:34:11.251 "claim_type": "exclusive_write", 00:34:11.251 "zoned": false, 00:34:11.251 "supported_io_types": { 00:34:11.251 "read": true, 00:34:11.251 "write": true, 00:34:11.251 "unmap": true, 00:34:11.251 "flush": true, 00:34:11.251 "reset": true, 00:34:11.251 "nvme_admin": false, 00:34:11.251 "nvme_io": false, 00:34:11.251 "nvme_io_md": false, 00:34:11.251 "write_zeroes": true, 00:34:11.251 "zcopy": true, 00:34:11.251 "get_zone_info": false, 00:34:11.251 "zone_management": false, 00:34:11.251 "zone_append": false, 00:34:11.251 "compare": false, 00:34:11.251 "compare_and_write": false, 00:34:11.251 "abort": true, 00:34:11.251 "seek_hole": false, 00:34:11.251 "seek_data": false, 00:34:11.251 "copy": true, 00:34:11.251 "nvme_iov_md": false 00:34:11.251 }, 00:34:11.251 "memory_domains": [ 00:34:11.251 { 00:34:11.251 "dma_device_id": "system", 00:34:11.251 "dma_device_type": 1 00:34:11.251 }, 00:34:11.251 { 00:34:11.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:11.251 "dma_device_type": 2 00:34:11.251 } 00:34:11.251 ], 00:34:11.251 "driver_specific": {} 00:34:11.251 } 00:34:11.251 ] 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@905 -- # return 0 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:11.251 "name": "Existed_Raid", 00:34:11.251 "uuid": "4e4fb223-5a0a-4cf1-be7c-7aaf00078936", 00:34:11.251 "strip_size_kb": 64, 00:34:11.251 "state": "configuring", 00:34:11.251 "raid_level": "raid5f", 00:34:11.251 "superblock": true, 00:34:11.251 "num_base_bdevs": 4, 00:34:11.251 "num_base_bdevs_discovered": 3, 00:34:11.251 "num_base_bdevs_operational": 4, 00:34:11.251 "base_bdevs_list": [ 00:34:11.251 { 00:34:11.251 "name": "BaseBdev1", 00:34:11.251 "uuid": "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0", 00:34:11.251 "is_configured": true, 00:34:11.251 "data_offset": 2048, 00:34:11.251 "data_size": 63488 00:34:11.251 }, 00:34:11.251 { 00:34:11.251 "name": "BaseBdev2", 00:34:11.251 "uuid": "ee1433b2-4c82-4c71-8485-678079dfeedd", 00:34:11.251 "is_configured": true, 00:34:11.251 "data_offset": 2048, 00:34:11.251 "data_size": 63488 00:34:11.251 }, 00:34:11.251 { 00:34:11.251 "name": "BaseBdev3", 00:34:11.251 "uuid": "b2b380e8-78d3-4a2a-a737-4d5e4dcebb56", 00:34:11.251 "is_configured": true, 00:34:11.251 "data_offset": 2048, 00:34:11.251 "data_size": 63488 00:34:11.251 }, 00:34:11.251 { 00:34:11.251 "name": "BaseBdev4", 00:34:11.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.251 "is_configured": false, 00:34:11.251 "data_offset": 0, 00:34:11.251 "data_size": 0 00:34:11.251 } 00:34:11.251 ] 00:34:11.251 }' 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:11.251 11:45:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:12.187 11:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:12.187 [2024-07-13 11:45:46.875568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:12.187 BaseBdev4 00:34:12.187 [2024-07-13 11:45:46.875804] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:34:12.187 [2024-07-13 11:45:46.875819] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:12.187 [2024-07-13 11:45:46.875981] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:34:12.187 [2024-07-13 11:45:46.881565] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:34:12.187 [2024-07-13 11:45:46.881591] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:34:12.187 [2024-07-13 11:45:46.881745] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:12.187 11:45:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:34:12.187 11:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:34:12.187 11:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:12.187 11:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:12.187 11:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:12.187 11:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:12.187 11:45:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:12.445 11:45:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:12.704 [ 00:34:12.704 { 00:34:12.704 "name": "BaseBdev4", 00:34:12.704 "aliases": [ 00:34:12.704 "193b1ef1-8a0f-4d39-9794-cf5e0e343fc4" 00:34:12.704 ], 00:34:12.704 "product_name": "Malloc disk", 00:34:12.704 "block_size": 512, 00:34:12.704 "num_blocks": 65536, 00:34:12.704 "uuid": "193b1ef1-8a0f-4d39-9794-cf5e0e343fc4", 00:34:12.704 "assigned_rate_limits": { 00:34:12.704 "rw_ios_per_sec": 0, 00:34:12.704 "rw_mbytes_per_sec": 0, 00:34:12.704 "r_mbytes_per_sec": 0, 00:34:12.704 "w_mbytes_per_sec": 0 00:34:12.704 }, 00:34:12.704 "claimed": true, 00:34:12.704 "claim_type": "exclusive_write", 00:34:12.704 "zoned": false, 00:34:12.704 "supported_io_types": { 00:34:12.704 "read": true, 00:34:12.704 "write": true, 00:34:12.704 "unmap": true, 00:34:12.704 "flush": true, 00:34:12.704 "reset": true, 00:34:12.704 "nvme_admin": false, 00:34:12.704 "nvme_io": false, 00:34:12.704 "nvme_io_md": false, 00:34:12.704 "write_zeroes": true, 00:34:12.704 "zcopy": true, 00:34:12.704 "get_zone_info": false, 00:34:12.704 "zone_management": false, 00:34:12.704 "zone_append": false, 00:34:12.704 "compare": false, 00:34:12.704 "compare_and_write": false, 00:34:12.704 "abort": true, 00:34:12.704 "seek_hole": false, 00:34:12.704 "seek_data": false, 00:34:12.704 "copy": true, 00:34:12.704 "nvme_iov_md": false 00:34:12.704 }, 00:34:12.704 "memory_domains": [ 00:34:12.704 { 00:34:12.704 "dma_device_id": "system", 
00:34:12.704 "dma_device_type": 1 00:34:12.704 }, 00:34:12.704 { 00:34:12.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:12.704 "dma_device_type": 2 00:34:12.704 } 00:34:12.704 ], 00:34:12.704 "driver_specific": {} 00:34:12.704 } 00:34:12.704 ] 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:12.704 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:12.963 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:12.963 "name": "Existed_Raid", 00:34:12.963 "uuid": "4e4fb223-5a0a-4cf1-be7c-7aaf00078936", 00:34:12.963 "strip_size_kb": 64, 00:34:12.963 "state": "online", 00:34:12.963 "raid_level": "raid5f", 00:34:12.963 "superblock": true, 00:34:12.963 "num_base_bdevs": 4, 00:34:12.963 "num_base_bdevs_discovered": 4, 00:34:12.963 "num_base_bdevs_operational": 4, 00:34:12.963 "base_bdevs_list": [ 00:34:12.963 { 00:34:12.963 "name": "BaseBdev1", 00:34:12.963 "uuid": "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0", 00:34:12.963 "is_configured": true, 00:34:12.963 "data_offset": 2048, 00:34:12.963 "data_size": 63488 00:34:12.963 }, 00:34:12.963 { 00:34:12.963 "name": "BaseBdev2", 00:34:12.963 "uuid": "ee1433b2-4c82-4c71-8485-678079dfeedd", 00:34:12.963 "is_configured": true, 00:34:12.963 "data_offset": 2048, 00:34:12.963 "data_size": 63488 00:34:12.963 }, 00:34:12.963 { 00:34:12.963 "name": "BaseBdev3", 00:34:12.963 "uuid": "b2b380e8-78d3-4a2a-a737-4d5e4dcebb56", 00:34:12.963 "is_configured": true, 00:34:12.963 "data_offset": 2048, 00:34:12.963 "data_size": 63488 00:34:12.963 }, 00:34:12.963 { 00:34:12.963 "name": "BaseBdev4", 00:34:12.963 "uuid": "193b1ef1-8a0f-4d39-9794-cf5e0e343fc4", 00:34:12.963 "is_configured": true, 00:34:12.963 "data_offset": 2048, 00:34:12.963 "data_size": 63488 00:34:12.963 } 00:34:12.963 ] 00:34:12.963 }' 
00:34:12.963 11:45:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:12.963 11:45:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:13.898 [2024-07-13 11:45:48.596291] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:13.898 "name": "Existed_Raid", 00:34:13.898 "aliases": [ 00:34:13.898 "4e4fb223-5a0a-4cf1-be7c-7aaf00078936" 00:34:13.898 ], 00:34:13.898 "product_name": "Raid Volume", 00:34:13.898 "block_size": 512, 00:34:13.898 "num_blocks": 190464, 00:34:13.898 "uuid": "4e4fb223-5a0a-4cf1-be7c-7aaf00078936", 00:34:13.898 "assigned_rate_limits": { 00:34:13.898 "rw_ios_per_sec": 0, 00:34:13.898 "rw_mbytes_per_sec": 0, 00:34:13.898 "r_mbytes_per_sec": 0, 00:34:13.898 "w_mbytes_per_sec": 0 00:34:13.898 }, 00:34:13.898 "claimed": false, 00:34:13.898 "zoned": false, 00:34:13.898 "supported_io_types": { 00:34:13.898 "read": true, 00:34:13.898 "write": true, 00:34:13.898 "unmap": false, 00:34:13.898 "flush": false, 00:34:13.898 "reset": true, 00:34:13.898 "nvme_admin": false, 00:34:13.898 "nvme_io": false, 00:34:13.898 "nvme_io_md": false, 00:34:13.898 "write_zeroes": true, 00:34:13.898 "zcopy": false, 00:34:13.898 "get_zone_info": false, 00:34:13.898 "zone_management": false, 00:34:13.898 "zone_append": false, 00:34:13.898 "compare": false, 00:34:13.898 "compare_and_write": false, 00:34:13.898 "abort": false, 00:34:13.898 "seek_hole": false, 00:34:13.898 "seek_data": false, 00:34:13.898 "copy": false, 00:34:13.898 "nvme_iov_md": false 00:34:13.898 }, 00:34:13.898 "driver_specific": { 00:34:13.898 "raid": { 00:34:13.898 "uuid": "4e4fb223-5a0a-4cf1-be7c-7aaf00078936", 00:34:13.898 "strip_size_kb": 64, 00:34:13.898 "state": "online", 00:34:13.898 "raid_level": "raid5f", 00:34:13.898 "superblock": true, 00:34:13.898 "num_base_bdevs": 4, 00:34:13.898 "num_base_bdevs_discovered": 4, 00:34:13.898 "num_base_bdevs_operational": 4, 00:34:13.898 "base_bdevs_list": [ 00:34:13.898 { 00:34:13.898 "name": "BaseBdev1", 00:34:13.898 "uuid": "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0", 00:34:13.898 "is_configured": true, 00:34:13.898 "data_offset": 2048, 00:34:13.898 "data_size": 63488 00:34:13.898 }, 00:34:13.898 { 00:34:13.898 "name": "BaseBdev2", 00:34:13.898 "uuid": "ee1433b2-4c82-4c71-8485-678079dfeedd", 00:34:13.898 "is_configured": true, 00:34:13.898 "data_offset": 2048, 00:34:13.898 "data_size": 63488 
00:34:13.898 }, 00:34:13.898 { 00:34:13.898 "name": "BaseBdev3", 00:34:13.898 "uuid": "b2b380e8-78d3-4a2a-a737-4d5e4dcebb56", 00:34:13.898 "is_configured": true, 00:34:13.898 "data_offset": 2048, 00:34:13.898 "data_size": 63488 00:34:13.898 }, 00:34:13.898 { 00:34:13.898 "name": "BaseBdev4", 00:34:13.898 "uuid": "193b1ef1-8a0f-4d39-9794-cf5e0e343fc4", 00:34:13.898 "is_configured": true, 00:34:13.898 "data_offset": 2048, 00:34:13.898 "data_size": 63488 00:34:13.898 } 00:34:13.898 ] 00:34:13.898 } 00:34:13.898 } 00:34:13.898 }' 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:34:13.898 BaseBdev2 00:34:13.898 BaseBdev3 00:34:13.898 BaseBdev4' 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:13.898 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:14.156 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:14.156 "name": "BaseBdev1", 00:34:14.156 "aliases": [ 00:34:14.156 "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0" 00:34:14.156 ], 00:34:14.156 "product_name": "Malloc disk", 00:34:14.156 "block_size": 512, 00:34:14.156 "num_blocks": 65536, 00:34:14.156 "uuid": "ca90e259-9bdd-41c9-bbbf-52acfb5f98d0", 00:34:14.156 "assigned_rate_limits": { 00:34:14.156 "rw_ios_per_sec": 0, 00:34:14.156 "rw_mbytes_per_sec": 0, 00:34:14.156 "r_mbytes_per_sec": 0, 00:34:14.156 "w_mbytes_per_sec": 0 00:34:14.156 }, 00:34:14.156 "claimed": true, 00:34:14.156 "claim_type": "exclusive_write", 00:34:14.156 "zoned": false, 00:34:14.156 "supported_io_types": { 00:34:14.156 "read": true, 00:34:14.156 "write": true, 00:34:14.156 "unmap": true, 00:34:14.156 "flush": true, 00:34:14.156 "reset": true, 00:34:14.156 "nvme_admin": false, 00:34:14.156 "nvme_io": false, 00:34:14.156 "nvme_io_md": false, 00:34:14.156 "write_zeroes": true, 00:34:14.156 "zcopy": true, 00:34:14.156 "get_zone_info": false, 00:34:14.156 "zone_management": false, 00:34:14.156 "zone_append": false, 00:34:14.156 "compare": false, 00:34:14.156 "compare_and_write": false, 00:34:14.156 "abort": true, 00:34:14.156 "seek_hole": false, 00:34:14.156 "seek_data": false, 00:34:14.156 "copy": true, 00:34:14.156 "nvme_iov_md": false 00:34:14.156 }, 00:34:14.156 "memory_domains": [ 00:34:14.156 { 00:34:14.156 "dma_device_id": "system", 00:34:14.156 "dma_device_type": 1 00:34:14.156 }, 00:34:14.156 { 00:34:14.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:14.156 "dma_device_type": 2 00:34:14.156 } 00:34:14.156 ], 00:34:14.156 "driver_specific": {} 00:34:14.156 }' 00:34:14.156 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:14.413 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:14.413 11:45:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:14.413 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:14.413 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # jq .md_size 00:34:14.413 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:14.413 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:14.670 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:14.670 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:14.670 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:14.670 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:14.670 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:14.671 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:14.671 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:14.671 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:14.928 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:14.928 "name": "BaseBdev2", 00:34:14.928 "aliases": [ 00:34:14.928 "ee1433b2-4c82-4c71-8485-678079dfeedd" 00:34:14.928 ], 00:34:14.928 "product_name": "Malloc disk", 00:34:14.928 "block_size": 512, 00:34:14.928 "num_blocks": 65536, 00:34:14.928 "uuid": "ee1433b2-4c82-4c71-8485-678079dfeedd", 00:34:14.928 "assigned_rate_limits": { 00:34:14.928 "rw_ios_per_sec": 0, 00:34:14.928 "rw_mbytes_per_sec": 0, 00:34:14.928 "r_mbytes_per_sec": 0, 00:34:14.928 "w_mbytes_per_sec": 0 00:34:14.928 }, 00:34:14.928 "claimed": true, 00:34:14.928 "claim_type": "exclusive_write", 00:34:14.928 "zoned": false, 00:34:14.928 "supported_io_types": { 00:34:14.928 "read": true, 00:34:14.928 "write": true, 00:34:14.928 "unmap": true, 00:34:14.928 "flush": true, 00:34:14.928 "reset": true, 00:34:14.928 "nvme_admin": false, 00:34:14.928 "nvme_io": false, 00:34:14.928 "nvme_io_md": false, 00:34:14.928 "write_zeroes": true, 00:34:14.928 "zcopy": true, 00:34:14.928 "get_zone_info": false, 00:34:14.928 "zone_management": false, 00:34:14.928 "zone_append": false, 00:34:14.928 "compare": false, 00:34:14.928 "compare_and_write": false, 00:34:14.928 "abort": true, 00:34:14.928 "seek_hole": false, 00:34:14.928 "seek_data": false, 00:34:14.928 "copy": true, 00:34:14.928 "nvme_iov_md": false 00:34:14.928 }, 00:34:14.928 "memory_domains": [ 00:34:14.928 { 00:34:14.928 "dma_device_id": "system", 00:34:14.928 "dma_device_type": 1 00:34:14.928 }, 00:34:14.928 { 00:34:14.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:14.928 "dma_device_type": 2 00:34:14.928 } 00:34:14.928 ], 00:34:14.928 "driver_specific": {} 00:34:14.928 }' 00:34:14.928 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:14.928 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.186 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:15.186 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.186 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.186 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:34:15.186 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:15.186 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:15.186 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:15.186 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:15.444 11:45:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:15.444 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:15.444 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:15.444 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:15.444 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:15.702 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:15.702 "name": "BaseBdev3", 00:34:15.702 "aliases": [ 00:34:15.702 "b2b380e8-78d3-4a2a-a737-4d5e4dcebb56" 00:34:15.702 ], 00:34:15.702 "product_name": "Malloc disk", 00:34:15.702 "block_size": 512, 00:34:15.702 "num_blocks": 65536, 00:34:15.702 "uuid": "b2b380e8-78d3-4a2a-a737-4d5e4dcebb56", 00:34:15.702 "assigned_rate_limits": { 00:34:15.702 "rw_ios_per_sec": 0, 00:34:15.702 "rw_mbytes_per_sec": 0, 00:34:15.702 "r_mbytes_per_sec": 0, 00:34:15.702 "w_mbytes_per_sec": 0 00:34:15.702 }, 00:34:15.702 "claimed": true, 00:34:15.702 "claim_type": "exclusive_write", 00:34:15.702 "zoned": false, 00:34:15.702 "supported_io_types": { 00:34:15.702 "read": true, 00:34:15.702 "write": true, 00:34:15.702 "unmap": true, 00:34:15.702 "flush": true, 00:34:15.702 "reset": true, 00:34:15.702 "nvme_admin": false, 00:34:15.702 "nvme_io": false, 00:34:15.702 "nvme_io_md": false, 00:34:15.702 "write_zeroes": true, 00:34:15.702 "zcopy": true, 00:34:15.702 "get_zone_info": false, 00:34:15.702 "zone_management": false, 00:34:15.702 "zone_append": false, 00:34:15.702 "compare": false, 00:34:15.702 "compare_and_write": false, 00:34:15.702 "abort": true, 00:34:15.702 "seek_hole": false, 00:34:15.702 "seek_data": false, 00:34:15.702 "copy": true, 00:34:15.702 "nvme_iov_md": false 00:34:15.702 }, 00:34:15.702 "memory_domains": [ 00:34:15.702 { 00:34:15.702 "dma_device_id": "system", 00:34:15.702 "dma_device_type": 1 00:34:15.702 }, 00:34:15.702 { 00:34:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:15.702 "dma_device_type": 2 00:34:15.702 } 00:34:15.702 ], 00:34:15.702 "driver_specific": {} 00:34:15.702 }' 00:34:15.702 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.702 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:15.702 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:15.702 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.702 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:15.961 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:15.961 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:34:15.961 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:15.961 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:15.961 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:15.961 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.219 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:16.219 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:16.219 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:34:16.219 11:45:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:16.477 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:16.477 "name": "BaseBdev4", 00:34:16.477 "aliases": [ 00:34:16.477 "193b1ef1-8a0f-4d39-9794-cf5e0e343fc4" 00:34:16.477 ], 00:34:16.477 "product_name": "Malloc disk", 00:34:16.477 "block_size": 512, 00:34:16.477 "num_blocks": 65536, 00:34:16.477 "uuid": "193b1ef1-8a0f-4d39-9794-cf5e0e343fc4", 00:34:16.477 "assigned_rate_limits": { 00:34:16.477 "rw_ios_per_sec": 0, 00:34:16.477 "rw_mbytes_per_sec": 0, 00:34:16.477 "r_mbytes_per_sec": 0, 00:34:16.477 "w_mbytes_per_sec": 0 00:34:16.477 }, 00:34:16.477 "claimed": true, 00:34:16.477 "claim_type": "exclusive_write", 00:34:16.477 "zoned": false, 00:34:16.477 "supported_io_types": { 00:34:16.477 "read": true, 00:34:16.477 "write": true, 00:34:16.477 "unmap": true, 00:34:16.477 "flush": true, 00:34:16.477 "reset": true, 00:34:16.477 "nvme_admin": false, 00:34:16.477 "nvme_io": false, 00:34:16.477 "nvme_io_md": false, 00:34:16.477 "write_zeroes": true, 00:34:16.477 "zcopy": true, 00:34:16.477 "get_zone_info": false, 00:34:16.477 "zone_management": false, 00:34:16.477 "zone_append": false, 00:34:16.477 "compare": false, 00:34:16.477 "compare_and_write": false, 00:34:16.477 "abort": true, 00:34:16.477 "seek_hole": false, 00:34:16.477 "seek_data": false, 00:34:16.477 "copy": true, 00:34:16.477 "nvme_iov_md": false 00:34:16.477 }, 00:34:16.477 "memory_domains": [ 00:34:16.477 { 00:34:16.477 "dma_device_id": "system", 00:34:16.477 "dma_device_type": 1 00:34:16.477 }, 00:34:16.477 { 00:34:16.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:16.477 "dma_device_type": 2 00:34:16.477 } 00:34:16.477 ], 00:34:16.477 "driver_specific": {} 00:34:16.477 }' 00:34:16.477 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:16.477 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:16.477 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:16.477 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:16.477 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:16.741 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:16.741 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:16.741 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:34:16.741 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:16.741 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.741 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:16.741 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:16.741 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:17.009 [2024-07-13 11:45:51.728858] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:17.274 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:17.275 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.275 11:45:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:17.533 11:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:17.533 "name": "Existed_Raid", 00:34:17.533 "uuid": "4e4fb223-5a0a-4cf1-be7c-7aaf00078936", 00:34:17.533 "strip_size_kb": 64, 00:34:17.533 "state": "online", 00:34:17.533 "raid_level": "raid5f", 00:34:17.533 "superblock": true, 00:34:17.533 "num_base_bdevs": 4, 00:34:17.533 "num_base_bdevs_discovered": 3, 00:34:17.533 "num_base_bdevs_operational": 3, 00:34:17.533 "base_bdevs_list": [ 00:34:17.533 { 00:34:17.533 "name": null, 00:34:17.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:17.533 "is_configured": false, 00:34:17.533 "data_offset": 2048, 00:34:17.533 "data_size": 63488 00:34:17.533 }, 00:34:17.533 { 00:34:17.533 
"name": "BaseBdev2", 00:34:17.533 "uuid": "ee1433b2-4c82-4c71-8485-678079dfeedd", 00:34:17.533 "is_configured": true, 00:34:17.533 "data_offset": 2048, 00:34:17.533 "data_size": 63488 00:34:17.533 }, 00:34:17.533 { 00:34:17.533 "name": "BaseBdev3", 00:34:17.533 "uuid": "b2b380e8-78d3-4a2a-a737-4d5e4dcebb56", 00:34:17.533 "is_configured": true, 00:34:17.533 "data_offset": 2048, 00:34:17.533 "data_size": 63488 00:34:17.533 }, 00:34:17.533 { 00:34:17.533 "name": "BaseBdev4", 00:34:17.533 "uuid": "193b1ef1-8a0f-4d39-9794-cf5e0e343fc4", 00:34:17.533 "is_configured": true, 00:34:17.533 "data_offset": 2048, 00:34:17.533 "data_size": 63488 00:34:17.533 } 00:34:17.533 ] 00:34:17.533 }' 00:34:17.533 11:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:17.533 11:45:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:18.100 11:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:34:18.100 11:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:18.100 11:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.100 11:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:18.358 11:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:18.358 11:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:18.358 11:45:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:34:18.615 [2024-07-13 11:45:53.129448] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:18.615 [2024-07-13 11:45:53.129644] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:18.615 [2024-07-13 11:45:53.197616] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:18.615 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:18.615 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:18.615 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.615 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:18.873 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:18.873 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:18.873 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:34:19.131 [2024-07-13 11:45:53.641765] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:19.131 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:19.131 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:19.131 11:45:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.131 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:34:19.388 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:34:19.388 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:19.388 11:45:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:34:19.646 [2024-07-13 11:45:54.177608] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:34:19.646 [2024-07-13 11:45:54.177670] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:34:19.646 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:34:19.646 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:34:19.646 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.646 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:34:19.905 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:34:19.905 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:34:19.905 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:34:19.905 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:34:19.905 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:19.905 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:34:20.163 BaseBdev2 00:34:20.163 11:45:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:34:20.163 11:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:34:20.163 11:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:20.163 11:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:20.163 11:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:20.163 11:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:20.163 11:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:20.163 11:45:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:20.422 [ 00:34:20.422 { 00:34:20.422 "name": "BaseBdev2", 00:34:20.422 "aliases": [ 00:34:20.422 "d4451abf-1520-4ec6-aa5a-0c13fd07577d" 00:34:20.422 ], 00:34:20.422 
"product_name": "Malloc disk", 00:34:20.422 "block_size": 512, 00:34:20.422 "num_blocks": 65536, 00:34:20.422 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:20.422 "assigned_rate_limits": { 00:34:20.422 "rw_ios_per_sec": 0, 00:34:20.422 "rw_mbytes_per_sec": 0, 00:34:20.422 "r_mbytes_per_sec": 0, 00:34:20.422 "w_mbytes_per_sec": 0 00:34:20.422 }, 00:34:20.422 "claimed": false, 00:34:20.422 "zoned": false, 00:34:20.422 "supported_io_types": { 00:34:20.422 "read": true, 00:34:20.422 "write": true, 00:34:20.422 "unmap": true, 00:34:20.422 "flush": true, 00:34:20.422 "reset": true, 00:34:20.422 "nvme_admin": false, 00:34:20.422 "nvme_io": false, 00:34:20.422 "nvme_io_md": false, 00:34:20.422 "write_zeroes": true, 00:34:20.422 "zcopy": true, 00:34:20.422 "get_zone_info": false, 00:34:20.422 "zone_management": false, 00:34:20.422 "zone_append": false, 00:34:20.422 "compare": false, 00:34:20.422 "compare_and_write": false, 00:34:20.422 "abort": true, 00:34:20.422 "seek_hole": false, 00:34:20.422 "seek_data": false, 00:34:20.422 "copy": true, 00:34:20.422 "nvme_iov_md": false 00:34:20.422 }, 00:34:20.422 "memory_domains": [ 00:34:20.422 { 00:34:20.422 "dma_device_id": "system", 00:34:20.422 "dma_device_type": 1 00:34:20.422 }, 00:34:20.422 { 00:34:20.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:20.422 "dma_device_type": 2 00:34:20.422 } 00:34:20.422 ], 00:34:20.422 "driver_specific": {} 00:34:20.422 } 00:34:20.422 ] 00:34:20.422 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:20.422 11:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:20.422 11:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:20.422 11:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:34:20.682 BaseBdev3 00:34:20.682 11:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:34:20.682 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:34:20.682 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:20.682 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:20.682 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:20.682 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:20.682 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:20.940 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:20.940 [ 00:34:20.940 { 00:34:20.940 "name": "BaseBdev3", 00:34:20.940 "aliases": [ 00:34:20.940 "2c240978-a5bd-4790-82b2-3c82d53d3dc2" 00:34:20.940 ], 00:34:20.940 "product_name": "Malloc disk", 00:34:20.941 "block_size": 512, 00:34:20.941 "num_blocks": 65536, 00:34:20.941 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:20.941 "assigned_rate_limits": { 00:34:20.941 "rw_ios_per_sec": 0, 00:34:20.941 "rw_mbytes_per_sec": 0, 00:34:20.941 
"r_mbytes_per_sec": 0, 00:34:20.941 "w_mbytes_per_sec": 0 00:34:20.941 }, 00:34:20.941 "claimed": false, 00:34:20.941 "zoned": false, 00:34:20.941 "supported_io_types": { 00:34:20.941 "read": true, 00:34:20.941 "write": true, 00:34:20.941 "unmap": true, 00:34:20.941 "flush": true, 00:34:20.941 "reset": true, 00:34:20.941 "nvme_admin": false, 00:34:20.941 "nvme_io": false, 00:34:20.941 "nvme_io_md": false, 00:34:20.941 "write_zeroes": true, 00:34:20.941 "zcopy": true, 00:34:20.941 "get_zone_info": false, 00:34:20.941 "zone_management": false, 00:34:20.941 "zone_append": false, 00:34:20.941 "compare": false, 00:34:20.941 "compare_and_write": false, 00:34:20.941 "abort": true, 00:34:20.941 "seek_hole": false, 00:34:20.941 "seek_data": false, 00:34:20.941 "copy": true, 00:34:20.941 "nvme_iov_md": false 00:34:20.941 }, 00:34:20.941 "memory_domains": [ 00:34:20.941 { 00:34:20.941 "dma_device_id": "system", 00:34:20.941 "dma_device_type": 1 00:34:20.941 }, 00:34:20.941 { 00:34:20.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:20.941 "dma_device_type": 2 00:34:20.941 } 00:34:20.941 ], 00:34:20.941 "driver_specific": {} 00:34:20.941 } 00:34:20.941 ] 00:34:21.199 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:21.199 11:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:21.200 11:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:21.200 11:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:34:21.200 BaseBdev4 00:34:21.200 11:45:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:34:21.200 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:34:21.200 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:21.200 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:21.200 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:21.200 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:21.200 11:45:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:21.458 11:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:21.716 [ 00:34:21.716 { 00:34:21.716 "name": "BaseBdev4", 00:34:21.716 "aliases": [ 00:34:21.716 "5af1ca4f-5d68-420e-8029-9cea2a70abd9" 00:34:21.716 ], 00:34:21.716 "product_name": "Malloc disk", 00:34:21.716 "block_size": 512, 00:34:21.716 "num_blocks": 65536, 00:34:21.716 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:21.716 "assigned_rate_limits": { 00:34:21.716 "rw_ios_per_sec": 0, 00:34:21.716 "rw_mbytes_per_sec": 0, 00:34:21.716 "r_mbytes_per_sec": 0, 00:34:21.716 "w_mbytes_per_sec": 0 00:34:21.716 }, 00:34:21.716 "claimed": false, 00:34:21.716 "zoned": false, 00:34:21.716 "supported_io_types": { 00:34:21.716 "read": true, 00:34:21.716 "write": true, 00:34:21.716 "unmap": true, 00:34:21.716 "flush": true, 
00:34:21.716 "reset": true, 00:34:21.716 "nvme_admin": false, 00:34:21.716 "nvme_io": false, 00:34:21.716 "nvme_io_md": false, 00:34:21.716 "write_zeroes": true, 00:34:21.716 "zcopy": true, 00:34:21.716 "get_zone_info": false, 00:34:21.716 "zone_management": false, 00:34:21.716 "zone_append": false, 00:34:21.716 "compare": false, 00:34:21.716 "compare_and_write": false, 00:34:21.716 "abort": true, 00:34:21.716 "seek_hole": false, 00:34:21.716 "seek_data": false, 00:34:21.716 "copy": true, 00:34:21.716 "nvme_iov_md": false 00:34:21.716 }, 00:34:21.716 "memory_domains": [ 00:34:21.716 { 00:34:21.716 "dma_device_id": "system", 00:34:21.716 "dma_device_type": 1 00:34:21.716 }, 00:34:21.716 { 00:34:21.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:21.716 "dma_device_type": 2 00:34:21.716 } 00:34:21.716 ], 00:34:21.716 "driver_specific": {} 00:34:21.716 } 00:34:21.716 ] 00:34:21.716 11:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:21.716 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:34:21.716 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:34:21.716 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:34:21.974 [2024-07-13 11:45:56.556052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:21.974 [2024-07-13 11:45:56.556119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:21.974 [2024-07-13 11:45:56.556159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:21.974 [2024-07-13 11:45:56.557780] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:21.974 [2024-07-13 11:45:56.557842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:21.974 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.974 11:45:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:22.232 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:22.232 "name": "Existed_Raid", 00:34:22.232 "uuid": "615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:22.232 "strip_size_kb": 64, 00:34:22.232 "state": "configuring", 00:34:22.232 "raid_level": "raid5f", 00:34:22.232 "superblock": true, 00:34:22.232 "num_base_bdevs": 4, 00:34:22.232 "num_base_bdevs_discovered": 3, 00:34:22.232 "num_base_bdevs_operational": 4, 00:34:22.232 "base_bdevs_list": [ 00:34:22.232 { 00:34:22.232 "name": "BaseBdev1", 00:34:22.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.232 "is_configured": false, 00:34:22.232 "data_offset": 0, 00:34:22.232 "data_size": 0 00:34:22.232 }, 00:34:22.232 { 00:34:22.232 "name": "BaseBdev2", 00:34:22.232 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:22.232 "is_configured": true, 00:34:22.232 "data_offset": 2048, 00:34:22.232 "data_size": 63488 00:34:22.232 }, 00:34:22.232 { 00:34:22.232 "name": "BaseBdev3", 00:34:22.232 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:22.232 "is_configured": true, 00:34:22.232 "data_offset": 2048, 00:34:22.232 "data_size": 63488 00:34:22.232 }, 00:34:22.232 { 00:34:22.232 "name": "BaseBdev4", 00:34:22.232 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:22.232 "is_configured": true, 00:34:22.232 "data_offset": 2048, 00:34:22.232 "data_size": 63488 00:34:22.232 } 00:34:22.232 ] 00:34:22.232 }' 00:34:22.232 11:45:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:22.232 11:45:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:22.798 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:34:23.056 [2024-07-13 11:45:57.644236] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.056 11:45:57 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:23.314 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:23.314 "name": "Existed_Raid", 00:34:23.314 "uuid": "615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:23.314 "strip_size_kb": 64, 00:34:23.314 "state": "configuring", 00:34:23.314 "raid_level": "raid5f", 00:34:23.314 "superblock": true, 00:34:23.314 "num_base_bdevs": 4, 00:34:23.314 "num_base_bdevs_discovered": 2, 00:34:23.314 "num_base_bdevs_operational": 4, 00:34:23.314 "base_bdevs_list": [ 00:34:23.314 { 00:34:23.314 "name": "BaseBdev1", 00:34:23.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.314 "is_configured": false, 00:34:23.314 "data_offset": 0, 00:34:23.314 "data_size": 0 00:34:23.314 }, 00:34:23.314 { 00:34:23.314 "name": null, 00:34:23.314 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:23.314 "is_configured": false, 00:34:23.314 "data_offset": 2048, 00:34:23.314 "data_size": 63488 00:34:23.314 }, 00:34:23.314 { 00:34:23.314 "name": "BaseBdev3", 00:34:23.314 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:23.314 "is_configured": true, 00:34:23.314 "data_offset": 2048, 00:34:23.314 "data_size": 63488 00:34:23.314 }, 00:34:23.314 { 00:34:23.314 "name": "BaseBdev4", 00:34:23.314 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:23.314 "is_configured": true, 00:34:23.314 "data_offset": 2048, 00:34:23.314 "data_size": 63488 00:34:23.314 } 00:34:23.314 ] 00:34:23.315 }' 00:34:23.315 11:45:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:23.315 11:45:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:23.882 11:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.882 11:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:24.141 11:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:34:24.141 11:45:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:34:24.400 [2024-07-13 11:45:58.997494] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:24.401 BaseBdev1 00:34:24.401 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:34:24.401 11:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:34:24.401 11:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:24.401 11:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:24.401 11:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:24.401 11:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:24.401 11:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:24.660 [ 00:34:24.660 { 00:34:24.660 "name": "BaseBdev1", 00:34:24.660 "aliases": [ 00:34:24.660 "93e52bd5-4f68-4de3-b67b-a068f57e009a" 00:34:24.660 ], 00:34:24.660 "product_name": "Malloc disk", 00:34:24.660 "block_size": 512, 00:34:24.660 "num_blocks": 65536, 00:34:24.660 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:24.660 "assigned_rate_limits": { 00:34:24.660 "rw_ios_per_sec": 0, 00:34:24.660 "rw_mbytes_per_sec": 0, 00:34:24.660 "r_mbytes_per_sec": 0, 00:34:24.660 "w_mbytes_per_sec": 0 00:34:24.660 }, 00:34:24.660 "claimed": true, 00:34:24.660 "claim_type": "exclusive_write", 00:34:24.660 "zoned": false, 00:34:24.660 "supported_io_types": { 00:34:24.660 "read": true, 00:34:24.660 "write": true, 00:34:24.660 "unmap": true, 00:34:24.660 "flush": true, 00:34:24.660 "reset": true, 00:34:24.660 "nvme_admin": false, 00:34:24.660 "nvme_io": false, 00:34:24.660 "nvme_io_md": false, 00:34:24.660 "write_zeroes": true, 00:34:24.660 "zcopy": true, 00:34:24.660 "get_zone_info": false, 00:34:24.660 "zone_management": false, 00:34:24.660 "zone_append": false, 00:34:24.660 "compare": false, 00:34:24.660 "compare_and_write": false, 00:34:24.660 "abort": true, 00:34:24.660 "seek_hole": false, 00:34:24.660 "seek_data": false, 00:34:24.660 "copy": true, 00:34:24.660 "nvme_iov_md": false 00:34:24.660 }, 00:34:24.660 "memory_domains": [ 00:34:24.660 { 00:34:24.660 "dma_device_id": "system", 00:34:24.660 "dma_device_type": 1 00:34:24.660 }, 00:34:24.660 { 00:34:24.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:24.660 "dma_device_type": 2 00:34:24.660 } 00:34:24.660 ], 00:34:24.660 "driver_specific": {} 00:34:24.660 } 00:34:24.660 ] 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.660 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:24.919 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:24.919 "name": "Existed_Raid", 00:34:24.919 "uuid": 
"615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:24.919 "strip_size_kb": 64, 00:34:24.919 "state": "configuring", 00:34:24.919 "raid_level": "raid5f", 00:34:24.919 "superblock": true, 00:34:24.919 "num_base_bdevs": 4, 00:34:24.919 "num_base_bdevs_discovered": 3, 00:34:24.919 "num_base_bdevs_operational": 4, 00:34:24.919 "base_bdevs_list": [ 00:34:24.919 { 00:34:24.919 "name": "BaseBdev1", 00:34:24.919 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:24.919 "is_configured": true, 00:34:24.919 "data_offset": 2048, 00:34:24.919 "data_size": 63488 00:34:24.919 }, 00:34:24.919 { 00:34:24.919 "name": null, 00:34:24.919 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:24.919 "is_configured": false, 00:34:24.919 "data_offset": 2048, 00:34:24.919 "data_size": 63488 00:34:24.919 }, 00:34:24.919 { 00:34:24.919 "name": "BaseBdev3", 00:34:24.919 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:24.919 "is_configured": true, 00:34:24.919 "data_offset": 2048, 00:34:24.919 "data_size": 63488 00:34:24.919 }, 00:34:24.919 { 00:34:24.919 "name": "BaseBdev4", 00:34:24.919 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:24.919 "is_configured": true, 00:34:24.919 "data_offset": 2048, 00:34:24.919 "data_size": 63488 00:34:24.919 } 00:34:24.919 ] 00:34:24.919 }' 00:34:24.919 11:45:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:24.919 11:45:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:25.855 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.855 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:25.855 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:34:25.855 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:34:26.114 [2024-07-13 11:46:00.772425] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.114 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:26.373 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:26.373 "name": "Existed_Raid", 00:34:26.373 "uuid": "615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:26.373 "strip_size_kb": 64, 00:34:26.373 "state": "configuring", 00:34:26.373 "raid_level": "raid5f", 00:34:26.373 "superblock": true, 00:34:26.373 "num_base_bdevs": 4, 00:34:26.373 "num_base_bdevs_discovered": 2, 00:34:26.373 "num_base_bdevs_operational": 4, 00:34:26.373 "base_bdevs_list": [ 00:34:26.373 { 00:34:26.373 "name": "BaseBdev1", 00:34:26.373 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:26.373 "is_configured": true, 00:34:26.373 "data_offset": 2048, 00:34:26.373 "data_size": 63488 00:34:26.373 }, 00:34:26.373 { 00:34:26.373 "name": null, 00:34:26.373 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:26.373 "is_configured": false, 00:34:26.373 "data_offset": 2048, 00:34:26.373 "data_size": 63488 00:34:26.373 }, 00:34:26.373 { 00:34:26.373 "name": null, 00:34:26.373 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:26.373 "is_configured": false, 00:34:26.373 "data_offset": 2048, 00:34:26.373 "data_size": 63488 00:34:26.373 }, 00:34:26.373 { 00:34:26.373 "name": "BaseBdev4", 00:34:26.373 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:26.373 "is_configured": true, 00:34:26.373 "data_offset": 2048, 00:34:26.373 "data_size": 63488 00:34:26.373 } 00:34:26.373 ] 00:34:26.373 }' 00:34:26.373 11:46:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:26.373 11:46:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:26.940 11:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.940 11:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:27.199 11:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:34:27.199 11:46:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:27.456 [2024-07-13 11:46:02.089060] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:27.456 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:27.457 11:46:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.457 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:27.715 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:27.715 "name": "Existed_Raid", 00:34:27.715 "uuid": "615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:27.715 "strip_size_kb": 64, 00:34:27.715 "state": "configuring", 00:34:27.715 "raid_level": "raid5f", 00:34:27.715 "superblock": true, 00:34:27.715 "num_base_bdevs": 4, 00:34:27.715 "num_base_bdevs_discovered": 3, 00:34:27.715 "num_base_bdevs_operational": 4, 00:34:27.715 "base_bdevs_list": [ 00:34:27.715 { 00:34:27.715 "name": "BaseBdev1", 00:34:27.715 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:27.715 "is_configured": true, 00:34:27.715 "data_offset": 2048, 00:34:27.715 "data_size": 63488 00:34:27.715 }, 00:34:27.715 { 00:34:27.715 "name": null, 00:34:27.715 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:27.715 "is_configured": false, 00:34:27.715 "data_offset": 2048, 00:34:27.715 "data_size": 63488 00:34:27.715 }, 00:34:27.715 { 00:34:27.715 "name": "BaseBdev3", 00:34:27.715 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:27.715 "is_configured": true, 00:34:27.715 "data_offset": 2048, 00:34:27.715 "data_size": 63488 00:34:27.715 }, 00:34:27.715 { 00:34:27.715 "name": "BaseBdev4", 00:34:27.715 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:27.715 "is_configured": true, 00:34:27.715 "data_offset": 2048, 00:34:27.715 "data_size": 63488 00:34:27.715 } 00:34:27.715 ] 00:34:27.715 }' 00:34:27.715 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:27.715 11:46:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:28.282 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.282 11:46:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:28.540 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:34:28.540 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:34:28.540 [2024-07-13 11:46:03.245282] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 
00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:28.798 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:28.799 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.799 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:28.799 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:28.799 "name": "Existed_Raid", 00:34:28.799 "uuid": "615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:28.799 "strip_size_kb": 64, 00:34:28.799 "state": "configuring", 00:34:28.799 "raid_level": "raid5f", 00:34:28.799 "superblock": true, 00:34:28.799 "num_base_bdevs": 4, 00:34:28.799 "num_base_bdevs_discovered": 2, 00:34:28.799 "num_base_bdevs_operational": 4, 00:34:28.799 "base_bdevs_list": [ 00:34:28.799 { 00:34:28.799 "name": null, 00:34:28.799 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:28.799 "is_configured": false, 00:34:28.799 "data_offset": 2048, 00:34:28.799 "data_size": 63488 00:34:28.799 }, 00:34:28.799 { 00:34:28.799 "name": null, 00:34:28.799 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:28.799 "is_configured": false, 00:34:28.799 "data_offset": 2048, 00:34:28.799 "data_size": 63488 00:34:28.799 }, 00:34:28.799 { 00:34:28.799 "name": "BaseBdev3", 00:34:28.799 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:28.799 "is_configured": true, 00:34:28.799 "data_offset": 2048, 00:34:28.799 "data_size": 63488 00:34:28.799 }, 00:34:28.799 { 00:34:28.799 "name": "BaseBdev4", 00:34:28.799 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:28.799 "is_configured": true, 00:34:28.799 "data_offset": 2048, 00:34:28.799 "data_size": 63488 00:34:28.799 } 00:34:28.799 ] 00:34:28.799 }' 00:34:28.799 11:46:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:28.799 11:46:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:29.732 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.732 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:29.732 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:34:29.732 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:29.990 [2024-07-13 11:46:04.711393] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.990 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:30.248 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:30.248 "name": "Existed_Raid", 00:34:30.248 "uuid": "615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:30.248 "strip_size_kb": 64, 00:34:30.248 "state": "configuring", 00:34:30.248 "raid_level": "raid5f", 00:34:30.248 "superblock": true, 00:34:30.248 "num_base_bdevs": 4, 00:34:30.248 "num_base_bdevs_discovered": 3, 00:34:30.248 "num_base_bdevs_operational": 4, 00:34:30.248 "base_bdevs_list": [ 00:34:30.248 { 00:34:30.248 "name": null, 00:34:30.248 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:30.248 "is_configured": false, 00:34:30.248 "data_offset": 2048, 00:34:30.248 "data_size": 63488 00:34:30.248 }, 00:34:30.248 { 00:34:30.248 "name": "BaseBdev2", 00:34:30.248 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:30.248 "is_configured": true, 00:34:30.248 "data_offset": 2048, 00:34:30.248 "data_size": 63488 00:34:30.248 }, 00:34:30.248 { 00:34:30.248 "name": "BaseBdev3", 00:34:30.248 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:30.248 "is_configured": true, 00:34:30.248 "data_offset": 2048, 00:34:30.248 "data_size": 63488 00:34:30.248 }, 00:34:30.248 { 00:34:30.248 "name": "BaseBdev4", 00:34:30.249 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:30.249 "is_configured": true, 00:34:30.249 "data_offset": 2048, 00:34:30.249 "data_size": 63488 00:34:30.249 } 00:34:30.249 ] 00:34:30.249 }' 00:34:30.249 11:46:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:30.249 11:46:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:31.181 11:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:31.181 11:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.181 11:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:34:31.181 
11:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:31.181 11:46:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.440 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 93e52bd5-4f68-4de3-b67b-a068f57e009a 00:34:31.698 [2024-07-13 11:46:06.401511] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:31.698 [2024-07-13 11:46:06.401730] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:34:31.698 [2024-07-13 11:46:06.401746] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:31.698 [2024-07-13 11:46:06.401859] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:31.698 NewBaseBdev 00:34:31.699 [2024-07-13 11:46:06.407034] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:34:31.699 [2024-07-13 11:46:06.407059] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009680 00:34:31.699 [2024-07-13 11:46:06.407199] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:31.699 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:34:31.699 11:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:34:31.699 11:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:34:31.699 11:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:34:31.699 11:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:34:31.699 11:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:34:31.699 11:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:31.957 11:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:32.215 [ 00:34:32.215 { 00:34:32.215 "name": "NewBaseBdev", 00:34:32.215 "aliases": [ 00:34:32.215 "93e52bd5-4f68-4de3-b67b-a068f57e009a" 00:34:32.215 ], 00:34:32.215 "product_name": "Malloc disk", 00:34:32.215 "block_size": 512, 00:34:32.215 "num_blocks": 65536, 00:34:32.215 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:32.215 "assigned_rate_limits": { 00:34:32.215 "rw_ios_per_sec": 0, 00:34:32.215 "rw_mbytes_per_sec": 0, 00:34:32.215 "r_mbytes_per_sec": 0, 00:34:32.215 "w_mbytes_per_sec": 0 00:34:32.215 }, 00:34:32.215 "claimed": true, 00:34:32.215 "claim_type": "exclusive_write", 00:34:32.215 "zoned": false, 00:34:32.215 "supported_io_types": { 00:34:32.215 "read": true, 00:34:32.215 "write": true, 00:34:32.215 "unmap": true, 00:34:32.215 "flush": true, 00:34:32.215 "reset": true, 00:34:32.215 "nvme_admin": false, 00:34:32.215 "nvme_io": false, 00:34:32.215 "nvme_io_md": false, 00:34:32.215 "write_zeroes": true, 00:34:32.215 "zcopy": true, 
00:34:32.215 "get_zone_info": false, 00:34:32.215 "zone_management": false, 00:34:32.215 "zone_append": false, 00:34:32.215 "compare": false, 00:34:32.215 "compare_and_write": false, 00:34:32.215 "abort": true, 00:34:32.215 "seek_hole": false, 00:34:32.215 "seek_data": false, 00:34:32.215 "copy": true, 00:34:32.215 "nvme_iov_md": false 00:34:32.215 }, 00:34:32.215 "memory_domains": [ 00:34:32.215 { 00:34:32.215 "dma_device_id": "system", 00:34:32.215 "dma_device_type": 1 00:34:32.215 }, 00:34:32.215 { 00:34:32.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:32.215 "dma_device_type": 2 00:34:32.215 } 00:34:32.215 ], 00:34:32.215 "driver_specific": {} 00:34:32.215 } 00:34:32.215 ] 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.215 11:46:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:32.473 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:32.473 "name": "Existed_Raid", 00:34:32.473 "uuid": "615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:32.473 "strip_size_kb": 64, 00:34:32.473 "state": "online", 00:34:32.473 "raid_level": "raid5f", 00:34:32.473 "superblock": true, 00:34:32.473 "num_base_bdevs": 4, 00:34:32.473 "num_base_bdevs_discovered": 4, 00:34:32.473 "num_base_bdevs_operational": 4, 00:34:32.473 "base_bdevs_list": [ 00:34:32.473 { 00:34:32.473 "name": "NewBaseBdev", 00:34:32.473 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:32.473 "is_configured": true, 00:34:32.473 "data_offset": 2048, 00:34:32.473 "data_size": 63488 00:34:32.473 }, 00:34:32.473 { 00:34:32.473 "name": "BaseBdev2", 00:34:32.473 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:32.473 "is_configured": true, 00:34:32.473 "data_offset": 2048, 00:34:32.473 "data_size": 63488 00:34:32.473 }, 00:34:32.473 { 00:34:32.473 "name": "BaseBdev3", 00:34:32.473 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:32.473 "is_configured": true, 00:34:32.473 "data_offset": 2048, 00:34:32.473 "data_size": 63488 00:34:32.473 }, 00:34:32.473 { 00:34:32.473 "name": 
"BaseBdev4", 00:34:32.473 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:32.473 "is_configured": true, 00:34:32.473 "data_offset": 2048, 00:34:32.473 "data_size": 63488 00:34:32.473 } 00:34:32.473 ] 00:34:32.473 }' 00:34:32.473 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:32.473 11:46:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.039 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:34:33.039 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:33.039 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:33.039 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:33.039 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:33.039 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:34:33.039 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:33.039 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:33.298 [2024-07-13 11:46:07.957617] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:33.298 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:33.298 "name": "Existed_Raid", 00:34:33.298 "aliases": [ 00:34:33.298 "615bbb5c-124b-4d45-9377-cab5b501229f" 00:34:33.298 ], 00:34:33.298 "product_name": "Raid Volume", 00:34:33.298 "block_size": 512, 00:34:33.298 "num_blocks": 190464, 00:34:33.298 "uuid": "615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:33.298 "assigned_rate_limits": { 00:34:33.298 "rw_ios_per_sec": 0, 00:34:33.298 "rw_mbytes_per_sec": 0, 00:34:33.298 "r_mbytes_per_sec": 0, 00:34:33.298 "w_mbytes_per_sec": 0 00:34:33.298 }, 00:34:33.298 "claimed": false, 00:34:33.298 "zoned": false, 00:34:33.298 "supported_io_types": { 00:34:33.298 "read": true, 00:34:33.298 "write": true, 00:34:33.298 "unmap": false, 00:34:33.298 "flush": false, 00:34:33.298 "reset": true, 00:34:33.298 "nvme_admin": false, 00:34:33.298 "nvme_io": false, 00:34:33.298 "nvme_io_md": false, 00:34:33.298 "write_zeroes": true, 00:34:33.298 "zcopy": false, 00:34:33.298 "get_zone_info": false, 00:34:33.298 "zone_management": false, 00:34:33.298 "zone_append": false, 00:34:33.298 "compare": false, 00:34:33.298 "compare_and_write": false, 00:34:33.298 "abort": false, 00:34:33.298 "seek_hole": false, 00:34:33.298 "seek_data": false, 00:34:33.298 "copy": false, 00:34:33.298 "nvme_iov_md": false 00:34:33.298 }, 00:34:33.298 "driver_specific": { 00:34:33.298 "raid": { 00:34:33.298 "uuid": "615bbb5c-124b-4d45-9377-cab5b501229f", 00:34:33.298 "strip_size_kb": 64, 00:34:33.298 "state": "online", 00:34:33.298 "raid_level": "raid5f", 00:34:33.298 "superblock": true, 00:34:33.298 "num_base_bdevs": 4, 00:34:33.298 "num_base_bdevs_discovered": 4, 00:34:33.298 "num_base_bdevs_operational": 4, 00:34:33.298 "base_bdevs_list": [ 00:34:33.298 { 00:34:33.298 "name": "NewBaseBdev", 00:34:33.298 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:33.298 "is_configured": true, 00:34:33.298 "data_offset": 2048, 00:34:33.298 "data_size": 63488 
00:34:33.298 }, 00:34:33.298 { 00:34:33.298 "name": "BaseBdev2", 00:34:33.298 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:33.298 "is_configured": true, 00:34:33.298 "data_offset": 2048, 00:34:33.298 "data_size": 63488 00:34:33.298 }, 00:34:33.298 { 00:34:33.298 "name": "BaseBdev3", 00:34:33.298 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:33.298 "is_configured": true, 00:34:33.298 "data_offset": 2048, 00:34:33.298 "data_size": 63488 00:34:33.298 }, 00:34:33.298 { 00:34:33.298 "name": "BaseBdev4", 00:34:33.298 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:33.298 "is_configured": true, 00:34:33.298 "data_offset": 2048, 00:34:33.298 "data_size": 63488 00:34:33.298 } 00:34:33.298 ] 00:34:33.298 } 00:34:33.298 } 00:34:33.298 }' 00:34:33.298 11:46:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:33.298 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:34:33.298 BaseBdev2 00:34:33.298 BaseBdev3 00:34:33.298 BaseBdev4' 00:34:33.298 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:33.298 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:34:33.298 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:33.557 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:33.557 "name": "NewBaseBdev", 00:34:33.557 "aliases": [ 00:34:33.557 "93e52bd5-4f68-4de3-b67b-a068f57e009a" 00:34:33.557 ], 00:34:33.557 "product_name": "Malloc disk", 00:34:33.557 "block_size": 512, 00:34:33.557 "num_blocks": 65536, 00:34:33.557 "uuid": "93e52bd5-4f68-4de3-b67b-a068f57e009a", 00:34:33.557 "assigned_rate_limits": { 00:34:33.557 "rw_ios_per_sec": 0, 00:34:33.557 "rw_mbytes_per_sec": 0, 00:34:33.557 "r_mbytes_per_sec": 0, 00:34:33.557 "w_mbytes_per_sec": 0 00:34:33.557 }, 00:34:33.557 "claimed": true, 00:34:33.557 "claim_type": "exclusive_write", 00:34:33.557 "zoned": false, 00:34:33.557 "supported_io_types": { 00:34:33.557 "read": true, 00:34:33.557 "write": true, 00:34:33.557 "unmap": true, 00:34:33.557 "flush": true, 00:34:33.557 "reset": true, 00:34:33.557 "nvme_admin": false, 00:34:33.557 "nvme_io": false, 00:34:33.557 "nvme_io_md": false, 00:34:33.557 "write_zeroes": true, 00:34:33.557 "zcopy": true, 00:34:33.557 "get_zone_info": false, 00:34:33.557 "zone_management": false, 00:34:33.557 "zone_append": false, 00:34:33.557 "compare": false, 00:34:33.557 "compare_and_write": false, 00:34:33.557 "abort": true, 00:34:33.557 "seek_hole": false, 00:34:33.557 "seek_data": false, 00:34:33.557 "copy": true, 00:34:33.557 "nvme_iov_md": false 00:34:33.557 }, 00:34:33.557 "memory_domains": [ 00:34:33.557 { 00:34:33.557 "dma_device_id": "system", 00:34:33.557 "dma_device_type": 1 00:34:33.557 }, 00:34:33.557 { 00:34:33.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:33.557 "dma_device_type": 2 00:34:33.557 } 00:34:33.557 ], 00:34:33.557 "driver_specific": {} 00:34:33.557 }' 00:34:33.557 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:33.557 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:33.815 11:46:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:33.815 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:33.815 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:33.815 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:33.815 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:33.815 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:33.815 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:33.815 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:33.815 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:34.074 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:34.074 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:34.074 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:34.074 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:34.074 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:34.074 "name": "BaseBdev2", 00:34:34.074 "aliases": [ 00:34:34.074 "d4451abf-1520-4ec6-aa5a-0c13fd07577d" 00:34:34.074 ], 00:34:34.074 "product_name": "Malloc disk", 00:34:34.074 "block_size": 512, 00:34:34.074 "num_blocks": 65536, 00:34:34.074 "uuid": "d4451abf-1520-4ec6-aa5a-0c13fd07577d", 00:34:34.074 "assigned_rate_limits": { 00:34:34.074 "rw_ios_per_sec": 0, 00:34:34.074 "rw_mbytes_per_sec": 0, 00:34:34.074 "r_mbytes_per_sec": 0, 00:34:34.074 "w_mbytes_per_sec": 0 00:34:34.074 }, 00:34:34.074 "claimed": true, 00:34:34.074 "claim_type": "exclusive_write", 00:34:34.075 "zoned": false, 00:34:34.075 "supported_io_types": { 00:34:34.075 "read": true, 00:34:34.075 "write": true, 00:34:34.075 "unmap": true, 00:34:34.075 "flush": true, 00:34:34.075 "reset": true, 00:34:34.075 "nvme_admin": false, 00:34:34.075 "nvme_io": false, 00:34:34.075 "nvme_io_md": false, 00:34:34.075 "write_zeroes": true, 00:34:34.075 "zcopy": true, 00:34:34.075 "get_zone_info": false, 00:34:34.075 "zone_management": false, 00:34:34.075 "zone_append": false, 00:34:34.075 "compare": false, 00:34:34.075 "compare_and_write": false, 00:34:34.075 "abort": true, 00:34:34.075 "seek_hole": false, 00:34:34.075 "seek_data": false, 00:34:34.075 "copy": true, 00:34:34.075 "nvme_iov_md": false 00:34:34.075 }, 00:34:34.075 "memory_domains": [ 00:34:34.075 { 00:34:34.075 "dma_device_id": "system", 00:34:34.075 "dma_device_type": 1 00:34:34.075 }, 00:34:34.075 { 00:34:34.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:34.075 "dma_device_type": 2 00:34:34.075 } 00:34:34.075 ], 00:34:34.075 "driver_specific": {} 00:34:34.075 }' 00:34:34.075 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:34.333 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:34.333 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:34.333 11:46:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:34.333 11:46:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:34.333 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:34.333 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:34.333 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:34.592 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:34.592 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:34.592 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:34.592 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:34.592 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:34.592 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:34:34.592 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:34.851 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:34.851 "name": "BaseBdev3", 00:34:34.851 "aliases": [ 00:34:34.851 "2c240978-a5bd-4790-82b2-3c82d53d3dc2" 00:34:34.851 ], 00:34:34.851 "product_name": "Malloc disk", 00:34:34.851 "block_size": 512, 00:34:34.851 "num_blocks": 65536, 00:34:34.851 "uuid": "2c240978-a5bd-4790-82b2-3c82d53d3dc2", 00:34:34.851 "assigned_rate_limits": { 00:34:34.851 "rw_ios_per_sec": 0, 00:34:34.851 "rw_mbytes_per_sec": 0, 00:34:34.851 "r_mbytes_per_sec": 0, 00:34:34.851 "w_mbytes_per_sec": 0 00:34:34.851 }, 00:34:34.851 "claimed": true, 00:34:34.851 "claim_type": "exclusive_write", 00:34:34.851 "zoned": false, 00:34:34.851 "supported_io_types": { 00:34:34.851 "read": true, 00:34:34.851 "write": true, 00:34:34.851 "unmap": true, 00:34:34.851 "flush": true, 00:34:34.851 "reset": true, 00:34:34.851 "nvme_admin": false, 00:34:34.851 "nvme_io": false, 00:34:34.851 "nvme_io_md": false, 00:34:34.851 "write_zeroes": true, 00:34:34.851 "zcopy": true, 00:34:34.851 "get_zone_info": false, 00:34:34.851 "zone_management": false, 00:34:34.851 "zone_append": false, 00:34:34.851 "compare": false, 00:34:34.851 "compare_and_write": false, 00:34:34.851 "abort": true, 00:34:34.851 "seek_hole": false, 00:34:34.851 "seek_data": false, 00:34:34.851 "copy": true, 00:34:34.851 "nvme_iov_md": false 00:34:34.851 }, 00:34:34.851 "memory_domains": [ 00:34:34.851 { 00:34:34.851 "dma_device_id": "system", 00:34:34.851 "dma_device_type": 1 00:34:34.851 }, 00:34:34.851 { 00:34:34.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:34.851 "dma_device_type": 2 00:34:34.851 } 00:34:34.851 ], 00:34:34.851 "driver_specific": {} 00:34:34.851 }' 00:34:34.851 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:34.851 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:34.851 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:34.851 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:34.851 11:46:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:35.109 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:35.109 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:35.109 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:35.109 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:35.109 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:35.109 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:35.367 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:35.367 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:35.367 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:34:35.367 11:46:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:35.624 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:35.624 "name": "BaseBdev4", 00:34:35.624 "aliases": [ 00:34:35.624 "5af1ca4f-5d68-420e-8029-9cea2a70abd9" 00:34:35.624 ], 00:34:35.624 "product_name": "Malloc disk", 00:34:35.624 "block_size": 512, 00:34:35.624 "num_blocks": 65536, 00:34:35.624 "uuid": "5af1ca4f-5d68-420e-8029-9cea2a70abd9", 00:34:35.624 "assigned_rate_limits": { 00:34:35.624 "rw_ios_per_sec": 0, 00:34:35.624 "rw_mbytes_per_sec": 0, 00:34:35.624 "r_mbytes_per_sec": 0, 00:34:35.624 "w_mbytes_per_sec": 0 00:34:35.624 }, 00:34:35.624 "claimed": true, 00:34:35.624 "claim_type": "exclusive_write", 00:34:35.624 "zoned": false, 00:34:35.624 "supported_io_types": { 00:34:35.624 "read": true, 00:34:35.624 "write": true, 00:34:35.624 "unmap": true, 00:34:35.624 "flush": true, 00:34:35.625 "reset": true, 00:34:35.625 "nvme_admin": false, 00:34:35.625 "nvme_io": false, 00:34:35.625 "nvme_io_md": false, 00:34:35.625 "write_zeroes": true, 00:34:35.625 "zcopy": true, 00:34:35.625 "get_zone_info": false, 00:34:35.625 "zone_management": false, 00:34:35.625 "zone_append": false, 00:34:35.625 "compare": false, 00:34:35.625 "compare_and_write": false, 00:34:35.625 "abort": true, 00:34:35.625 "seek_hole": false, 00:34:35.625 "seek_data": false, 00:34:35.625 "copy": true, 00:34:35.625 "nvme_iov_md": false 00:34:35.625 }, 00:34:35.625 "memory_domains": [ 00:34:35.625 { 00:34:35.625 "dma_device_id": "system", 00:34:35.625 "dma_device_type": 1 00:34:35.625 }, 00:34:35.625 { 00:34:35.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:35.625 "dma_device_type": 2 00:34:35.625 } 00:34:35.625 ], 00:34:35.625 "driver_specific": {} 00:34:35.625 }' 00:34:35.625 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:35.625 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:35.625 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:35.625 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:35.625 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:35.625 11:46:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:35.625 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:35.883 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:35.883 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:35.883 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:35.883 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:35.883 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:35.883 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:36.142 [2024-07-13 11:46:10.834012] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:36.142 [2024-07-13 11:46:10.834044] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:36.142 [2024-07-13 11:46:10.834110] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:36.142 [2024-07-13 11:46:10.834429] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:36.142 [2024-07-13 11:46:10.834451] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name Existed_Raid, state offline 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 156546 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 156546 ']' 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 156546 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 156546 00:34:36.142 killing process with pid 156546 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 156546' 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 156546 00:34:36.142 11:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 156546 00:34:36.142 [2024-07-13 11:46:10.868769] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:36.399 [2024-07-13 11:46:11.120758] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:37.333 ************************************ 00:34:37.333 END TEST raid5f_state_function_test_sb 00:34:37.333 ************************************ 00:34:37.333 11:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:34:37.333 00:34:37.333 real 0m33.234s 00:34:37.333 user 1m2.569s 00:34:37.333 sys 0m3.578s 00:34:37.333 11:46:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:34:37.333 11:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:37.333 11:46:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:37.333 11:46:12 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:34:37.333 11:46:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:34:37.333 11:46:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:37.333 11:46:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:37.593 ************************************ 00:34:37.593 START TEST raid5f_superblock_test 00:34:37.593 ************************************ 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid5f 4 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=157661 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 157661 /var/tmp/spdk-raid.sock 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 157661 ']' 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:37.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:37.593 11:46:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:34:37.593 [2024-07-13 11:46:12.171694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:34:37.593 [2024-07-13 11:46:12.172171] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157661 ] 00:34:37.593 [2024-07-13 11:46:12.344847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:37.852 [2024-07-13 11:46:12.578499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.111 [2024-07-13 11:46:12.765078] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:34:38.676 malloc1 00:34:38.676 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:38.938 [2024-07-13 11:46:13.530041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:38.938 [2024-07-13 11:46:13.530277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.938 [2024-07-13 11:46:13.530343] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:34:38.938 [2024-07-13 11:46:13.530712] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.938 [2024-07-13 11:46:13.533053] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.938 [2024-07-13 11:46:13.533212] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:38.938 pt1 00:34:38.938 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 
00:34:38.938 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:38.938 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:34:38.938 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:34:38.938 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:38.938 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:38.938 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:34:38.938 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:38.938 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:34:39.197 malloc2 00:34:39.197 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:39.455 [2024-07-13 11:46:13.960902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:39.455 [2024-07-13 11:46:13.961148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:39.455 [2024-07-13 11:46:13.961285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:34:39.455 [2024-07-13 11:46:13.961396] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:39.455 [2024-07-13 11:46:13.963485] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:39.455 [2024-07-13 11:46:13.963659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:39.455 pt2 00:34:39.455 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:34:39.455 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:39.455 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:34:39.455 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:34:39.455 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:39.455 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:39.455 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:34:39.455 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:39.455 11:46:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:34:39.455 malloc3 00:34:39.455 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:39.713 [2024-07-13 11:46:14.373185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:39.713 [2024-07-13 11:46:14.373406] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:39.713 [2024-07-13 11:46:14.373479] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:34:39.713 [2024-07-13 11:46:14.373697] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:39.713 [2024-07-13 11:46:14.375719] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:39.713 [2024-07-13 11:46:14.375861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:39.713 pt3 00:34:39.713 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:34:39.713 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:39.713 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:34:39.713 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:34:39.713 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:34:39.713 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:39.713 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:34:39.713 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:39.713 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:34:39.971 malloc4 00:34:39.971 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:40.229 [2024-07-13 11:46:14.789422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:40.229 [2024-07-13 11:46:14.789640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:40.229 [2024-07-13 11:46:14.789707] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:40.229 [2024-07-13 11:46:14.789999] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:40.229 [2024-07-13 11:46:14.792007] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:40.229 [2024-07-13 11:46:14.792171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:40.229 pt4 00:34:40.229 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:34:40.229 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:34:40.229 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:34:40.229 [2024-07-13 11:46:14.977480] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:40.229 [2024-07-13 11:46:14.979447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:40.229 [2024-07-13 11:46:14.979637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:40.229 [2024-07-13 11:46:14.979730] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt4 is claimed 00:34:40.229 [2024-07-13 11:46:14.980063] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:34:40.229 [2024-07-13 11:46:14.980174] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:40.229 [2024-07-13 11:46:14.980351] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:34:40.229 [2024-07-13 11:46:14.986583] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:34:40.487 [2024-07-13 11:46:14.986711] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:34:40.487 [2024-07-13 11:46:14.987079] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.487 11:46:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.746 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:40.746 "name": "raid_bdev1", 00:34:40.746 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:40.746 "strip_size_kb": 64, 00:34:40.746 "state": "online", 00:34:40.746 "raid_level": "raid5f", 00:34:40.746 "superblock": true, 00:34:40.746 "num_base_bdevs": 4, 00:34:40.746 "num_base_bdevs_discovered": 4, 00:34:40.746 "num_base_bdevs_operational": 4, 00:34:40.746 "base_bdevs_list": [ 00:34:40.746 { 00:34:40.746 "name": "pt1", 00:34:40.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:40.746 "is_configured": true, 00:34:40.746 "data_offset": 2048, 00:34:40.746 "data_size": 63488 00:34:40.746 }, 00:34:40.746 { 00:34:40.746 "name": "pt2", 00:34:40.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:40.746 "is_configured": true, 00:34:40.746 "data_offset": 2048, 00:34:40.746 "data_size": 63488 00:34:40.746 }, 00:34:40.746 { 00:34:40.746 "name": "pt3", 00:34:40.746 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:40.746 "is_configured": true, 00:34:40.746 "data_offset": 2048, 00:34:40.746 "data_size": 63488 00:34:40.746 }, 00:34:40.746 { 00:34:40.746 "name": "pt4", 00:34:40.746 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:40.746 "is_configured": true, 00:34:40.746 
"data_offset": 2048, 00:34:40.746 "data_size": 63488 00:34:40.746 } 00:34:40.746 ] 00:34:40.746 }' 00:34:40.746 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:40.746 11:46:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:41.312 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:34:41.312 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:41.312 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:41.312 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:41.312 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:41.312 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:41.312 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:41.312 11:46:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:41.570 [2024-07-13 11:46:16.118217] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:41.570 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:41.570 "name": "raid_bdev1", 00:34:41.570 "aliases": [ 00:34:41.570 "44912eb9-f196-40ba-8b48-b7c9ddc9e028" 00:34:41.570 ], 00:34:41.570 "product_name": "Raid Volume", 00:34:41.570 "block_size": 512, 00:34:41.570 "num_blocks": 190464, 00:34:41.570 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:41.570 "assigned_rate_limits": { 00:34:41.570 "rw_ios_per_sec": 0, 00:34:41.570 "rw_mbytes_per_sec": 0, 00:34:41.570 "r_mbytes_per_sec": 0, 00:34:41.570 "w_mbytes_per_sec": 0 00:34:41.570 }, 00:34:41.570 "claimed": false, 00:34:41.570 "zoned": false, 00:34:41.570 "supported_io_types": { 00:34:41.570 "read": true, 00:34:41.570 "write": true, 00:34:41.570 "unmap": false, 00:34:41.570 "flush": false, 00:34:41.570 "reset": true, 00:34:41.570 "nvme_admin": false, 00:34:41.570 "nvme_io": false, 00:34:41.570 "nvme_io_md": false, 00:34:41.570 "write_zeroes": true, 00:34:41.570 "zcopy": false, 00:34:41.570 "get_zone_info": false, 00:34:41.570 "zone_management": false, 00:34:41.570 "zone_append": false, 00:34:41.570 "compare": false, 00:34:41.570 "compare_and_write": false, 00:34:41.570 "abort": false, 00:34:41.570 "seek_hole": false, 00:34:41.570 "seek_data": false, 00:34:41.570 "copy": false, 00:34:41.570 "nvme_iov_md": false 00:34:41.570 }, 00:34:41.570 "driver_specific": { 00:34:41.570 "raid": { 00:34:41.570 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:41.570 "strip_size_kb": 64, 00:34:41.570 "state": "online", 00:34:41.570 "raid_level": "raid5f", 00:34:41.570 "superblock": true, 00:34:41.570 "num_base_bdevs": 4, 00:34:41.570 "num_base_bdevs_discovered": 4, 00:34:41.570 "num_base_bdevs_operational": 4, 00:34:41.570 "base_bdevs_list": [ 00:34:41.570 { 00:34:41.570 "name": "pt1", 00:34:41.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:41.570 "is_configured": true, 00:34:41.570 "data_offset": 2048, 00:34:41.570 "data_size": 63488 00:34:41.570 }, 00:34:41.570 { 00:34:41.570 "name": "pt2", 00:34:41.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:41.570 "is_configured": true, 00:34:41.570 "data_offset": 2048, 00:34:41.570 "data_size": 63488 
00:34:41.570 }, 00:34:41.570 { 00:34:41.570 "name": "pt3", 00:34:41.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:41.570 "is_configured": true, 00:34:41.570 "data_offset": 2048, 00:34:41.570 "data_size": 63488 00:34:41.570 }, 00:34:41.570 { 00:34:41.570 "name": "pt4", 00:34:41.570 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:41.570 "is_configured": true, 00:34:41.570 "data_offset": 2048, 00:34:41.570 "data_size": 63488 00:34:41.570 } 00:34:41.570 ] 00:34:41.570 } 00:34:41.570 } 00:34:41.570 }' 00:34:41.570 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:41.570 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:41.570 pt2 00:34:41.570 pt3 00:34:41.570 pt4' 00:34:41.570 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:41.570 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:41.570 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:41.829 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:41.829 "name": "pt1", 00:34:41.829 "aliases": [ 00:34:41.829 "00000000-0000-0000-0000-000000000001" 00:34:41.829 ], 00:34:41.829 "product_name": "passthru", 00:34:41.829 "block_size": 512, 00:34:41.829 "num_blocks": 65536, 00:34:41.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:41.829 "assigned_rate_limits": { 00:34:41.829 "rw_ios_per_sec": 0, 00:34:41.829 "rw_mbytes_per_sec": 0, 00:34:41.829 "r_mbytes_per_sec": 0, 00:34:41.829 "w_mbytes_per_sec": 0 00:34:41.829 }, 00:34:41.829 "claimed": true, 00:34:41.829 "claim_type": "exclusive_write", 00:34:41.829 "zoned": false, 00:34:41.829 "supported_io_types": { 00:34:41.829 "read": true, 00:34:41.829 "write": true, 00:34:41.829 "unmap": true, 00:34:41.829 "flush": true, 00:34:41.829 "reset": true, 00:34:41.829 "nvme_admin": false, 00:34:41.829 "nvme_io": false, 00:34:41.829 "nvme_io_md": false, 00:34:41.829 "write_zeroes": true, 00:34:41.829 "zcopy": true, 00:34:41.829 "get_zone_info": false, 00:34:41.829 "zone_management": false, 00:34:41.829 "zone_append": false, 00:34:41.829 "compare": false, 00:34:41.829 "compare_and_write": false, 00:34:41.829 "abort": true, 00:34:41.829 "seek_hole": false, 00:34:41.829 "seek_data": false, 00:34:41.829 "copy": true, 00:34:41.829 "nvme_iov_md": false 00:34:41.829 }, 00:34:41.829 "memory_domains": [ 00:34:41.829 { 00:34:41.829 "dma_device_id": "system", 00:34:41.829 "dma_device_type": 1 00:34:41.829 }, 00:34:41.829 { 00:34:41.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:41.829 "dma_device_type": 2 00:34:41.829 } 00:34:41.829 ], 00:34:41.829 "driver_specific": { 00:34:41.829 "passthru": { 00:34:41.829 "name": "pt1", 00:34:41.829 "base_bdev_name": "malloc1" 00:34:41.829 } 00:34:41.829 } 00:34:41.829 }' 00:34:41.829 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:41.829 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:41.829 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:41.829 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:41.829 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # 
jq .md_size 00:34:41.829 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:41.829 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:42.088 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:42.088 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:42.088 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:42.088 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:42.088 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:42.088 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:42.088 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:42.088 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:42.347 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:42.347 "name": "pt2", 00:34:42.347 "aliases": [ 00:34:42.347 "00000000-0000-0000-0000-000000000002" 00:34:42.347 ], 00:34:42.347 "product_name": "passthru", 00:34:42.347 "block_size": 512, 00:34:42.347 "num_blocks": 65536, 00:34:42.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:42.347 "assigned_rate_limits": { 00:34:42.347 "rw_ios_per_sec": 0, 00:34:42.347 "rw_mbytes_per_sec": 0, 00:34:42.347 "r_mbytes_per_sec": 0, 00:34:42.347 "w_mbytes_per_sec": 0 00:34:42.347 }, 00:34:42.347 "claimed": true, 00:34:42.347 "claim_type": "exclusive_write", 00:34:42.347 "zoned": false, 00:34:42.347 "supported_io_types": { 00:34:42.347 "read": true, 00:34:42.347 "write": true, 00:34:42.347 "unmap": true, 00:34:42.347 "flush": true, 00:34:42.347 "reset": true, 00:34:42.347 "nvme_admin": false, 00:34:42.347 "nvme_io": false, 00:34:42.347 "nvme_io_md": false, 00:34:42.347 "write_zeroes": true, 00:34:42.347 "zcopy": true, 00:34:42.347 "get_zone_info": false, 00:34:42.347 "zone_management": false, 00:34:42.347 "zone_append": false, 00:34:42.347 "compare": false, 00:34:42.347 "compare_and_write": false, 00:34:42.347 "abort": true, 00:34:42.347 "seek_hole": false, 00:34:42.347 "seek_data": false, 00:34:42.347 "copy": true, 00:34:42.347 "nvme_iov_md": false 00:34:42.347 }, 00:34:42.347 "memory_domains": [ 00:34:42.347 { 00:34:42.347 "dma_device_id": "system", 00:34:42.347 "dma_device_type": 1 00:34:42.347 }, 00:34:42.347 { 00:34:42.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:42.347 "dma_device_type": 2 00:34:42.347 } 00:34:42.347 ], 00:34:42.347 "driver_specific": { 00:34:42.347 "passthru": { 00:34:42.347 "name": "pt2", 00:34:42.347 "base_bdev_name": "malloc2" 00:34:42.347 } 00:34:42.347 } 00:34:42.347 }' 00:34:42.347 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:42.347 11:46:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:42.347 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:42.347 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:42.347 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:34:42.606 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:42.878 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:42.878 "name": "pt3", 00:34:42.878 "aliases": [ 00:34:42.878 "00000000-0000-0000-0000-000000000003" 00:34:42.878 ], 00:34:42.878 "product_name": "passthru", 00:34:42.878 "block_size": 512, 00:34:42.878 "num_blocks": 65536, 00:34:42.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:42.878 "assigned_rate_limits": { 00:34:42.878 "rw_ios_per_sec": 0, 00:34:42.878 "rw_mbytes_per_sec": 0, 00:34:42.878 "r_mbytes_per_sec": 0, 00:34:42.878 "w_mbytes_per_sec": 0 00:34:42.878 }, 00:34:42.878 "claimed": true, 00:34:42.878 "claim_type": "exclusive_write", 00:34:42.878 "zoned": false, 00:34:42.878 "supported_io_types": { 00:34:42.878 "read": true, 00:34:42.878 "write": true, 00:34:42.878 "unmap": true, 00:34:42.878 "flush": true, 00:34:42.878 "reset": true, 00:34:42.878 "nvme_admin": false, 00:34:42.878 "nvme_io": false, 00:34:42.878 "nvme_io_md": false, 00:34:42.878 "write_zeroes": true, 00:34:42.878 "zcopy": true, 00:34:42.878 "get_zone_info": false, 00:34:42.878 "zone_management": false, 00:34:42.878 "zone_append": false, 00:34:42.878 "compare": false, 00:34:42.878 "compare_and_write": false, 00:34:42.878 "abort": true, 00:34:42.878 "seek_hole": false, 00:34:42.878 "seek_data": false, 00:34:42.878 "copy": true, 00:34:42.878 "nvme_iov_md": false 00:34:42.878 }, 00:34:42.878 "memory_domains": [ 00:34:42.878 { 00:34:42.878 "dma_device_id": "system", 00:34:42.878 "dma_device_type": 1 00:34:42.878 }, 00:34:42.878 { 00:34:42.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:42.878 "dma_device_type": 2 00:34:42.878 } 00:34:42.878 ], 00:34:42.879 "driver_specific": { 00:34:42.879 "passthru": { 00:34:42.879 "name": "pt3", 00:34:42.879 "base_bdev_name": "malloc3" 00:34:42.879 } 00:34:42.879 } 00:34:42.879 }' 00:34:42.879 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:42.879 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:42.879 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:42.879 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:43.215 
11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:34:43.215 11:46:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:43.495 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:43.495 "name": "pt4", 00:34:43.495 "aliases": [ 00:34:43.495 "00000000-0000-0000-0000-000000000004" 00:34:43.495 ], 00:34:43.495 "product_name": "passthru", 00:34:43.495 "block_size": 512, 00:34:43.495 "num_blocks": 65536, 00:34:43.495 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:43.495 "assigned_rate_limits": { 00:34:43.495 "rw_ios_per_sec": 0, 00:34:43.495 "rw_mbytes_per_sec": 0, 00:34:43.495 "r_mbytes_per_sec": 0, 00:34:43.495 "w_mbytes_per_sec": 0 00:34:43.495 }, 00:34:43.495 "claimed": true, 00:34:43.495 "claim_type": "exclusive_write", 00:34:43.495 "zoned": false, 00:34:43.495 "supported_io_types": { 00:34:43.495 "read": true, 00:34:43.495 "write": true, 00:34:43.495 "unmap": true, 00:34:43.495 "flush": true, 00:34:43.495 "reset": true, 00:34:43.495 "nvme_admin": false, 00:34:43.495 "nvme_io": false, 00:34:43.495 "nvme_io_md": false, 00:34:43.495 "write_zeroes": true, 00:34:43.495 "zcopy": true, 00:34:43.495 "get_zone_info": false, 00:34:43.495 "zone_management": false, 00:34:43.495 "zone_append": false, 00:34:43.495 "compare": false, 00:34:43.495 "compare_and_write": false, 00:34:43.495 "abort": true, 00:34:43.495 "seek_hole": false, 00:34:43.495 "seek_data": false, 00:34:43.495 "copy": true, 00:34:43.495 "nvme_iov_md": false 00:34:43.495 }, 00:34:43.495 "memory_domains": [ 00:34:43.495 { 00:34:43.495 "dma_device_id": "system", 00:34:43.495 "dma_device_type": 1 00:34:43.495 }, 00:34:43.495 { 00:34:43.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:43.495 "dma_device_type": 2 00:34:43.495 } 00:34:43.495 ], 00:34:43.495 "driver_specific": { 00:34:43.495 "passthru": { 00:34:43.495 "name": "pt4", 00:34:43.495 "base_bdev_name": "malloc4" 00:34:43.495 } 00:34:43.495 } 00:34:43.495 }' 00:34:43.495 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:43.495 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:43.763 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:43.763 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:43.763 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:43.763 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:43.763 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:43.763 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:43.763 11:46:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:43.763 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:43.763 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:44.021 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:44.021 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:44.021 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:34:44.022 [2024-07-13 11:46:18.759280] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:44.022 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=44912eb9-f196-40ba-8b48-b7c9ddc9e028 00:34:44.022 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 44912eb9-f196-40ba-8b48-b7c9ddc9e028 ']' 00:34:44.022 11:46:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:44.588 [2024-07-13 11:46:19.043133] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:44.588 [2024-07-13 11:46:19.043158] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:44.588 [2024-07-13 11:46:19.043242] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:44.588 [2024-07-13 11:46:19.043332] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:44.588 [2024-07-13 11:46:19.043344] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:34:44.588 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:44.588 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:34:44.588 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:34:44.588 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:34:44.588 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:44.588 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:44.846 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:44.846 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:45.105 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:45.105 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:45.362 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:45.362 11:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:45.362 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:45.362 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:45.621 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:34:45.879 [2024-07-13 11:46:20.507247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:45.879 [2024-07-13 11:46:20.508977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:45.879 [2024-07-13 11:46:20.509063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:45.879 [2024-07-13 11:46:20.509104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:34:45.879 [2024-07-13 11:46:20.509186] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:45.879 [2024-07-13 11:46:20.509277] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:45.879 [2024-07-13 11:46:20.509316] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:34:45.879 [2024-07-13 11:46:20.509351] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different 
raid bdev found on bdev malloc4 00:34:45.879 [2024-07-13 11:46:20.509385] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:45.879 [2024-07-13 11:46:20.509396] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state configuring 00:34:45.879 request: 00:34:45.879 { 00:34:45.879 "name": "raid_bdev1", 00:34:45.879 "raid_level": "raid5f", 00:34:45.879 "base_bdevs": [ 00:34:45.879 "malloc1", 00:34:45.879 "malloc2", 00:34:45.879 "malloc3", 00:34:45.879 "malloc4" 00:34:45.879 ], 00:34:45.879 "strip_size_kb": 64, 00:34:45.879 "superblock": false, 00:34:45.879 "method": "bdev_raid_create", 00:34:45.879 "req_id": 1 00:34:45.879 } 00:34:45.879 Got JSON-RPC error response 00:34:45.879 response: 00:34:45.879 { 00:34:45.879 "code": -17, 00:34:45.879 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:45.879 } 00:34:45.879 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:34:45.879 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:45.879 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:45.879 11:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:45.879 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:45.880 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:34:46.139 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:34:46.139 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:34:46.139 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:46.397 [2024-07-13 11:46:20.927252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:46.397 [2024-07-13 11:46:20.927310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:46.397 [2024-07-13 11:46:20.927337] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:46.397 [2024-07-13 11:46:20.927377] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:46.397 [2024-07-13 11:46:20.929513] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:46.397 [2024-07-13 11:46:20.929559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:46.397 [2024-07-13 11:46:20.929643] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:46.397 [2024-07-13 11:46:20.929698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:46.397 pt1 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.397 11:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:46.397 11:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:46.397 "name": "raid_bdev1", 00:34:46.397 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:46.397 "strip_size_kb": 64, 00:34:46.397 "state": "configuring", 00:34:46.397 "raid_level": "raid5f", 00:34:46.397 "superblock": true, 00:34:46.397 "num_base_bdevs": 4, 00:34:46.397 "num_base_bdevs_discovered": 1, 00:34:46.397 "num_base_bdevs_operational": 4, 00:34:46.397 "base_bdevs_list": [ 00:34:46.397 { 00:34:46.397 "name": "pt1", 00:34:46.397 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:46.397 "is_configured": true, 00:34:46.397 "data_offset": 2048, 00:34:46.397 "data_size": 63488 00:34:46.397 }, 00:34:46.397 { 00:34:46.397 "name": null, 00:34:46.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:46.397 "is_configured": false, 00:34:46.397 "data_offset": 2048, 00:34:46.397 "data_size": 63488 00:34:46.397 }, 00:34:46.397 { 00:34:46.397 "name": null, 00:34:46.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:46.397 "is_configured": false, 00:34:46.397 "data_offset": 2048, 00:34:46.398 "data_size": 63488 00:34:46.398 }, 00:34:46.398 { 00:34:46.398 "name": null, 00:34:46.398 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:46.398 "is_configured": false, 00:34:46.398 "data_offset": 2048, 00:34:46.398 "data_size": 63488 00:34:46.398 } 00:34:46.398 ] 00:34:46.398 }' 00:34:46.398 11:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:46.398 11:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:47.332 11:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:34:47.332 11:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:47.332 [2024-07-13 11:46:21.979470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:47.332 [2024-07-13 11:46:21.979538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:47.332 [2024-07-13 11:46:21.979572] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:34:47.332 [2024-07-13 11:46:21.979603] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:47.332 [2024-07-13 11:46:21.979973] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:47.332 [2024-07-13 11:46:21.980008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:34:47.332 [2024-07-13 11:46:21.980088] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:47.332 [2024-07-13 11:46:21.980111] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:47.332 pt2 00:34:47.332 11:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:47.591 [2024-07-13 11:46:22.219534] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:47.591 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.849 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:47.849 "name": "raid_bdev1", 00:34:47.849 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:47.849 "strip_size_kb": 64, 00:34:47.849 "state": "configuring", 00:34:47.849 "raid_level": "raid5f", 00:34:47.849 "superblock": true, 00:34:47.849 "num_base_bdevs": 4, 00:34:47.849 "num_base_bdevs_discovered": 1, 00:34:47.849 "num_base_bdevs_operational": 4, 00:34:47.849 "base_bdevs_list": [ 00:34:47.849 { 00:34:47.849 "name": "pt1", 00:34:47.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:47.849 "is_configured": true, 00:34:47.849 "data_offset": 2048, 00:34:47.849 "data_size": 63488 00:34:47.849 }, 00:34:47.849 { 00:34:47.849 "name": null, 00:34:47.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:47.850 "is_configured": false, 00:34:47.850 "data_offset": 2048, 00:34:47.850 "data_size": 63488 00:34:47.850 }, 00:34:47.850 { 00:34:47.850 "name": null, 00:34:47.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:47.850 "is_configured": false, 00:34:47.850 "data_offset": 2048, 00:34:47.850 "data_size": 63488 00:34:47.850 }, 00:34:47.850 { 00:34:47.850 "name": null, 00:34:47.850 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:47.850 "is_configured": false, 00:34:47.850 "data_offset": 2048, 00:34:47.850 "data_size": 63488 00:34:47.850 } 00:34:47.850 ] 00:34:47.850 }' 00:34:47.850 11:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:47.850 11:46:22 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:48.415 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:34:48.415 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:48.415 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:48.673 [2024-07-13 11:46:23.327737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:48.673 [2024-07-13 11:46:23.327789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.673 [2024-07-13 11:46:23.327818] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:34:48.673 [2024-07-13 11:46:23.327857] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.673 [2024-07-13 11:46:23.328215] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.673 [2024-07-13 11:46:23.328256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:48.673 [2024-07-13 11:46:23.328331] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:48.673 [2024-07-13 11:46:23.328353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:48.673 pt2 00:34:48.673 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:34:48.673 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:48.673 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:48.931 [2024-07-13 11:46:23.579792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:48.931 [2024-07-13 11:46:23.579844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:48.931 [2024-07-13 11:46:23.579867] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:34:48.931 [2024-07-13 11:46:23.579902] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:48.931 [2024-07-13 11:46:23.580249] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:48.931 [2024-07-13 11:46:23.580288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:48.931 [2024-07-13 11:46:23.580361] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:48.931 [2024-07-13 11:46:23.580383] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:48.931 pt3 00:34:48.931 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:34:48.931 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:48.931 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:34:49.189 [2024-07-13 11:46:23.835829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:49.189 [2024-07-13 11:46:23.835886] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:49.189 [2024-07-13 11:46:23.835910] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:34:49.189 [2024-07-13 11:46:23.835950] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:49.189 [2024-07-13 11:46:23.836316] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:49.189 [2024-07-13 11:46:23.836360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:49.189 [2024-07-13 11:46:23.836436] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:34:49.189 [2024-07-13 11:46:23.836482] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:49.189 [2024-07-13 11:46:23.836641] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:34:49.189 [2024-07-13 11:46:23.836664] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:49.189 [2024-07-13 11:46:23.836755] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:34:49.189 [2024-07-13 11:46:23.841769] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:34:49.189 [2024-07-13 11:46:23.841792] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:34:49.189 pt4 00:34:49.189 [2024-07-13 11:46:23.841936] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.189 11:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.447 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:49.447 "name": "raid_bdev1", 00:34:49.447 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:49.447 "strip_size_kb": 64, 00:34:49.447 "state": "online", 00:34:49.447 "raid_level": "raid5f", 00:34:49.447 
"superblock": true, 00:34:49.447 "num_base_bdevs": 4, 00:34:49.447 "num_base_bdevs_discovered": 4, 00:34:49.447 "num_base_bdevs_operational": 4, 00:34:49.447 "base_bdevs_list": [ 00:34:49.447 { 00:34:49.447 "name": "pt1", 00:34:49.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:49.447 "is_configured": true, 00:34:49.447 "data_offset": 2048, 00:34:49.447 "data_size": 63488 00:34:49.447 }, 00:34:49.447 { 00:34:49.447 "name": "pt2", 00:34:49.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:49.447 "is_configured": true, 00:34:49.447 "data_offset": 2048, 00:34:49.447 "data_size": 63488 00:34:49.447 }, 00:34:49.447 { 00:34:49.447 "name": "pt3", 00:34:49.447 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:49.447 "is_configured": true, 00:34:49.447 "data_offset": 2048, 00:34:49.447 "data_size": 63488 00:34:49.447 }, 00:34:49.447 { 00:34:49.447 "name": "pt4", 00:34:49.447 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:49.447 "is_configured": true, 00:34:49.447 "data_offset": 2048, 00:34:49.447 "data_size": 63488 00:34:49.447 } 00:34:49.447 ] 00:34:49.447 }' 00:34:49.447 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:49.447 11:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:50.012 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:34:50.012 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:50.012 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:50.012 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:50.012 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:50.012 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:34:50.012 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:50.012 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:50.269 [2024-07-13 11:46:24.868204] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:50.269 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:50.269 "name": "raid_bdev1", 00:34:50.269 "aliases": [ 00:34:50.269 "44912eb9-f196-40ba-8b48-b7c9ddc9e028" 00:34:50.269 ], 00:34:50.269 "product_name": "Raid Volume", 00:34:50.269 "block_size": 512, 00:34:50.269 "num_blocks": 190464, 00:34:50.269 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:50.269 "assigned_rate_limits": { 00:34:50.269 "rw_ios_per_sec": 0, 00:34:50.269 "rw_mbytes_per_sec": 0, 00:34:50.269 "r_mbytes_per_sec": 0, 00:34:50.269 "w_mbytes_per_sec": 0 00:34:50.269 }, 00:34:50.269 "claimed": false, 00:34:50.269 "zoned": false, 00:34:50.269 "supported_io_types": { 00:34:50.269 "read": true, 00:34:50.269 "write": true, 00:34:50.269 "unmap": false, 00:34:50.269 "flush": false, 00:34:50.269 "reset": true, 00:34:50.269 "nvme_admin": false, 00:34:50.269 "nvme_io": false, 00:34:50.269 "nvme_io_md": false, 00:34:50.269 "write_zeroes": true, 00:34:50.269 "zcopy": false, 00:34:50.269 "get_zone_info": false, 00:34:50.269 "zone_management": false, 00:34:50.269 "zone_append": false, 00:34:50.269 "compare": false, 00:34:50.269 "compare_and_write": false, 00:34:50.269 
"abort": false, 00:34:50.269 "seek_hole": false, 00:34:50.269 "seek_data": false, 00:34:50.269 "copy": false, 00:34:50.269 "nvme_iov_md": false 00:34:50.269 }, 00:34:50.269 "driver_specific": { 00:34:50.269 "raid": { 00:34:50.269 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:50.269 "strip_size_kb": 64, 00:34:50.269 "state": "online", 00:34:50.269 "raid_level": "raid5f", 00:34:50.269 "superblock": true, 00:34:50.269 "num_base_bdevs": 4, 00:34:50.269 "num_base_bdevs_discovered": 4, 00:34:50.269 "num_base_bdevs_operational": 4, 00:34:50.269 "base_bdevs_list": [ 00:34:50.269 { 00:34:50.269 "name": "pt1", 00:34:50.269 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:50.269 "is_configured": true, 00:34:50.269 "data_offset": 2048, 00:34:50.269 "data_size": 63488 00:34:50.269 }, 00:34:50.269 { 00:34:50.269 "name": "pt2", 00:34:50.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:50.269 "is_configured": true, 00:34:50.269 "data_offset": 2048, 00:34:50.269 "data_size": 63488 00:34:50.269 }, 00:34:50.269 { 00:34:50.269 "name": "pt3", 00:34:50.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:50.269 "is_configured": true, 00:34:50.269 "data_offset": 2048, 00:34:50.269 "data_size": 63488 00:34:50.269 }, 00:34:50.269 { 00:34:50.269 "name": "pt4", 00:34:50.269 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:50.269 "is_configured": true, 00:34:50.269 "data_offset": 2048, 00:34:50.269 "data_size": 63488 00:34:50.269 } 00:34:50.269 ] 00:34:50.269 } 00:34:50.269 } 00:34:50.269 }' 00:34:50.270 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:50.270 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:50.270 pt2 00:34:50.270 pt3 00:34:50.270 pt4' 00:34:50.270 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:50.270 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:50.270 11:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:50.527 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:50.527 "name": "pt1", 00:34:50.527 "aliases": [ 00:34:50.527 "00000000-0000-0000-0000-000000000001" 00:34:50.527 ], 00:34:50.527 "product_name": "passthru", 00:34:50.527 "block_size": 512, 00:34:50.527 "num_blocks": 65536, 00:34:50.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:50.527 "assigned_rate_limits": { 00:34:50.527 "rw_ios_per_sec": 0, 00:34:50.527 "rw_mbytes_per_sec": 0, 00:34:50.527 "r_mbytes_per_sec": 0, 00:34:50.527 "w_mbytes_per_sec": 0 00:34:50.527 }, 00:34:50.527 "claimed": true, 00:34:50.527 "claim_type": "exclusive_write", 00:34:50.527 "zoned": false, 00:34:50.527 "supported_io_types": { 00:34:50.527 "read": true, 00:34:50.527 "write": true, 00:34:50.527 "unmap": true, 00:34:50.527 "flush": true, 00:34:50.527 "reset": true, 00:34:50.527 "nvme_admin": false, 00:34:50.527 "nvme_io": false, 00:34:50.527 "nvme_io_md": false, 00:34:50.527 "write_zeroes": true, 00:34:50.527 "zcopy": true, 00:34:50.527 "get_zone_info": false, 00:34:50.527 "zone_management": false, 00:34:50.527 "zone_append": false, 00:34:50.527 "compare": false, 00:34:50.527 "compare_and_write": false, 00:34:50.527 "abort": true, 00:34:50.527 "seek_hole": false, 00:34:50.527 "seek_data": false, 
00:34:50.527 "copy": true, 00:34:50.527 "nvme_iov_md": false 00:34:50.527 }, 00:34:50.527 "memory_domains": [ 00:34:50.527 { 00:34:50.527 "dma_device_id": "system", 00:34:50.527 "dma_device_type": 1 00:34:50.527 }, 00:34:50.527 { 00:34:50.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:50.527 "dma_device_type": 2 00:34:50.527 } 00:34:50.527 ], 00:34:50.527 "driver_specific": { 00:34:50.527 "passthru": { 00:34:50.527 "name": "pt1", 00:34:50.527 "base_bdev_name": "malloc1" 00:34:50.527 } 00:34:50.527 } 00:34:50.527 }' 00:34:50.527 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:50.527 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:50.785 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:50.785 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:50.785 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:50.785 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:50.785 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:50.785 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:50.785 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:50.785 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.043 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.043 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:51.043 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:51.043 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:51.043 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:51.301 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:51.301 "name": "pt2", 00:34:51.301 "aliases": [ 00:34:51.301 "00000000-0000-0000-0000-000000000002" 00:34:51.301 ], 00:34:51.301 "product_name": "passthru", 00:34:51.301 "block_size": 512, 00:34:51.301 "num_blocks": 65536, 00:34:51.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:51.301 "assigned_rate_limits": { 00:34:51.301 "rw_ios_per_sec": 0, 00:34:51.301 "rw_mbytes_per_sec": 0, 00:34:51.301 "r_mbytes_per_sec": 0, 00:34:51.301 "w_mbytes_per_sec": 0 00:34:51.301 }, 00:34:51.301 "claimed": true, 00:34:51.301 "claim_type": "exclusive_write", 00:34:51.301 "zoned": false, 00:34:51.301 "supported_io_types": { 00:34:51.301 "read": true, 00:34:51.301 "write": true, 00:34:51.301 "unmap": true, 00:34:51.301 "flush": true, 00:34:51.301 "reset": true, 00:34:51.301 "nvme_admin": false, 00:34:51.301 "nvme_io": false, 00:34:51.301 "nvme_io_md": false, 00:34:51.301 "write_zeroes": true, 00:34:51.301 "zcopy": true, 00:34:51.301 "get_zone_info": false, 00:34:51.301 "zone_management": false, 00:34:51.301 "zone_append": false, 00:34:51.301 "compare": false, 00:34:51.301 "compare_and_write": false, 00:34:51.301 "abort": true, 00:34:51.301 "seek_hole": false, 00:34:51.301 "seek_data": false, 00:34:51.301 "copy": true, 00:34:51.301 "nvme_iov_md": false 00:34:51.301 }, 00:34:51.301 "memory_domains": [ 00:34:51.301 
{ 00:34:51.301 "dma_device_id": "system", 00:34:51.301 "dma_device_type": 1 00:34:51.301 }, 00:34:51.301 { 00:34:51.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:51.301 "dma_device_type": 2 00:34:51.301 } 00:34:51.301 ], 00:34:51.301 "driver_specific": { 00:34:51.301 "passthru": { 00:34:51.301 "name": "pt2", 00:34:51.301 "base_bdev_name": "malloc2" 00:34:51.301 } 00:34:51.301 } 00:34:51.301 }' 00:34:51.301 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.301 11:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:51.301 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:51.301 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.560 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:51.560 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:51.560 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.560 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:51.560 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:51.560 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.560 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:51.819 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:51.819 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:51.819 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:34:51.819 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:52.077 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:52.077 "name": "pt3", 00:34:52.077 "aliases": [ 00:34:52.077 "00000000-0000-0000-0000-000000000003" 00:34:52.077 ], 00:34:52.077 "product_name": "passthru", 00:34:52.077 "block_size": 512, 00:34:52.077 "num_blocks": 65536, 00:34:52.077 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:52.077 "assigned_rate_limits": { 00:34:52.077 "rw_ios_per_sec": 0, 00:34:52.077 "rw_mbytes_per_sec": 0, 00:34:52.077 "r_mbytes_per_sec": 0, 00:34:52.077 "w_mbytes_per_sec": 0 00:34:52.077 }, 00:34:52.077 "claimed": true, 00:34:52.077 "claim_type": "exclusive_write", 00:34:52.077 "zoned": false, 00:34:52.077 "supported_io_types": { 00:34:52.077 "read": true, 00:34:52.077 "write": true, 00:34:52.077 "unmap": true, 00:34:52.077 "flush": true, 00:34:52.077 "reset": true, 00:34:52.077 "nvme_admin": false, 00:34:52.077 "nvme_io": false, 00:34:52.077 "nvme_io_md": false, 00:34:52.077 "write_zeroes": true, 00:34:52.077 "zcopy": true, 00:34:52.077 "get_zone_info": false, 00:34:52.077 "zone_management": false, 00:34:52.077 "zone_append": false, 00:34:52.077 "compare": false, 00:34:52.077 "compare_and_write": false, 00:34:52.077 "abort": true, 00:34:52.077 "seek_hole": false, 00:34:52.077 "seek_data": false, 00:34:52.077 "copy": true, 00:34:52.077 "nvme_iov_md": false 00:34:52.077 }, 00:34:52.077 "memory_domains": [ 00:34:52.077 { 00:34:52.077 "dma_device_id": "system", 00:34:52.077 "dma_device_type": 1 00:34:52.077 }, 00:34:52.077 { 00:34:52.077 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.077 "dma_device_type": 2 00:34:52.077 } 00:34:52.077 ], 00:34:52.077 "driver_specific": { 00:34:52.077 "passthru": { 00:34:52.077 "name": "pt3", 00:34:52.077 "base_bdev_name": "malloc3" 00:34:52.077 } 00:34:52.077 } 00:34:52.077 }' 00:34:52.077 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:52.077 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:52.077 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:52.077 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.077 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.335 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:52.335 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.335 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.335 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:52.335 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.335 11:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:52.335 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:52.335 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:52.335 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:34:52.335 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:52.593 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:52.593 "name": "pt4", 00:34:52.593 "aliases": [ 00:34:52.593 "00000000-0000-0000-0000-000000000004" 00:34:52.593 ], 00:34:52.593 "product_name": "passthru", 00:34:52.593 "block_size": 512, 00:34:52.593 "num_blocks": 65536, 00:34:52.593 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:52.593 "assigned_rate_limits": { 00:34:52.593 "rw_ios_per_sec": 0, 00:34:52.593 "rw_mbytes_per_sec": 0, 00:34:52.593 "r_mbytes_per_sec": 0, 00:34:52.593 "w_mbytes_per_sec": 0 00:34:52.593 }, 00:34:52.593 "claimed": true, 00:34:52.593 "claim_type": "exclusive_write", 00:34:52.593 "zoned": false, 00:34:52.593 "supported_io_types": { 00:34:52.593 "read": true, 00:34:52.593 "write": true, 00:34:52.593 "unmap": true, 00:34:52.593 "flush": true, 00:34:52.593 "reset": true, 00:34:52.593 "nvme_admin": false, 00:34:52.593 "nvme_io": false, 00:34:52.593 "nvme_io_md": false, 00:34:52.593 "write_zeroes": true, 00:34:52.593 "zcopy": true, 00:34:52.593 "get_zone_info": false, 00:34:52.593 "zone_management": false, 00:34:52.593 "zone_append": false, 00:34:52.593 "compare": false, 00:34:52.593 "compare_and_write": false, 00:34:52.593 "abort": true, 00:34:52.593 "seek_hole": false, 00:34:52.593 "seek_data": false, 00:34:52.593 "copy": true, 00:34:52.593 "nvme_iov_md": false 00:34:52.593 }, 00:34:52.593 "memory_domains": [ 00:34:52.593 { 00:34:52.593 "dma_device_id": "system", 00:34:52.593 "dma_device_type": 1 00:34:52.593 }, 00:34:52.593 { 00:34:52.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:52.593 "dma_device_type": 2 00:34:52.593 } 00:34:52.593 ], 00:34:52.593 
"driver_specific": { 00:34:52.593 "passthru": { 00:34:52.593 "name": "pt4", 00:34:52.593 "base_bdev_name": "malloc4" 00:34:52.593 } 00:34:52.593 } 00:34:52.593 }' 00:34:52.593 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:52.852 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:52.852 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:34:52.852 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.852 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:52.852 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:52.852 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:52.852 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:53.111 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:53.111 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:53.111 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:53.111 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:53.111 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:53.111 11:46:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:34:53.369 [2024-07-13 11:46:27.992883] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:53.369 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 44912eb9-f196-40ba-8b48-b7c9ddc9e028 '!=' 44912eb9-f196-40ba-8b48-b7c9ddc9e028 ']' 00:34:53.369 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:34:53.369 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:53.369 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:34:53.369 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:53.628 [2024-07-13 11:46:28.192785] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.628 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:53.887 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:53.887 "name": "raid_bdev1", 00:34:53.887 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:53.887 "strip_size_kb": 64, 00:34:53.887 "state": "online", 00:34:53.887 "raid_level": "raid5f", 00:34:53.887 "superblock": true, 00:34:53.887 "num_base_bdevs": 4, 00:34:53.887 "num_base_bdevs_discovered": 3, 00:34:53.887 "num_base_bdevs_operational": 3, 00:34:53.887 "base_bdevs_list": [ 00:34:53.887 { 00:34:53.887 "name": null, 00:34:53.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.887 "is_configured": false, 00:34:53.887 "data_offset": 2048, 00:34:53.887 "data_size": 63488 00:34:53.887 }, 00:34:53.887 { 00:34:53.887 "name": "pt2", 00:34:53.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:53.887 "is_configured": true, 00:34:53.887 "data_offset": 2048, 00:34:53.887 "data_size": 63488 00:34:53.887 }, 00:34:53.887 { 00:34:53.887 "name": "pt3", 00:34:53.887 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:53.887 "is_configured": true, 00:34:53.887 "data_offset": 2048, 00:34:53.887 "data_size": 63488 00:34:53.887 }, 00:34:53.887 { 00:34:53.887 "name": "pt4", 00:34:53.887 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:53.887 "is_configured": true, 00:34:53.887 "data_offset": 2048, 00:34:53.887 "data_size": 63488 00:34:53.887 } 00:34:53.887 ] 00:34:53.887 }' 00:34:53.887 11:46:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:53.887 11:46:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:54.454 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:54.713 [2024-07-13 11:46:29.316972] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:54.713 [2024-07-13 11:46:29.317004] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:54.713 [2024-07-13 11:46:29.317078] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:54.714 [2024-07-13 11:46:29.317159] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:54.714 [2024-07-13 11:46:29.317171] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:34:54.714 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:54.714 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:34:54.972 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:34:54.972 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:34:54.972 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:34:54.972 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # 
(( i < num_base_bdevs )) 00:34:54.972 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:55.230 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:34:55.230 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:34:55.230 11:46:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:34:55.488 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:34:55.488 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:34:55.488 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:55.747 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:34:55.747 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:34:55.747 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:34:55.747 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:34:55.747 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:56.006 [2024-07-13 11:46:30.521203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:56.006 [2024-07-13 11:46:30.521308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:56.006 [2024-07-13 11:46:30.521344] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:34:56.006 [2024-07-13 11:46:30.521388] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:56.006 [2024-07-13 11:46:30.523814] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:56.006 [2024-07-13 11:46:30.523862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:56.006 [2024-07-13 11:46:30.523989] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:56.006 [2024-07-13 11:46:30.524052] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:56.006 pt2 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:56.006 
11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:56.006 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:56.265 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:56.265 "name": "raid_bdev1", 00:34:56.265 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:56.265 "strip_size_kb": 64, 00:34:56.265 "state": "configuring", 00:34:56.265 "raid_level": "raid5f", 00:34:56.265 "superblock": true, 00:34:56.265 "num_base_bdevs": 4, 00:34:56.265 "num_base_bdevs_discovered": 1, 00:34:56.265 "num_base_bdevs_operational": 3, 00:34:56.265 "base_bdevs_list": [ 00:34:56.265 { 00:34:56.265 "name": null, 00:34:56.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.265 "is_configured": false, 00:34:56.265 "data_offset": 2048, 00:34:56.265 "data_size": 63488 00:34:56.265 }, 00:34:56.265 { 00:34:56.265 "name": "pt2", 00:34:56.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:56.265 "is_configured": true, 00:34:56.265 "data_offset": 2048, 00:34:56.265 "data_size": 63488 00:34:56.265 }, 00:34:56.265 { 00:34:56.265 "name": null, 00:34:56.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:56.265 "is_configured": false, 00:34:56.265 "data_offset": 2048, 00:34:56.265 "data_size": 63488 00:34:56.265 }, 00:34:56.265 { 00:34:56.265 "name": null, 00:34:56.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:56.265 "is_configured": false, 00:34:56.265 "data_offset": 2048, 00:34:56.265 "data_size": 63488 00:34:56.265 } 00:34:56.265 ] 00:34:56.265 }' 00:34:56.265 11:46:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:56.265 11:46:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.832 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:34:56.832 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:34:56.832 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:57.091 [2024-07-13 11:46:31.613367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:57.091 [2024-07-13 11:46:31.613464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:57.091 [2024-07-13 11:46:31.613502] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:34:57.091 [2024-07-13 11:46:31.613543] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:57.091 [2024-07-13 11:46:31.614110] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:57.091 [2024-07-13 11:46:31.614174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:57.091 [2024-07-13 11:46:31.614272] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:57.091 [2024-07-13 11:46:31.614301] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:57.091 pt3 
00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:57.091 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:57.091 "name": "raid_bdev1", 00:34:57.092 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:57.092 "strip_size_kb": 64, 00:34:57.092 "state": "configuring", 00:34:57.092 "raid_level": "raid5f", 00:34:57.092 "superblock": true, 00:34:57.092 "num_base_bdevs": 4, 00:34:57.092 "num_base_bdevs_discovered": 2, 00:34:57.092 "num_base_bdevs_operational": 3, 00:34:57.092 "base_bdevs_list": [ 00:34:57.092 { 00:34:57.092 "name": null, 00:34:57.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.092 "is_configured": false, 00:34:57.092 "data_offset": 2048, 00:34:57.092 "data_size": 63488 00:34:57.092 }, 00:34:57.092 { 00:34:57.092 "name": "pt2", 00:34:57.092 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:57.092 "is_configured": true, 00:34:57.092 "data_offset": 2048, 00:34:57.092 "data_size": 63488 00:34:57.092 }, 00:34:57.092 { 00:34:57.092 "name": "pt3", 00:34:57.092 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:57.092 "is_configured": true, 00:34:57.092 "data_offset": 2048, 00:34:57.092 "data_size": 63488 00:34:57.092 }, 00:34:57.092 { 00:34:57.092 "name": null, 00:34:57.092 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:57.092 "is_configured": false, 00:34:57.092 "data_offset": 2048, 00:34:57.092 "data_size": 63488 00:34:57.092 } 00:34:57.092 ] 00:34:57.092 }' 00:34:57.092 11:46:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:57.092 11:46:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 
-u 00000000-0000-0000-0000-000000000004 00:34:58.027 [2024-07-13 11:46:32.637701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:34:58.027 [2024-07-13 11:46:32.638160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:58.027 [2024-07-13 11:46:32.638312] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:34:58.027 [2024-07-13 11:46:32.638428] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:58.027 [2024-07-13 11:46:32.639084] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:58.027 [2024-07-13 11:46:32.639258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:34:58.027 [2024-07-13 11:46:32.639452] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:34:58.027 [2024-07-13 11:46:32.639486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:34:58.027 [2024-07-13 11:46:32.639633] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:34:58.027 [2024-07-13 11:46:32.639656] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:58.027 [2024-07-13 11:46:32.639752] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:34:58.027 [2024-07-13 11:46:32.645033] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:34:58.027 [2024-07-13 11:46:32.645056] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:34:58.027 pt4 00:34:58.027 [2024-07-13 11:46:32.645319] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:58.027 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:58.028 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.028 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:58.285 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:58.285 "name": "raid_bdev1", 00:34:58.285 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:34:58.285 "strip_size_kb": 64, 00:34:58.285 "state": "online", 00:34:58.285 "raid_level": "raid5f", 00:34:58.285 
"superblock": true, 00:34:58.285 "num_base_bdevs": 4, 00:34:58.285 "num_base_bdevs_discovered": 3, 00:34:58.285 "num_base_bdevs_operational": 3, 00:34:58.285 "base_bdevs_list": [ 00:34:58.285 { 00:34:58.285 "name": null, 00:34:58.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.285 "is_configured": false, 00:34:58.285 "data_offset": 2048, 00:34:58.285 "data_size": 63488 00:34:58.285 }, 00:34:58.285 { 00:34:58.285 "name": "pt2", 00:34:58.285 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:58.285 "is_configured": true, 00:34:58.285 "data_offset": 2048, 00:34:58.285 "data_size": 63488 00:34:58.285 }, 00:34:58.285 { 00:34:58.285 "name": "pt3", 00:34:58.285 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:58.285 "is_configured": true, 00:34:58.285 "data_offset": 2048, 00:34:58.285 "data_size": 63488 00:34:58.286 }, 00:34:58.286 { 00:34:58.286 "name": "pt4", 00:34:58.286 "uuid": "00000000-0000-0000-0000-000000000004", 00:34:58.286 "is_configured": true, 00:34:58.286 "data_offset": 2048, 00:34:58.286 "data_size": 63488 00:34:58.286 } 00:34:58.286 ] 00:34:58.286 }' 00:34:58.286 11:46:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:58.286 11:46:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.852 11:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:59.110 [2024-07-13 11:46:33.832156] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:59.110 [2024-07-13 11:46:33.832187] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:59.110 [2024-07-13 11:46:33.832261] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:59.110 [2024-07-13 11:46:33.832342] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:59.110 [2024-07-13 11:46:33.832353] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:34:59.110 11:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:59.110 11:46:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:34:59.368 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:34:59.368 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:34:59.368 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:34:59.368 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:34:59.368 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:34:59.625 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:59.883 [2024-07-13 11:46:34.468257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:59.883 [2024-07-13 11:46:34.468696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:59.883 [2024-07-13 11:46:34.468825] vbdev_passthru.c: 680:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000c980 00:34:59.883 [2024-07-13 11:46:34.468966] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:59.883 [2024-07-13 11:46:34.471225] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:59.883 [2024-07-13 11:46:34.471387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:59.883 [2024-07-13 11:46:34.471566] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:59.883 [2024-07-13 11:46:34.471628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:59.883 [2024-07-13 11:46:34.471768] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:59.883 [2024-07-13 11:46:34.471783] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:59.883 [2024-07-13 11:46:34.471806] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state configuring 00:34:59.883 [2024-07-13 11:46:34.471865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:59.883 [2024-07-13 11:46:34.471970] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:59.883 pt1 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:59.883 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.141 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:00.141 "name": "raid_bdev1", 00:35:00.141 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:35:00.141 "strip_size_kb": 64, 00:35:00.141 "state": "configuring", 00:35:00.141 "raid_level": "raid5f", 00:35:00.141 "superblock": true, 00:35:00.141 "num_base_bdevs": 4, 00:35:00.141 "num_base_bdevs_discovered": 2, 00:35:00.141 "num_base_bdevs_operational": 3, 00:35:00.141 "base_bdevs_list": [ 00:35:00.141 { 00:35:00.141 "name": null, 00:35:00.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:00.141 "is_configured": false, 00:35:00.141 
"data_offset": 2048, 00:35:00.141 "data_size": 63488 00:35:00.141 }, 00:35:00.141 { 00:35:00.141 "name": "pt2", 00:35:00.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:00.141 "is_configured": true, 00:35:00.141 "data_offset": 2048, 00:35:00.141 "data_size": 63488 00:35:00.141 }, 00:35:00.141 { 00:35:00.141 "name": "pt3", 00:35:00.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:00.141 "is_configured": true, 00:35:00.141 "data_offset": 2048, 00:35:00.141 "data_size": 63488 00:35:00.141 }, 00:35:00.141 { 00:35:00.142 "name": null, 00:35:00.142 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:00.142 "is_configured": false, 00:35:00.142 "data_offset": 2048, 00:35:00.142 "data_size": 63488 00:35:00.142 } 00:35:00.142 ] 00:35:00.142 }' 00:35:00.142 11:46:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:00.142 11:46:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.708 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:35:00.708 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:00.967 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:35:00.967 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:01.226 [2024-07-13 11:46:35.788531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:01.226 [2024-07-13 11:46:35.788925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:01.226 [2024-07-13 11:46:35.789064] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:35:01.226 [2024-07-13 11:46:35.789225] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:01.226 [2024-07-13 11:46:35.789751] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:01.226 [2024-07-13 11:46:35.789885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:01.226 [2024-07-13 11:46:35.790058] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:01.226 [2024-07-13 11:46:35.790090] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:01.226 [2024-07-13 11:46:35.790225] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:35:01.226 [2024-07-13 11:46:35.790238] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:01.226 [2024-07-13 11:46:35.790334] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:35:01.226 [2024-07-13 11:46:35.795797] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:35:01.226 [2024-07-13 11:46:35.795820] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:35:01.226 [2024-07-13 11:46:35.796038] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:01.226 pt4 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:01.226 11:46:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.226 11:46:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:01.484 11:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:01.484 "name": "raid_bdev1", 00:35:01.484 "uuid": "44912eb9-f196-40ba-8b48-b7c9ddc9e028", 00:35:01.484 "strip_size_kb": 64, 00:35:01.484 "state": "online", 00:35:01.484 "raid_level": "raid5f", 00:35:01.484 "superblock": true, 00:35:01.484 "num_base_bdevs": 4, 00:35:01.484 "num_base_bdevs_discovered": 3, 00:35:01.484 "num_base_bdevs_operational": 3, 00:35:01.484 "base_bdevs_list": [ 00:35:01.484 { 00:35:01.484 "name": null, 00:35:01.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.484 "is_configured": false, 00:35:01.484 "data_offset": 2048, 00:35:01.484 "data_size": 63488 00:35:01.484 }, 00:35:01.484 { 00:35:01.484 "name": "pt2", 00:35:01.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:01.484 "is_configured": true, 00:35:01.484 "data_offset": 2048, 00:35:01.484 "data_size": 63488 00:35:01.484 }, 00:35:01.484 { 00:35:01.484 "name": "pt3", 00:35:01.484 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:01.484 "is_configured": true, 00:35:01.484 "data_offset": 2048, 00:35:01.484 "data_size": 63488 00:35:01.484 }, 00:35:01.484 { 00:35:01.484 "name": "pt4", 00:35:01.484 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:01.484 "is_configured": true, 00:35:01.484 "data_offset": 2048, 00:35:01.484 "data_size": 63488 00:35:01.484 } 00:35:01.484 ] 00:35:01.484 }' 00:35:01.484 11:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:01.484 11:46:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.051 11:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:35:02.051 11:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:02.309 11:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:35:02.309 11:46:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:02.309 11:46:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:35:02.309 [2024-07-13 11:46:37.054880] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 44912eb9-f196-40ba-8b48-b7c9ddc9e028 '!=' 44912eb9-f196-40ba-8b48-b7c9ddc9e028 ']' 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 157661 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 157661 ']' 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # kill -0 157661 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # uname 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157661 00:35:02.568 killing process with pid 157661 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157661' 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@967 -- # kill 157661 00:35:02.568 11:46:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # wait 157661 00:35:02.568 [2024-07-13 11:46:37.088317] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:02.568 [2024-07-13 11:46:37.088395] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:02.568 [2024-07-13 11:46:37.088522] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:02.568 [2024-07-13 11:46:37.088545] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:35:02.826 [2024-07-13 11:46:37.342715] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:03.762 ************************************ 00:35:03.762 END TEST raid5f_superblock_test 00:35:03.762 ************************************ 00:35:03.762 11:46:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:35:03.762 00:35:03.762 real 0m26.155s 00:35:03.762 user 0m49.122s 00:35:03.762 sys 0m2.878s 00:35:03.762 11:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:03.762 11:46:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.762 11:46:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:35:03.762 11:46:38 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:35:03.762 11:46:38 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:35:03.762 11:46:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:35:03.762 11:46:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:03.762 11:46:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:03.762 ************************************ 00:35:03.762 START TEST raid5f_rebuild_test 00:35:03.762 ************************************ 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 false false true 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=158540 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 158540 /var/tmp/spdk-raid.sock 00:35:03.762 11:46:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 158540 ']' 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:03.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:03.762 11:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.762 [2024-07-13 11:46:38.389206] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:03.762 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:03.762 Zero copy mechanism will not be used. 00:35:03.762 [2024-07-13 11:46:38.389402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158540 ] 00:35:04.021 [2024-07-13 11:46:38.568631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.280 [2024-07-13 11:46:38.818681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.280 [2024-07-13 11:46:39.004720] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:04.847 11:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:04.847 11:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:35:04.847 11:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:04.847 11:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:04.847 BaseBdev1_malloc 00:35:04.847 11:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:05.105 [2024-07-13 11:46:39.736397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:05.105 [2024-07-13 11:46:39.736504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:05.105 [2024-07-13 11:46:39.736544] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:35:05.105 [2024-07-13 11:46:39.736566] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:05.105 [2024-07-13 11:46:39.738779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:05.105 [2024-07-13 11:46:39.738825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:05.105 BaseBdev1 00:35:05.105 11:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:35:05.105 11:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:05.364 BaseBdev2_malloc 00:35:05.364 11:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:05.622 [2024-07-13 11:46:40.158139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:05.622 [2024-07-13 11:46:40.158240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:05.622 [2024-07-13 11:46:40.158284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:35:05.622 [2024-07-13 11:46:40.158306] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:05.622 [2024-07-13 11:46:40.160511] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:05.622 [2024-07-13 11:46:40.160558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:05.622 BaseBdev2 00:35:05.622 11:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:05.622 11:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:05.622 BaseBdev3_malloc 00:35:05.622 11:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:05.880 [2024-07-13 11:46:40.550800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:05.880 [2024-07-13 11:46:40.550896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:05.880 [2024-07-13 11:46:40.550934] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:35:05.880 [2024-07-13 11:46:40.550962] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:05.880 [2024-07-13 11:46:40.553124] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:05.880 [2024-07-13 11:46:40.553176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:05.880 BaseBdev3 00:35:05.880 11:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:05.880 11:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:06.139 BaseBdev4_malloc 00:35:06.139 11:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:06.397 [2024-07-13 11:46:40.948222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:06.397 [2024-07-13 11:46:40.948301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:06.397 [2024-07-13 11:46:40.948338] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:06.397 [2024-07-13 11:46:40.948365] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:35:06.397 [2024-07-13 11:46:40.950520] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:06.397 [2024-07-13 11:46:40.950569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:06.397 BaseBdev4 00:35:06.397 11:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:35:06.656 spare_malloc 00:35:06.656 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:06.656 spare_delay 00:35:06.656 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:06.915 [2024-07-13 11:46:41.537652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:06.915 [2024-07-13 11:46:41.537735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:06.915 [2024-07-13 11:46:41.537770] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:35:06.915 [2024-07-13 11:46:41.537802] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:06.915 [2024-07-13 11:46:41.540022] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:06.915 [2024-07-13 11:46:41.540074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:06.915 spare 00:35:06.915 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:35:07.173 [2024-07-13 11:46:41.725752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:07.173 [2024-07-13 11:46:41.727624] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:07.173 [2024-07-13 11:46:41.727712] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:07.173 [2024-07-13 11:46:41.727767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:07.173 [2024-07-13 11:46:41.727851] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:35:07.173 [2024-07-13 11:46:41.727873] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:35:07.173 [2024-07-13 11:46:41.728010] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:07.173 [2024-07-13 11:46:41.733420] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:35:07.173 [2024-07-13 11:46:41.733444] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:35:07.173 [2024-07-13 11:46:41.733630] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.173 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:07.432 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:07.432 "name": "raid_bdev1", 00:35:07.432 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:07.432 "strip_size_kb": 64, 00:35:07.432 "state": "online", 00:35:07.432 "raid_level": "raid5f", 00:35:07.432 "superblock": false, 00:35:07.432 "num_base_bdevs": 4, 00:35:07.432 "num_base_bdevs_discovered": 4, 00:35:07.432 "num_base_bdevs_operational": 4, 00:35:07.432 "base_bdevs_list": [ 00:35:07.432 { 00:35:07.432 "name": "BaseBdev1", 00:35:07.432 "uuid": "7dc967de-ff99-5419-9870-2d477d73384a", 00:35:07.432 "is_configured": true, 00:35:07.432 "data_offset": 0, 00:35:07.432 "data_size": 65536 00:35:07.432 }, 00:35:07.432 { 00:35:07.432 "name": "BaseBdev2", 00:35:07.432 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:07.432 "is_configured": true, 00:35:07.432 "data_offset": 0, 00:35:07.432 "data_size": 65536 00:35:07.432 }, 00:35:07.432 { 00:35:07.432 "name": "BaseBdev3", 00:35:07.432 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:07.432 "is_configured": true, 00:35:07.432 "data_offset": 0, 00:35:07.432 "data_size": 65536 00:35:07.432 }, 00:35:07.432 { 00:35:07.432 "name": "BaseBdev4", 00:35:07.432 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:07.432 "is_configured": true, 00:35:07.432 "data_offset": 0, 00:35:07.432 "data_size": 65536 00:35:07.432 } 00:35:07.432 ] 00:35:07.432 }' 00:35:07.432 11:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:07.432 11:46:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.999 11:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:07.999 11:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:08.261 [2024-07-13 11:46:42.856282] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:08.261 11:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=196608 00:35:08.261 11:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.261 11:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:08.522 11:46:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:08.522 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:08.780 [2024-07-13 11:46:43.308264] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:08.780 /dev/nbd0 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:08.780 1+0 records in 00:35:08.780 1+0 records out 00:35:08.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180634 s, 22.7 MB/s 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:08.780 11:46:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 192 00:35:08.780 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:35:09.131 512+0 records in 00:35:09.131 512+0 records out 00:35:09.131 100663296 bytes (101 MB, 96 MiB) copied, 0.499282 s, 202 MB/s 00:35:09.131 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:09.131 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:09.131 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:09.131 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:09.131 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:09.131 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:09.131 11:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:09.436 [2024-07-13 11:46:44.079082] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:09.436 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:09.732 [2024-07-13 11:46:44.354144] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:09.732 11:46:44 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.732 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:09.990 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:09.990 "name": "raid_bdev1", 00:35:09.990 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:09.990 "strip_size_kb": 64, 00:35:09.990 "state": "online", 00:35:09.990 "raid_level": "raid5f", 00:35:09.990 "superblock": false, 00:35:09.990 "num_base_bdevs": 4, 00:35:09.990 "num_base_bdevs_discovered": 3, 00:35:09.990 "num_base_bdevs_operational": 3, 00:35:09.990 "base_bdevs_list": [ 00:35:09.990 { 00:35:09.990 "name": null, 00:35:09.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.990 "is_configured": false, 00:35:09.990 "data_offset": 0, 00:35:09.990 "data_size": 65536 00:35:09.990 }, 00:35:09.990 { 00:35:09.990 "name": "BaseBdev2", 00:35:09.990 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:09.990 "is_configured": true, 00:35:09.990 "data_offset": 0, 00:35:09.990 "data_size": 65536 00:35:09.991 }, 00:35:09.991 { 00:35:09.991 "name": "BaseBdev3", 00:35:09.991 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:09.991 "is_configured": true, 00:35:09.991 "data_offset": 0, 00:35:09.991 "data_size": 65536 00:35:09.991 }, 00:35:09.991 { 00:35:09.991 "name": "BaseBdev4", 00:35:09.991 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:09.991 "is_configured": true, 00:35:09.991 "data_offset": 0, 00:35:09.991 "data_size": 65536 00:35:09.991 } 00:35:09.991 ] 00:35:09.991 }' 00:35:09.991 11:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:09.991 11:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:10.557 11:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:10.815 [2024-07-13 11:46:45.474351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:10.815 [2024-07-13 11:46:45.484823] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d7d0 00:35:10.815 [2024-07-13 11:46:45.491851] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:10.815 11:46:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:35:11.752 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:11.752 
11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:11.752 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:11.752 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:11.752 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:11.752 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:11.752 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:12.011 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:12.011 "name": "raid_bdev1", 00:35:12.011 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:12.011 "strip_size_kb": 64, 00:35:12.011 "state": "online", 00:35:12.011 "raid_level": "raid5f", 00:35:12.011 "superblock": false, 00:35:12.011 "num_base_bdevs": 4, 00:35:12.011 "num_base_bdevs_discovered": 4, 00:35:12.011 "num_base_bdevs_operational": 4, 00:35:12.011 "process": { 00:35:12.011 "type": "rebuild", 00:35:12.011 "target": "spare", 00:35:12.011 "progress": { 00:35:12.011 "blocks": 21120, 00:35:12.011 "percent": 10 00:35:12.011 } 00:35:12.011 }, 00:35:12.011 "base_bdevs_list": [ 00:35:12.011 { 00:35:12.011 "name": "spare", 00:35:12.011 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:12.011 "is_configured": true, 00:35:12.011 "data_offset": 0, 00:35:12.011 "data_size": 65536 00:35:12.011 }, 00:35:12.011 { 00:35:12.011 "name": "BaseBdev2", 00:35:12.011 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:12.011 "is_configured": true, 00:35:12.011 "data_offset": 0, 00:35:12.011 "data_size": 65536 00:35:12.011 }, 00:35:12.011 { 00:35:12.011 "name": "BaseBdev3", 00:35:12.011 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:12.011 "is_configured": true, 00:35:12.011 "data_offset": 0, 00:35:12.011 "data_size": 65536 00:35:12.011 }, 00:35:12.011 { 00:35:12.011 "name": "BaseBdev4", 00:35:12.011 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:12.011 "is_configured": true, 00:35:12.011 "data_offset": 0, 00:35:12.011 "data_size": 65536 00:35:12.011 } 00:35:12.011 ] 00:35:12.011 }' 00:35:12.011 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:12.011 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:12.011 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:12.270 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:12.270 11:46:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:12.530 [2024-07-13 11:46:47.048934] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:12.530 [2024-07-13 11:46:47.103899] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:12.530 [2024-07-13 11:46:47.103983] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:12.530 [2024-07-13 11:46:47.104005] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:12.530 [2024-07-13 11:46:47.104013] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:12.530 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:12.789 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:12.789 "name": "raid_bdev1", 00:35:12.789 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:12.789 "strip_size_kb": 64, 00:35:12.789 "state": "online", 00:35:12.789 "raid_level": "raid5f", 00:35:12.789 "superblock": false, 00:35:12.789 "num_base_bdevs": 4, 00:35:12.789 "num_base_bdevs_discovered": 3, 00:35:12.789 "num_base_bdevs_operational": 3, 00:35:12.789 "base_bdevs_list": [ 00:35:12.789 { 00:35:12.789 "name": null, 00:35:12.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:12.789 "is_configured": false, 00:35:12.789 "data_offset": 0, 00:35:12.789 "data_size": 65536 00:35:12.789 }, 00:35:12.789 { 00:35:12.789 "name": "BaseBdev2", 00:35:12.789 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:12.789 "is_configured": true, 00:35:12.789 "data_offset": 0, 00:35:12.789 "data_size": 65536 00:35:12.789 }, 00:35:12.789 { 00:35:12.789 "name": "BaseBdev3", 00:35:12.789 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:12.789 "is_configured": true, 00:35:12.789 "data_offset": 0, 00:35:12.789 "data_size": 65536 00:35:12.789 }, 00:35:12.789 { 00:35:12.789 "name": "BaseBdev4", 00:35:12.789 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:12.789 "is_configured": true, 00:35:12.789 "data_offset": 0, 00:35:12.789 "data_size": 65536 00:35:12.789 } 00:35:12.789 ] 00:35:12.789 }' 00:35:12.789 11:46:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:12.789 11:46:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:13.356 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:13.356 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:13.356 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:13.356 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:13.356 11:46:48 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:13.356 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:13.356 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:13.614 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:13.614 "name": "raid_bdev1", 00:35:13.614 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:13.614 "strip_size_kb": 64, 00:35:13.614 "state": "online", 00:35:13.614 "raid_level": "raid5f", 00:35:13.614 "superblock": false, 00:35:13.614 "num_base_bdevs": 4, 00:35:13.614 "num_base_bdevs_discovered": 3, 00:35:13.614 "num_base_bdevs_operational": 3, 00:35:13.614 "base_bdevs_list": [ 00:35:13.614 { 00:35:13.614 "name": null, 00:35:13.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:13.614 "is_configured": false, 00:35:13.614 "data_offset": 0, 00:35:13.614 "data_size": 65536 00:35:13.614 }, 00:35:13.614 { 00:35:13.614 "name": "BaseBdev2", 00:35:13.614 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:13.614 "is_configured": true, 00:35:13.614 "data_offset": 0, 00:35:13.614 "data_size": 65536 00:35:13.614 }, 00:35:13.614 { 00:35:13.614 "name": "BaseBdev3", 00:35:13.614 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:13.614 "is_configured": true, 00:35:13.614 "data_offset": 0, 00:35:13.614 "data_size": 65536 00:35:13.614 }, 00:35:13.614 { 00:35:13.614 "name": "BaseBdev4", 00:35:13.614 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:13.614 "is_configured": true, 00:35:13.614 "data_offset": 0, 00:35:13.614 "data_size": 65536 00:35:13.614 } 00:35:13.614 ] 00:35:13.614 }' 00:35:13.614 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:13.614 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:13.614 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:13.614 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:13.614 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:13.873 [2024-07-13 11:46:48.531322] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:13.873 [2024-07-13 11:46:48.541054] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d970 00:35:13.873 [2024-07-13 11:46:48.548019] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:13.873 11:46:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:14.806 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:14.806 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:14.806 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:14.806 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:14.806 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:14.806 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:14.806 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.065 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:15.065 "name": "raid_bdev1", 00:35:15.065 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:15.065 "strip_size_kb": 64, 00:35:15.065 "state": "online", 00:35:15.065 "raid_level": "raid5f", 00:35:15.065 "superblock": false, 00:35:15.065 "num_base_bdevs": 4, 00:35:15.065 "num_base_bdevs_discovered": 4, 00:35:15.065 "num_base_bdevs_operational": 4, 00:35:15.065 "process": { 00:35:15.065 "type": "rebuild", 00:35:15.065 "target": "spare", 00:35:15.065 "progress": { 00:35:15.065 "blocks": 21120, 00:35:15.065 "percent": 10 00:35:15.065 } 00:35:15.065 }, 00:35:15.065 "base_bdevs_list": [ 00:35:15.065 { 00:35:15.065 "name": "spare", 00:35:15.065 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:15.065 "is_configured": true, 00:35:15.065 "data_offset": 0, 00:35:15.065 "data_size": 65536 00:35:15.065 }, 00:35:15.065 { 00:35:15.065 "name": "BaseBdev2", 00:35:15.065 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:15.065 "is_configured": true, 00:35:15.065 "data_offset": 0, 00:35:15.065 "data_size": 65536 00:35:15.065 }, 00:35:15.065 { 00:35:15.065 "name": "BaseBdev3", 00:35:15.065 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:15.065 "is_configured": true, 00:35:15.065 "data_offset": 0, 00:35:15.065 "data_size": 65536 00:35:15.065 }, 00:35:15.065 { 00:35:15.065 "name": "BaseBdev4", 00:35:15.065 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:15.065 "is_configured": true, 00:35:15.065 "data_offset": 0, 00:35:15.065 "data_size": 65536 00:35:15.065 } 00:35:15.065 ] 00:35:15.065 }' 00:35:15.065 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:15.065 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:15.065 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1260 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:35:15.323 11:46:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.581 11:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:15.581 "name": "raid_bdev1", 00:35:15.581 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:15.581 "strip_size_kb": 64, 00:35:15.581 "state": "online", 00:35:15.581 "raid_level": "raid5f", 00:35:15.581 "superblock": false, 00:35:15.581 "num_base_bdevs": 4, 00:35:15.581 "num_base_bdevs_discovered": 4, 00:35:15.581 "num_base_bdevs_operational": 4, 00:35:15.581 "process": { 00:35:15.581 "type": "rebuild", 00:35:15.581 "target": "spare", 00:35:15.581 "progress": { 00:35:15.581 "blocks": 28800, 00:35:15.581 "percent": 14 00:35:15.581 } 00:35:15.581 }, 00:35:15.581 "base_bdevs_list": [ 00:35:15.581 { 00:35:15.581 "name": "spare", 00:35:15.581 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:15.581 "is_configured": true, 00:35:15.581 "data_offset": 0, 00:35:15.581 "data_size": 65536 00:35:15.581 }, 00:35:15.581 { 00:35:15.581 "name": "BaseBdev2", 00:35:15.581 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:15.581 "is_configured": true, 00:35:15.581 "data_offset": 0, 00:35:15.581 "data_size": 65536 00:35:15.581 }, 00:35:15.581 { 00:35:15.581 "name": "BaseBdev3", 00:35:15.581 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:15.581 "is_configured": true, 00:35:15.581 "data_offset": 0, 00:35:15.581 "data_size": 65536 00:35:15.581 }, 00:35:15.581 { 00:35:15.581 "name": "BaseBdev4", 00:35:15.581 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:15.581 "is_configured": true, 00:35:15.581 "data_offset": 0, 00:35:15.581 "data_size": 65536 00:35:15.581 } 00:35:15.581 ] 00:35:15.581 }' 00:35:15.581 11:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:15.581 11:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:15.581 11:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:15.581 11:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:15.581 11:46:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:16.514 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:16.514 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:16.514 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:16.514 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:16.514 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:16.514 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:16.514 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:16.514 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:16.772 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:16.772 "name": "raid_bdev1", 00:35:16.772 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:16.772 "strip_size_kb": 64, 00:35:16.772 "state": "online", 
00:35:16.772 "raid_level": "raid5f", 00:35:16.772 "superblock": false, 00:35:16.772 "num_base_bdevs": 4, 00:35:16.772 "num_base_bdevs_discovered": 4, 00:35:16.772 "num_base_bdevs_operational": 4, 00:35:16.772 "process": { 00:35:16.772 "type": "rebuild", 00:35:16.772 "target": "spare", 00:35:16.772 "progress": { 00:35:16.772 "blocks": 55680, 00:35:16.772 "percent": 28 00:35:16.772 } 00:35:16.772 }, 00:35:16.772 "base_bdevs_list": [ 00:35:16.772 { 00:35:16.772 "name": "spare", 00:35:16.772 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:16.772 "is_configured": true, 00:35:16.772 "data_offset": 0, 00:35:16.772 "data_size": 65536 00:35:16.772 }, 00:35:16.772 { 00:35:16.772 "name": "BaseBdev2", 00:35:16.772 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:16.772 "is_configured": true, 00:35:16.772 "data_offset": 0, 00:35:16.772 "data_size": 65536 00:35:16.772 }, 00:35:16.772 { 00:35:16.772 "name": "BaseBdev3", 00:35:16.772 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:16.772 "is_configured": true, 00:35:16.772 "data_offset": 0, 00:35:16.772 "data_size": 65536 00:35:16.772 }, 00:35:16.772 { 00:35:16.772 "name": "BaseBdev4", 00:35:16.772 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:16.772 "is_configured": true, 00:35:16.772 "data_offset": 0, 00:35:16.772 "data_size": 65536 00:35:16.772 } 00:35:16.772 ] 00:35:16.772 }' 00:35:16.772 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:17.030 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:17.030 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:17.030 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:17.030 11:46:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:17.964 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:17.964 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:17.964 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:17.964 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:17.964 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:17.964 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:17.964 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:17.964 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:18.238 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:18.238 "name": "raid_bdev1", 00:35:18.238 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:18.239 "strip_size_kb": 64, 00:35:18.239 "state": "online", 00:35:18.239 "raid_level": "raid5f", 00:35:18.239 "superblock": false, 00:35:18.239 "num_base_bdevs": 4, 00:35:18.239 "num_base_bdevs_discovered": 4, 00:35:18.239 "num_base_bdevs_operational": 4, 00:35:18.239 "process": { 00:35:18.239 "type": "rebuild", 00:35:18.239 "target": "spare", 00:35:18.239 "progress": { 00:35:18.239 "blocks": 80640, 00:35:18.239 "percent": 41 00:35:18.239 } 00:35:18.239 }, 00:35:18.239 
"base_bdevs_list": [ 00:35:18.239 { 00:35:18.239 "name": "spare", 00:35:18.239 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:18.239 "is_configured": true, 00:35:18.239 "data_offset": 0, 00:35:18.239 "data_size": 65536 00:35:18.239 }, 00:35:18.239 { 00:35:18.239 "name": "BaseBdev2", 00:35:18.239 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:18.239 "is_configured": true, 00:35:18.239 "data_offset": 0, 00:35:18.239 "data_size": 65536 00:35:18.239 }, 00:35:18.239 { 00:35:18.239 "name": "BaseBdev3", 00:35:18.239 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:18.239 "is_configured": true, 00:35:18.239 "data_offset": 0, 00:35:18.239 "data_size": 65536 00:35:18.239 }, 00:35:18.239 { 00:35:18.239 "name": "BaseBdev4", 00:35:18.239 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:18.239 "is_configured": true, 00:35:18.239 "data_offset": 0, 00:35:18.239 "data_size": 65536 00:35:18.239 } 00:35:18.239 ] 00:35:18.239 }' 00:35:18.239 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:18.239 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:18.239 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:18.239 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:18.239 11:46:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:19.615 11:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:19.615 11:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:19.615 11:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:19.615 11:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:19.615 11:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:19.615 11:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:19.615 11:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:19.615 11:46:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:19.615 11:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:19.615 "name": "raid_bdev1", 00:35:19.615 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:19.615 "strip_size_kb": 64, 00:35:19.615 "state": "online", 00:35:19.615 "raid_level": "raid5f", 00:35:19.615 "superblock": false, 00:35:19.615 "num_base_bdevs": 4, 00:35:19.615 "num_base_bdevs_discovered": 4, 00:35:19.615 "num_base_bdevs_operational": 4, 00:35:19.615 "process": { 00:35:19.615 "type": "rebuild", 00:35:19.615 "target": "spare", 00:35:19.615 "progress": { 00:35:19.615 "blocks": 107520, 00:35:19.615 "percent": 54 00:35:19.615 } 00:35:19.615 }, 00:35:19.615 "base_bdevs_list": [ 00:35:19.615 { 00:35:19.615 "name": "spare", 00:35:19.615 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:19.615 "is_configured": true, 00:35:19.615 "data_offset": 0, 00:35:19.615 "data_size": 65536 00:35:19.615 }, 00:35:19.615 { 00:35:19.615 "name": "BaseBdev2", 00:35:19.615 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:19.615 "is_configured": true, 00:35:19.615 "data_offset": 0, 
00:35:19.615 "data_size": 65536 00:35:19.615 }, 00:35:19.615 { 00:35:19.615 "name": "BaseBdev3", 00:35:19.615 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:19.615 "is_configured": true, 00:35:19.615 "data_offset": 0, 00:35:19.615 "data_size": 65536 00:35:19.615 }, 00:35:19.615 { 00:35:19.615 "name": "BaseBdev4", 00:35:19.615 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:19.615 "is_configured": true, 00:35:19.615 "data_offset": 0, 00:35:19.615 "data_size": 65536 00:35:19.615 } 00:35:19.615 ] 00:35:19.615 }' 00:35:19.615 11:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:19.615 11:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:19.615 11:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:19.615 11:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:19.615 11:46:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:20.991 "name": "raid_bdev1", 00:35:20.991 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:20.991 "strip_size_kb": 64, 00:35:20.991 "state": "online", 00:35:20.991 "raid_level": "raid5f", 00:35:20.991 "superblock": false, 00:35:20.991 "num_base_bdevs": 4, 00:35:20.991 "num_base_bdevs_discovered": 4, 00:35:20.991 "num_base_bdevs_operational": 4, 00:35:20.991 "process": { 00:35:20.991 "type": "rebuild", 00:35:20.991 "target": "spare", 00:35:20.991 "progress": { 00:35:20.991 "blocks": 132480, 00:35:20.991 "percent": 67 00:35:20.991 } 00:35:20.991 }, 00:35:20.991 "base_bdevs_list": [ 00:35:20.991 { 00:35:20.991 "name": "spare", 00:35:20.991 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:20.991 "is_configured": true, 00:35:20.991 "data_offset": 0, 00:35:20.991 "data_size": 65536 00:35:20.991 }, 00:35:20.991 { 00:35:20.991 "name": "BaseBdev2", 00:35:20.991 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:20.991 "is_configured": true, 00:35:20.991 "data_offset": 0, 00:35:20.991 "data_size": 65536 00:35:20.991 }, 00:35:20.991 { 00:35:20.991 "name": "BaseBdev3", 00:35:20.991 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:20.991 "is_configured": true, 00:35:20.991 "data_offset": 0, 00:35:20.991 "data_size": 65536 00:35:20.991 }, 00:35:20.991 { 00:35:20.991 "name": "BaseBdev4", 00:35:20.991 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:20.991 "is_configured": true, 
00:35:20.991 "data_offset": 0, 00:35:20.991 "data_size": 65536 00:35:20.991 } 00:35:20.991 ] 00:35:20.991 }' 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:20.991 11:46:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:21.926 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:21.926 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:21.926 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:21.926 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:21.926 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:21.926 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:21.926 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:21.926 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.185 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:22.185 "name": "raid_bdev1", 00:35:22.185 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:22.185 "strip_size_kb": 64, 00:35:22.185 "state": "online", 00:35:22.185 "raid_level": "raid5f", 00:35:22.185 "superblock": false, 00:35:22.185 "num_base_bdevs": 4, 00:35:22.185 "num_base_bdevs_discovered": 4, 00:35:22.185 "num_base_bdevs_operational": 4, 00:35:22.185 "process": { 00:35:22.185 "type": "rebuild", 00:35:22.185 "target": "spare", 00:35:22.185 "progress": { 00:35:22.185 "blocks": 157440, 00:35:22.185 "percent": 80 00:35:22.185 } 00:35:22.185 }, 00:35:22.185 "base_bdevs_list": [ 00:35:22.185 { 00:35:22.185 "name": "spare", 00:35:22.185 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:22.185 "is_configured": true, 00:35:22.185 "data_offset": 0, 00:35:22.185 "data_size": 65536 00:35:22.185 }, 00:35:22.185 { 00:35:22.185 "name": "BaseBdev2", 00:35:22.185 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:22.185 "is_configured": true, 00:35:22.185 "data_offset": 0, 00:35:22.185 "data_size": 65536 00:35:22.185 }, 00:35:22.185 { 00:35:22.185 "name": "BaseBdev3", 00:35:22.185 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:22.185 "is_configured": true, 00:35:22.185 "data_offset": 0, 00:35:22.185 "data_size": 65536 00:35:22.185 }, 00:35:22.185 { 00:35:22.185 "name": "BaseBdev4", 00:35:22.185 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:22.185 "is_configured": true, 00:35:22.185 "data_offset": 0, 00:35:22.185 "data_size": 65536 00:35:22.185 } 00:35:22.185 ] 00:35:22.185 }' 00:35:22.185 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:22.444 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:22.444 11:46:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # 
jq -r '.process.target // "none"' 00:35:22.444 11:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:22.444 11:46:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:23.378 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:23.378 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:23.378 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:23.378 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:23.378 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:23.378 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:23.378 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:23.378 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.636 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:23.636 "name": "raid_bdev1", 00:35:23.636 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:23.636 "strip_size_kb": 64, 00:35:23.636 "state": "online", 00:35:23.636 "raid_level": "raid5f", 00:35:23.636 "superblock": false, 00:35:23.636 "num_base_bdevs": 4, 00:35:23.636 "num_base_bdevs_discovered": 4, 00:35:23.636 "num_base_bdevs_operational": 4, 00:35:23.636 "process": { 00:35:23.636 "type": "rebuild", 00:35:23.636 "target": "spare", 00:35:23.636 "progress": { 00:35:23.636 "blocks": 184320, 00:35:23.636 "percent": 93 00:35:23.636 } 00:35:23.636 }, 00:35:23.636 "base_bdevs_list": [ 00:35:23.636 { 00:35:23.636 "name": "spare", 00:35:23.636 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:23.636 "is_configured": true, 00:35:23.636 "data_offset": 0, 00:35:23.636 "data_size": 65536 00:35:23.636 }, 00:35:23.636 { 00:35:23.636 "name": "BaseBdev2", 00:35:23.636 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:23.636 "is_configured": true, 00:35:23.636 "data_offset": 0, 00:35:23.636 "data_size": 65536 00:35:23.636 }, 00:35:23.636 { 00:35:23.636 "name": "BaseBdev3", 00:35:23.636 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:23.636 "is_configured": true, 00:35:23.636 "data_offset": 0, 00:35:23.636 "data_size": 65536 00:35:23.636 }, 00:35:23.636 { 00:35:23.636 "name": "BaseBdev4", 00:35:23.636 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:23.636 "is_configured": true, 00:35:23.636 "data_offset": 0, 00:35:23.636 "data_size": 65536 00:35:23.636 } 00:35:23.636 ] 00:35:23.636 }' 00:35:23.636 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:23.636 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:23.636 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:23.636 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:23.636 11:46:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:24.202 [2024-07-13 11:46:58.922550] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:24.202 [2024-07-13 11:46:58.922618] 
bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:24.202 [2024-07-13 11:46:58.922681] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:24.768 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:24.768 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:24.768 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:24.768 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:24.768 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:24.768 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:24.768 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:24.768 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.026 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:25.026 "name": "raid_bdev1", 00:35:25.026 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:25.026 "strip_size_kb": 64, 00:35:25.026 "state": "online", 00:35:25.026 "raid_level": "raid5f", 00:35:25.026 "superblock": false, 00:35:25.026 "num_base_bdevs": 4, 00:35:25.026 "num_base_bdevs_discovered": 4, 00:35:25.026 "num_base_bdevs_operational": 4, 00:35:25.026 "base_bdevs_list": [ 00:35:25.026 { 00:35:25.026 "name": "spare", 00:35:25.026 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:25.026 "is_configured": true, 00:35:25.026 "data_offset": 0, 00:35:25.026 "data_size": 65536 00:35:25.026 }, 00:35:25.026 { 00:35:25.026 "name": "BaseBdev2", 00:35:25.026 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:25.026 "is_configured": true, 00:35:25.026 "data_offset": 0, 00:35:25.026 "data_size": 65536 00:35:25.026 }, 00:35:25.026 { 00:35:25.026 "name": "BaseBdev3", 00:35:25.026 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:25.026 "is_configured": true, 00:35:25.026 "data_offset": 0, 00:35:25.026 "data_size": 65536 00:35:25.026 }, 00:35:25.026 { 00:35:25.026 "name": "BaseBdev4", 00:35:25.026 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:25.026 "is_configured": true, 00:35:25.026 "data_offset": 0, 00:35:25.026 "data_size": 65536 00:35:25.026 } 00:35:25.027 ] 00:35:25.027 }' 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@184 -- # local target=none 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:25.027 11:46:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.285 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:25.285 "name": "raid_bdev1", 00:35:25.285 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:25.285 "strip_size_kb": 64, 00:35:25.285 "state": "online", 00:35:25.285 "raid_level": "raid5f", 00:35:25.285 "superblock": false, 00:35:25.285 "num_base_bdevs": 4, 00:35:25.285 "num_base_bdevs_discovered": 4, 00:35:25.285 "num_base_bdevs_operational": 4, 00:35:25.285 "base_bdevs_list": [ 00:35:25.285 { 00:35:25.285 "name": "spare", 00:35:25.285 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:25.285 "is_configured": true, 00:35:25.285 "data_offset": 0, 00:35:25.285 "data_size": 65536 00:35:25.285 }, 00:35:25.285 { 00:35:25.285 "name": "BaseBdev2", 00:35:25.285 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:25.285 "is_configured": true, 00:35:25.285 "data_offset": 0, 00:35:25.285 "data_size": 65536 00:35:25.285 }, 00:35:25.285 { 00:35:25.285 "name": "BaseBdev3", 00:35:25.285 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:25.285 "is_configured": true, 00:35:25.285 "data_offset": 0, 00:35:25.285 "data_size": 65536 00:35:25.285 }, 00:35:25.285 { 00:35:25.285 "name": "BaseBdev4", 00:35:25.285 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:25.285 "is_configured": true, 00:35:25.285 "data_offset": 0, 00:35:25.285 "data_size": 65536 00:35:25.285 } 00:35:25.285 ] 00:35:25.285 }' 00:35:25.285 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:25.543 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.801 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:25.801 "name": "raid_bdev1", 00:35:25.801 "uuid": "952972ed-6dfc-4e52-8914-003855418d79", 00:35:25.801 "strip_size_kb": 64, 00:35:25.801 "state": "online", 00:35:25.801 "raid_level": "raid5f", 00:35:25.801 "superblock": false, 00:35:25.801 "num_base_bdevs": 4, 00:35:25.801 "num_base_bdevs_discovered": 4, 00:35:25.801 "num_base_bdevs_operational": 4, 00:35:25.801 "base_bdevs_list": [ 00:35:25.801 { 00:35:25.801 "name": "spare", 00:35:25.801 "uuid": "1dcee845-da63-56ba-af0b-dc5086de797f", 00:35:25.801 "is_configured": true, 00:35:25.801 "data_offset": 0, 00:35:25.801 "data_size": 65536 00:35:25.801 }, 00:35:25.801 { 00:35:25.801 "name": "BaseBdev2", 00:35:25.801 "uuid": "4d9e8518-1ef0-5119-ad56-a81d9195b491", 00:35:25.801 "is_configured": true, 00:35:25.801 "data_offset": 0, 00:35:25.801 "data_size": 65536 00:35:25.801 }, 00:35:25.801 { 00:35:25.801 "name": "BaseBdev3", 00:35:25.801 "uuid": "4fd337fe-e59b-5454-a610-b917b50c48f3", 00:35:25.801 "is_configured": true, 00:35:25.801 "data_offset": 0, 00:35:25.801 "data_size": 65536 00:35:25.801 }, 00:35:25.801 { 00:35:25.801 "name": "BaseBdev4", 00:35:25.801 "uuid": "a5184fd2-29ca-5de2-b9f2-04ef57246825", 00:35:25.801 "is_configured": true, 00:35:25.801 "data_offset": 0, 00:35:25.801 "data_size": 65536 00:35:25.801 } 00:35:25.801 ] 00:35:25.801 }' 00:35:25.801 11:47:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:25.801 11:47:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:26.368 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:26.625 [2024-07-13 11:47:01.306949] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:26.625 [2024-07-13 11:47:01.306980] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:26.625 [2024-07-13 11:47:01.307056] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:26.625 [2024-07-13 11:47:01.307156] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:26.625 [2024-07-13 11:47:01.307171] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:35:26.625 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.625 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:26.883 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:27.142 /dev/nbd0 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:27.142 1+0 records in 00:35:27.142 1+0 records out 00:35:27.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413519 s, 9.9 MB/s 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:27.142 11:47:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:27.401 /dev/nbd1 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@867 -- # local i 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # break 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:27.401 1+0 records in 00:35:27.401 1+0 records out 00:35:27.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522905 s, 7.8 MB/s 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:27.401 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:35:27.659 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:27.659 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:27.659 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:27.659 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:27.659 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:27.659 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:27.659 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:27.917 11:47:02 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:27.917 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 158540 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 158540 ']' 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 158540 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 158540 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 158540' 00:35:28.176 killing process with pid 158540 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@967 -- # kill 158540 00:35:28.176 Received shutdown signal, test time was about 60.000000 seconds 00:35:28.176 00:35:28.176 Latency(us) 00:35:28.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.176 =================================================================================================================== 00:35:28.176 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:28.176 [2024-07-13 11:47:02.870633] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:28.176 11:47:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # wait 158540 00:35:28.742 [2024-07-13 11:47:03.202318] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:29.677 ************************************ 00:35:29.677 END TEST raid5f_rebuild_test 00:35:29.677 ************************************ 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:35:29.677 00:35:29.677 real 0m25.892s 00:35:29.677 user 0m38.055s 00:35:29.677 sys 0m2.744s 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.677 11:47:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:35:29.677 11:47:04 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:35:29.677 11:47:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:35:29.677 11:47:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:29.677 11:47:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:29.677 ************************************ 00:35:29.677 START TEST raid5f_rebuild_test_sb 00:35:29.677 ************************************ 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid5f 4 true false true 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:29.677 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=159215 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 159215 /var/tmp/spdk-raid.sock 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 159215 ']' 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:29.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:29.678 11:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:29.678 [2024-07-13 11:47:04.350407] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:35:29.678 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:29.678 Zero copy mechanism will not be used. 
00:35:29.678 [2024-07-13 11:47:04.350610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159215 ] 00:35:29.936 [2024-07-13 11:47:04.519118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.194 [2024-07-13 11:47:04.703482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.194 [2024-07-13 11:47:04.888727] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:30.761 11:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:30.761 11:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:35:30.761 11:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:30.761 11:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:30.761 BaseBdev1_malloc 00:35:30.761 11:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:31.019 [2024-07-13 11:47:05.664355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:31.019 [2024-07-13 11:47:05.664461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:31.019 [2024-07-13 11:47:05.664503] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:35:31.019 [2024-07-13 11:47:05.664525] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:31.019 [2024-07-13 11:47:05.666755] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:31.019 [2024-07-13 11:47:05.666800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:31.019 BaseBdev1 00:35:31.019 11:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:31.019 11:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:31.276 BaseBdev2_malloc 00:35:31.276 11:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:31.535 [2024-07-13 11:47:06.070493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:31.535 [2024-07-13 11:47:06.070581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:31.535 [2024-07-13 11:47:06.070617] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:35:31.535 [2024-07-13 11:47:06.070650] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:31.535 [2024-07-13 11:47:06.072805] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:31.535 [2024-07-13 11:47:06.072852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:31.535 BaseBdev2 00:35:31.535 11:47:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:31.535 11:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:31.792 BaseBdev3_malloc 00:35:31.792 11:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:32.050 [2024-07-13 11:47:06.580073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:32.050 [2024-07-13 11:47:06.580160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:32.050 [2024-07-13 11:47:06.580194] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:35:32.050 [2024-07-13 11:47:06.580218] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:32.050 [2024-07-13 11:47:06.581993] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:32.050 [2024-07-13 11:47:06.582040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:32.050 BaseBdev3 00:35:32.050 11:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:32.050 11:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:32.307 BaseBdev4_malloc 00:35:32.307 11:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:32.307 [2024-07-13 11:47:07.028460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:32.307 [2024-07-13 11:47:07.028535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:32.307 [2024-07-13 11:47:07.028568] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:32.307 [2024-07-13 11:47:07.028594] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:32.307 [2024-07-13 11:47:07.030779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:32.307 [2024-07-13 11:47:07.030831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:32.307 BaseBdev4 00:35:32.307 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:35:32.572 spare_malloc 00:35:32.572 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:32.830 spare_delay 00:35:32.830 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:33.087 [2024-07-13 11:47:07.609748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:33.087 [2024-07-13 11:47:07.609830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:33.087 [2024-07-13 11:47:07.609862] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:35:33.087 [2024-07-13 11:47:07.609892] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:33.087 [2024-07-13 11:47:07.612222] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:33.087 [2024-07-13 11:47:07.612273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:33.087 spare 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:35:33.087 [2024-07-13 11:47:07.785839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:33.087 [2024-07-13 11:47:07.787733] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:33.087 [2024-07-13 11:47:07.787818] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:33.087 [2024-07-13 11:47:07.787874] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:33.087 [2024-07-13 11:47:07.788082] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:35:33.087 [2024-07-13 11:47:07.788105] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:33.087 [2024-07-13 11:47:07.788216] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:35:33.087 [2024-07-13 11:47:07.793614] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:35:33.087 [2024-07-13 11:47:07.793637] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:35:33.087 [2024-07-13 11:47:07.793792] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.087 11:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:33.344 11:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:33.344 
"name": "raid_bdev1", 00:35:33.344 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:33.344 "strip_size_kb": 64, 00:35:33.344 "state": "online", 00:35:33.344 "raid_level": "raid5f", 00:35:33.344 "superblock": true, 00:35:33.344 "num_base_bdevs": 4, 00:35:33.344 "num_base_bdevs_discovered": 4, 00:35:33.344 "num_base_bdevs_operational": 4, 00:35:33.344 "base_bdevs_list": [ 00:35:33.344 { 00:35:33.344 "name": "BaseBdev1", 00:35:33.344 "uuid": "6576d979-c6cb-5e71-a3a8-ec270e3a1a2b", 00:35:33.344 "is_configured": true, 00:35:33.344 "data_offset": 2048, 00:35:33.344 "data_size": 63488 00:35:33.344 }, 00:35:33.344 { 00:35:33.344 "name": "BaseBdev2", 00:35:33.344 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:33.344 "is_configured": true, 00:35:33.344 "data_offset": 2048, 00:35:33.344 "data_size": 63488 00:35:33.344 }, 00:35:33.344 { 00:35:33.344 "name": "BaseBdev3", 00:35:33.344 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:33.344 "is_configured": true, 00:35:33.344 "data_offset": 2048, 00:35:33.344 "data_size": 63488 00:35:33.344 }, 00:35:33.344 { 00:35:33.344 "name": "BaseBdev4", 00:35:33.344 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:33.344 "is_configured": true, 00:35:33.344 "data_offset": 2048, 00:35:33.344 "data_size": 63488 00:35:33.344 } 00:35:33.344 ] 00:35:33.344 }' 00:35:33.344 11:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:33.344 11:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:33.909 11:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:33.909 11:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:34.168 [2024-07-13 11:47:08.836494] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:34.168 11:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=190464 00:35:34.168 11:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:34.168 11:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:35:34.426 11:47:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:34.426 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:34.685 [2024-07-13 11:47:09.288488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:34.685 /dev/nbd0 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:34.685 1+0 records in 00:35:34.685 1+0 records out 00:35:34.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126471 s, 3.2 MB/s 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:35:34.685 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:35:34.686 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 192 00:35:34.686 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:35:35.254 496+0 records in 00:35:35.254 496+0 records out 00:35:35.254 97517568 bytes (98 MB, 93 MiB) copied, 0.47638 s, 205 MB/s 00:35:35.254 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:35.254 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:35:35.254 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:35.254 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:35.254 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:35:35.254 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:35.254 11:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:35.512 [2024-07-13 11:47:10.038330] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:35.512 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:35.771 [2024-07-13 11:47:10.397338] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:35:35.771 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.030 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:36.030 "name": "raid_bdev1", 00:35:36.030 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:36.030 "strip_size_kb": 64, 00:35:36.030 "state": "online", 00:35:36.030 "raid_level": "raid5f", 00:35:36.030 "superblock": true, 00:35:36.030 "num_base_bdevs": 4, 00:35:36.030 "num_base_bdevs_discovered": 3, 00:35:36.030 "num_base_bdevs_operational": 3, 00:35:36.030 "base_bdevs_list": [ 00:35:36.030 { 00:35:36.030 "name": null, 00:35:36.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:36.030 "is_configured": false, 00:35:36.030 "data_offset": 2048, 00:35:36.030 "data_size": 63488 00:35:36.030 }, 00:35:36.030 { 00:35:36.030 "name": "BaseBdev2", 00:35:36.030 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:36.030 "is_configured": true, 00:35:36.030 "data_offset": 2048, 00:35:36.030 "data_size": 63488 00:35:36.030 }, 00:35:36.030 { 00:35:36.030 "name": "BaseBdev3", 00:35:36.030 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:36.030 "is_configured": true, 00:35:36.030 "data_offset": 2048, 00:35:36.030 "data_size": 63488 00:35:36.030 }, 00:35:36.030 { 00:35:36.031 "name": "BaseBdev4", 00:35:36.031 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:36.031 "is_configured": true, 00:35:36.031 "data_offset": 2048, 00:35:36.031 "data_size": 63488 00:35:36.031 } 00:35:36.031 ] 00:35:36.031 }' 00:35:36.031 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:36.031 11:47:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:36.599 11:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:36.858 [2024-07-13 11:47:11.501485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:36.858 [2024-07-13 11:47:11.511871] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cad0 00:35:36.858 [2024-07-13 11:47:11.518811] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:36.858 11:47:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:35:37.793 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:37.793 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:37.793 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:37.793 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:37.793 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:37.793 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:37.793 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:38.052 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:38.052 "name": "raid_bdev1", 00:35:38.052 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:38.052 "strip_size_kb": 64, 
00:35:38.052 "state": "online", 00:35:38.052 "raid_level": "raid5f", 00:35:38.052 "superblock": true, 00:35:38.052 "num_base_bdevs": 4, 00:35:38.052 "num_base_bdevs_discovered": 4, 00:35:38.052 "num_base_bdevs_operational": 4, 00:35:38.052 "process": { 00:35:38.052 "type": "rebuild", 00:35:38.052 "target": "spare", 00:35:38.052 "progress": { 00:35:38.052 "blocks": 23040, 00:35:38.052 "percent": 12 00:35:38.052 } 00:35:38.052 }, 00:35:38.052 "base_bdevs_list": [ 00:35:38.052 { 00:35:38.052 "name": "spare", 00:35:38.052 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:38.052 "is_configured": true, 00:35:38.052 "data_offset": 2048, 00:35:38.052 "data_size": 63488 00:35:38.052 }, 00:35:38.052 { 00:35:38.052 "name": "BaseBdev2", 00:35:38.052 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:38.052 "is_configured": true, 00:35:38.052 "data_offset": 2048, 00:35:38.052 "data_size": 63488 00:35:38.052 }, 00:35:38.052 { 00:35:38.052 "name": "BaseBdev3", 00:35:38.052 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:38.052 "is_configured": true, 00:35:38.052 "data_offset": 2048, 00:35:38.052 "data_size": 63488 00:35:38.052 }, 00:35:38.052 { 00:35:38.052 "name": "BaseBdev4", 00:35:38.052 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:38.052 "is_configured": true, 00:35:38.052 "data_offset": 2048, 00:35:38.052 "data_size": 63488 00:35:38.052 } 00:35:38.052 ] 00:35:38.052 }' 00:35:38.052 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:38.311 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:38.311 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:38.311 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:38.311 11:47:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:38.569 [2024-07-13 11:47:13.095797] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:38.569 [2024-07-13 11:47:13.130704] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:38.569 [2024-07-13 11:47:13.130781] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:38.569 [2024-07-13 11:47:13.130803] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:38.569 [2024-07-13 11:47:13.130811] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:38.569 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:38.827 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:38.827 "name": "raid_bdev1", 00:35:38.827 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:38.827 "strip_size_kb": 64, 00:35:38.827 "state": "online", 00:35:38.827 "raid_level": "raid5f", 00:35:38.827 "superblock": true, 00:35:38.827 "num_base_bdevs": 4, 00:35:38.827 "num_base_bdevs_discovered": 3, 00:35:38.827 "num_base_bdevs_operational": 3, 00:35:38.827 "base_bdevs_list": [ 00:35:38.827 { 00:35:38.827 "name": null, 00:35:38.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:38.827 "is_configured": false, 00:35:38.827 "data_offset": 2048, 00:35:38.827 "data_size": 63488 00:35:38.827 }, 00:35:38.827 { 00:35:38.827 "name": "BaseBdev2", 00:35:38.827 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:38.827 "is_configured": true, 00:35:38.827 "data_offset": 2048, 00:35:38.827 "data_size": 63488 00:35:38.827 }, 00:35:38.827 { 00:35:38.827 "name": "BaseBdev3", 00:35:38.827 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:38.827 "is_configured": true, 00:35:38.827 "data_offset": 2048, 00:35:38.827 "data_size": 63488 00:35:38.827 }, 00:35:38.827 { 00:35:38.827 "name": "BaseBdev4", 00:35:38.827 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:38.827 "is_configured": true, 00:35:38.827 "data_offset": 2048, 00:35:38.827 "data_size": 63488 00:35:38.827 } 00:35:38.827 ] 00:35:38.827 }' 00:35:38.827 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:38.827 11:47:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:39.393 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:39.393 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:39.393 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:39.393 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:39.393 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:39.394 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:39.394 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.652 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:39.652 "name": "raid_bdev1", 00:35:39.652 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:39.652 "strip_size_kb": 64, 00:35:39.652 "state": "online", 00:35:39.652 "raid_level": "raid5f", 00:35:39.652 "superblock": true, 00:35:39.652 "num_base_bdevs": 4, 00:35:39.652 "num_base_bdevs_discovered": 3, 00:35:39.652 "num_base_bdevs_operational": 3, 00:35:39.652 
"base_bdevs_list": [ 00:35:39.652 { 00:35:39.652 "name": null, 00:35:39.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.652 "is_configured": false, 00:35:39.652 "data_offset": 2048, 00:35:39.652 "data_size": 63488 00:35:39.652 }, 00:35:39.652 { 00:35:39.652 "name": "BaseBdev2", 00:35:39.652 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:39.652 "is_configured": true, 00:35:39.652 "data_offset": 2048, 00:35:39.652 "data_size": 63488 00:35:39.652 }, 00:35:39.652 { 00:35:39.652 "name": "BaseBdev3", 00:35:39.652 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:39.652 "is_configured": true, 00:35:39.652 "data_offset": 2048, 00:35:39.652 "data_size": 63488 00:35:39.652 }, 00:35:39.652 { 00:35:39.652 "name": "BaseBdev4", 00:35:39.652 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:39.652 "is_configured": true, 00:35:39.652 "data_offset": 2048, 00:35:39.652 "data_size": 63488 00:35:39.652 } 00:35:39.652 ] 00:35:39.652 }' 00:35:39.652 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:39.652 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:39.652 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:39.652 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:39.652 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:39.924 [2024-07-13 11:47:14.496580] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:39.924 [2024-07-13 11:47:14.505985] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cc70 00:35:39.924 [2024-07-13 11:47:14.512751] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:39.924 11:47:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:40.883 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:40.883 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:40.883 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:40.883 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:40.883 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:40.883 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:40.883 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:41.142 "name": "raid_bdev1", 00:35:41.142 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:41.142 "strip_size_kb": 64, 00:35:41.142 "state": "online", 00:35:41.142 "raid_level": "raid5f", 00:35:41.142 "superblock": true, 00:35:41.142 "num_base_bdevs": 4, 00:35:41.142 "num_base_bdevs_discovered": 4, 00:35:41.142 "num_base_bdevs_operational": 4, 00:35:41.142 "process": { 00:35:41.142 "type": "rebuild", 00:35:41.142 "target": "spare", 00:35:41.142 "progress": { 
00:35:41.142 "blocks": 23040, 00:35:41.142 "percent": 12 00:35:41.142 } 00:35:41.142 }, 00:35:41.142 "base_bdevs_list": [ 00:35:41.142 { 00:35:41.142 "name": "spare", 00:35:41.142 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:41.142 "is_configured": true, 00:35:41.142 "data_offset": 2048, 00:35:41.142 "data_size": 63488 00:35:41.142 }, 00:35:41.142 { 00:35:41.142 "name": "BaseBdev2", 00:35:41.142 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:41.142 "is_configured": true, 00:35:41.142 "data_offset": 2048, 00:35:41.142 "data_size": 63488 00:35:41.142 }, 00:35:41.142 { 00:35:41.142 "name": "BaseBdev3", 00:35:41.142 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:41.142 "is_configured": true, 00:35:41.142 "data_offset": 2048, 00:35:41.142 "data_size": 63488 00:35:41.142 }, 00:35:41.142 { 00:35:41.142 "name": "BaseBdev4", 00:35:41.142 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:41.142 "is_configured": true, 00:35:41.142 "data_offset": 2048, 00:35:41.142 "data_size": 63488 00:35:41.142 } 00:35:41.142 ] 00:35:41.142 }' 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:35:41.142 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1286 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:41.142 11:47:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.400 11:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:41.400 "name": "raid_bdev1", 00:35:41.400 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:41.400 "strip_size_kb": 64, 00:35:41.400 "state": "online", 00:35:41.400 "raid_level": "raid5f", 00:35:41.400 "superblock": true, 00:35:41.400 "num_base_bdevs": 4, 00:35:41.400 
"num_base_bdevs_discovered": 4, 00:35:41.400 "num_base_bdevs_operational": 4, 00:35:41.400 "process": { 00:35:41.400 "type": "rebuild", 00:35:41.400 "target": "spare", 00:35:41.400 "progress": { 00:35:41.400 "blocks": 28800, 00:35:41.400 "percent": 15 00:35:41.400 } 00:35:41.400 }, 00:35:41.400 "base_bdevs_list": [ 00:35:41.400 { 00:35:41.400 "name": "spare", 00:35:41.400 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:41.400 "is_configured": true, 00:35:41.400 "data_offset": 2048, 00:35:41.400 "data_size": 63488 00:35:41.400 }, 00:35:41.400 { 00:35:41.400 "name": "BaseBdev2", 00:35:41.400 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:41.400 "is_configured": true, 00:35:41.400 "data_offset": 2048, 00:35:41.400 "data_size": 63488 00:35:41.400 }, 00:35:41.400 { 00:35:41.400 "name": "BaseBdev3", 00:35:41.400 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:41.400 "is_configured": true, 00:35:41.400 "data_offset": 2048, 00:35:41.400 "data_size": 63488 00:35:41.400 }, 00:35:41.400 { 00:35:41.400 "name": "BaseBdev4", 00:35:41.400 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:41.400 "is_configured": true, 00:35:41.400 "data_offset": 2048, 00:35:41.400 "data_size": 63488 00:35:41.400 } 00:35:41.400 ] 00:35:41.400 }' 00:35:41.400 11:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:41.400 11:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:41.400 11:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:41.658 11:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:41.658 11:47:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:42.593 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:42.593 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:42.593 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:42.593 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:42.593 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:42.593 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:42.593 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:42.593 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:42.852 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:42.852 "name": "raid_bdev1", 00:35:42.852 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:42.852 "strip_size_kb": 64, 00:35:42.852 "state": "online", 00:35:42.852 "raid_level": "raid5f", 00:35:42.852 "superblock": true, 00:35:42.852 "num_base_bdevs": 4, 00:35:42.852 "num_base_bdevs_discovered": 4, 00:35:42.852 "num_base_bdevs_operational": 4, 00:35:42.852 "process": { 00:35:42.852 "type": "rebuild", 00:35:42.852 "target": "spare", 00:35:42.852 "progress": { 00:35:42.852 "blocks": 53760, 00:35:42.852 "percent": 28 00:35:42.852 } 00:35:42.852 }, 00:35:42.852 "base_bdevs_list": [ 00:35:42.852 { 00:35:42.852 "name": "spare", 
00:35:42.852 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:42.852 "is_configured": true, 00:35:42.852 "data_offset": 2048, 00:35:42.852 "data_size": 63488 00:35:42.852 }, 00:35:42.852 { 00:35:42.852 "name": "BaseBdev2", 00:35:42.852 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:42.852 "is_configured": true, 00:35:42.852 "data_offset": 2048, 00:35:42.852 "data_size": 63488 00:35:42.852 }, 00:35:42.852 { 00:35:42.852 "name": "BaseBdev3", 00:35:42.852 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:42.852 "is_configured": true, 00:35:42.852 "data_offset": 2048, 00:35:42.852 "data_size": 63488 00:35:42.852 }, 00:35:42.852 { 00:35:42.852 "name": "BaseBdev4", 00:35:42.852 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:42.852 "is_configured": true, 00:35:42.852 "data_offset": 2048, 00:35:42.852 "data_size": 63488 00:35:42.852 } 00:35:42.852 ] 00:35:42.852 }' 00:35:42.852 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:42.852 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:42.852 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:42.852 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:42.852 11:47:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:43.788 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:43.788 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:43.788 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:43.788 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:43.788 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:43.788 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:43.788 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:43.788 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.047 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:44.047 "name": "raid_bdev1", 00:35:44.047 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:44.047 "strip_size_kb": 64, 00:35:44.047 "state": "online", 00:35:44.047 "raid_level": "raid5f", 00:35:44.047 "superblock": true, 00:35:44.047 "num_base_bdevs": 4, 00:35:44.047 "num_base_bdevs_discovered": 4, 00:35:44.047 "num_base_bdevs_operational": 4, 00:35:44.047 "process": { 00:35:44.047 "type": "rebuild", 00:35:44.047 "target": "spare", 00:35:44.047 "progress": { 00:35:44.047 "blocks": 78720, 00:35:44.047 "percent": 41 00:35:44.047 } 00:35:44.047 }, 00:35:44.047 "base_bdevs_list": [ 00:35:44.047 { 00:35:44.047 "name": "spare", 00:35:44.047 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:44.047 "is_configured": true, 00:35:44.047 "data_offset": 2048, 00:35:44.047 "data_size": 63488 00:35:44.047 }, 00:35:44.047 { 00:35:44.047 "name": "BaseBdev2", 00:35:44.047 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:44.047 "is_configured": true, 00:35:44.047 "data_offset": 2048, 
00:35:44.047 "data_size": 63488 00:35:44.047 }, 00:35:44.047 { 00:35:44.047 "name": "BaseBdev3", 00:35:44.047 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:44.047 "is_configured": true, 00:35:44.047 "data_offset": 2048, 00:35:44.047 "data_size": 63488 00:35:44.047 }, 00:35:44.047 { 00:35:44.047 "name": "BaseBdev4", 00:35:44.047 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:44.047 "is_configured": true, 00:35:44.047 "data_offset": 2048, 00:35:44.047 "data_size": 63488 00:35:44.047 } 00:35:44.047 ] 00:35:44.047 }' 00:35:44.047 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:44.047 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:44.047 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:44.305 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:44.305 11:47:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:45.240 11:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:45.240 11:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:45.240 11:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:45.240 11:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:45.240 11:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:45.240 11:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:45.240 11:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:45.240 11:47:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:45.499 11:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:45.499 "name": "raid_bdev1", 00:35:45.499 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:45.499 "strip_size_kb": 64, 00:35:45.499 "state": "online", 00:35:45.499 "raid_level": "raid5f", 00:35:45.499 "superblock": true, 00:35:45.499 "num_base_bdevs": 4, 00:35:45.499 "num_base_bdevs_discovered": 4, 00:35:45.499 "num_base_bdevs_operational": 4, 00:35:45.499 "process": { 00:35:45.499 "type": "rebuild", 00:35:45.499 "target": "spare", 00:35:45.499 "progress": { 00:35:45.499 "blocks": 103680, 00:35:45.499 "percent": 54 00:35:45.499 } 00:35:45.499 }, 00:35:45.499 "base_bdevs_list": [ 00:35:45.499 { 00:35:45.499 "name": "spare", 00:35:45.499 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:45.499 "is_configured": true, 00:35:45.499 "data_offset": 2048, 00:35:45.499 "data_size": 63488 00:35:45.499 }, 00:35:45.499 { 00:35:45.499 "name": "BaseBdev2", 00:35:45.499 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:45.499 "is_configured": true, 00:35:45.499 "data_offset": 2048, 00:35:45.499 "data_size": 63488 00:35:45.499 }, 00:35:45.499 { 00:35:45.499 "name": "BaseBdev3", 00:35:45.499 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:45.499 "is_configured": true, 00:35:45.499 "data_offset": 2048, 00:35:45.499 "data_size": 63488 00:35:45.499 }, 00:35:45.499 { 00:35:45.499 "name": "BaseBdev4", 00:35:45.499 "uuid": 
"3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:45.499 "is_configured": true, 00:35:45.499 "data_offset": 2048, 00:35:45.499 "data_size": 63488 00:35:45.499 } 00:35:45.499 ] 00:35:45.499 }' 00:35:45.500 11:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:45.500 11:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:45.500 11:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:45.500 11:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:45.500 11:47:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:46.434 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:46.434 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:46.434 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:46.434 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:46.434 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:46.434 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:46.434 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:46.434 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:46.692 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:46.692 "name": "raid_bdev1", 00:35:46.692 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:46.692 "strip_size_kb": 64, 00:35:46.693 "state": "online", 00:35:46.693 "raid_level": "raid5f", 00:35:46.693 "superblock": true, 00:35:46.693 "num_base_bdevs": 4, 00:35:46.693 "num_base_bdevs_discovered": 4, 00:35:46.693 "num_base_bdevs_operational": 4, 00:35:46.693 "process": { 00:35:46.693 "type": "rebuild", 00:35:46.693 "target": "spare", 00:35:46.693 "progress": { 00:35:46.693 "blocks": 130560, 00:35:46.693 "percent": 68 00:35:46.693 } 00:35:46.693 }, 00:35:46.693 "base_bdevs_list": [ 00:35:46.693 { 00:35:46.693 "name": "spare", 00:35:46.693 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:46.693 "is_configured": true, 00:35:46.693 "data_offset": 2048, 00:35:46.693 "data_size": 63488 00:35:46.693 }, 00:35:46.693 { 00:35:46.693 "name": "BaseBdev2", 00:35:46.693 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:46.693 "is_configured": true, 00:35:46.693 "data_offset": 2048, 00:35:46.693 "data_size": 63488 00:35:46.693 }, 00:35:46.693 { 00:35:46.693 "name": "BaseBdev3", 00:35:46.693 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:46.693 "is_configured": true, 00:35:46.693 "data_offset": 2048, 00:35:46.693 "data_size": 63488 00:35:46.693 }, 00:35:46.693 { 00:35:46.693 "name": "BaseBdev4", 00:35:46.693 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:46.693 "is_configured": true, 00:35:46.693 "data_offset": 2048, 00:35:46.693 "data_size": 63488 00:35:46.693 } 00:35:46.693 ] 00:35:46.693 }' 00:35:46.693 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:46.693 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:46.693 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:46.951 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:46.951 11:47:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:47.884 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:47.884 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:47.884 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:47.884 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:47.884 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:47.884 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:47.884 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:47.884 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:48.143 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:48.143 "name": "raid_bdev1", 00:35:48.143 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:48.143 "strip_size_kb": 64, 00:35:48.143 "state": "online", 00:35:48.143 "raid_level": "raid5f", 00:35:48.143 "superblock": true, 00:35:48.143 "num_base_bdevs": 4, 00:35:48.143 "num_base_bdevs_discovered": 4, 00:35:48.143 "num_base_bdevs_operational": 4, 00:35:48.143 "process": { 00:35:48.143 "type": "rebuild", 00:35:48.143 "target": "spare", 00:35:48.143 "progress": { 00:35:48.143 "blocks": 155520, 00:35:48.143 "percent": 81 00:35:48.143 } 00:35:48.143 }, 00:35:48.143 "base_bdevs_list": [ 00:35:48.143 { 00:35:48.143 "name": "spare", 00:35:48.143 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:48.143 "is_configured": true, 00:35:48.143 "data_offset": 2048, 00:35:48.143 "data_size": 63488 00:35:48.143 }, 00:35:48.143 { 00:35:48.143 "name": "BaseBdev2", 00:35:48.143 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:48.143 "is_configured": true, 00:35:48.143 "data_offset": 2048, 00:35:48.143 "data_size": 63488 00:35:48.143 }, 00:35:48.143 { 00:35:48.143 "name": "BaseBdev3", 00:35:48.143 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:48.143 "is_configured": true, 00:35:48.143 "data_offset": 2048, 00:35:48.143 "data_size": 63488 00:35:48.143 }, 00:35:48.143 { 00:35:48.143 "name": "BaseBdev4", 00:35:48.143 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:48.143 "is_configured": true, 00:35:48.143 "data_offset": 2048, 00:35:48.143 "data_size": 63488 00:35:48.143 } 00:35:48.143 ] 00:35:48.143 }' 00:35:48.143 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:48.143 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:48.143 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:48.143 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:48.143 11:47:22 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@710 -- # sleep 1 00:35:49.515 11:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:49.515 11:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:49.515 11:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:49.515 11:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:49.515 11:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:49.515 11:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:49.516 11:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.516 11:47:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.516 11:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:49.516 "name": "raid_bdev1", 00:35:49.516 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:49.516 "strip_size_kb": 64, 00:35:49.516 "state": "online", 00:35:49.516 "raid_level": "raid5f", 00:35:49.516 "superblock": true, 00:35:49.516 "num_base_bdevs": 4, 00:35:49.516 "num_base_bdevs_discovered": 4, 00:35:49.516 "num_base_bdevs_operational": 4, 00:35:49.516 "process": { 00:35:49.516 "type": "rebuild", 00:35:49.516 "target": "spare", 00:35:49.516 "progress": { 00:35:49.516 "blocks": 182400, 00:35:49.516 "percent": 95 00:35:49.516 } 00:35:49.516 }, 00:35:49.516 "base_bdevs_list": [ 00:35:49.516 { 00:35:49.516 "name": "spare", 00:35:49.516 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:49.516 "is_configured": true, 00:35:49.516 "data_offset": 2048, 00:35:49.516 "data_size": 63488 00:35:49.516 }, 00:35:49.516 { 00:35:49.516 "name": "BaseBdev2", 00:35:49.516 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:49.516 "is_configured": true, 00:35:49.516 "data_offset": 2048, 00:35:49.516 "data_size": 63488 00:35:49.516 }, 00:35:49.516 { 00:35:49.516 "name": "BaseBdev3", 00:35:49.516 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:49.516 "is_configured": true, 00:35:49.516 "data_offset": 2048, 00:35:49.516 "data_size": 63488 00:35:49.516 }, 00:35:49.516 { 00:35:49.516 "name": "BaseBdev4", 00:35:49.516 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:49.516 "is_configured": true, 00:35:49.516 "data_offset": 2048, 00:35:49.516 "data_size": 63488 00:35:49.516 } 00:35:49.516 ] 00:35:49.516 }' 00:35:49.516 11:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:49.516 11:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:49.516 11:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:49.516 11:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:49.516 11:47:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:50.082 [2024-07-13 11:47:24.584657] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:50.082 [2024-07-13 11:47:24.584730] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:50.082 [2024-07-13 11:47:24.584879] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:50.649 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:50.649 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:50.649 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:50.649 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:50.649 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:50.649 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:50.649 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:50.649 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:50.907 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:50.907 "name": "raid_bdev1", 00:35:50.907 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:50.907 "strip_size_kb": 64, 00:35:50.907 "state": "online", 00:35:50.907 "raid_level": "raid5f", 00:35:50.907 "superblock": true, 00:35:50.907 "num_base_bdevs": 4, 00:35:50.907 "num_base_bdevs_discovered": 4, 00:35:50.907 "num_base_bdevs_operational": 4, 00:35:50.907 "base_bdevs_list": [ 00:35:50.907 { 00:35:50.907 "name": "spare", 00:35:50.907 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:50.907 "is_configured": true, 00:35:50.907 "data_offset": 2048, 00:35:50.907 "data_size": 63488 00:35:50.907 }, 00:35:50.907 { 00:35:50.907 "name": "BaseBdev2", 00:35:50.907 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:50.907 "is_configured": true, 00:35:50.907 "data_offset": 2048, 00:35:50.907 "data_size": 63488 00:35:50.907 }, 00:35:50.907 { 00:35:50.907 "name": "BaseBdev3", 00:35:50.907 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:50.907 "is_configured": true, 00:35:50.907 "data_offset": 2048, 00:35:50.907 "data_size": 63488 00:35:50.908 }, 00:35:50.908 { 00:35:50.908 "name": "BaseBdev4", 00:35:50.908 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:50.908 "is_configured": true, 00:35:50.908 "data_offset": 2048, 00:35:50.908 "data_size": 63488 00:35:50.908 } 00:35:50.908 ] 00:35:50.908 }' 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:50.908 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:51.178 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:51.178 "name": "raid_bdev1", 00:35:51.178 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:51.178 "strip_size_kb": 64, 00:35:51.178 "state": "online", 00:35:51.178 "raid_level": "raid5f", 00:35:51.178 "superblock": true, 00:35:51.178 "num_base_bdevs": 4, 00:35:51.178 "num_base_bdevs_discovered": 4, 00:35:51.178 "num_base_bdevs_operational": 4, 00:35:51.178 "base_bdevs_list": [ 00:35:51.178 { 00:35:51.178 "name": "spare", 00:35:51.178 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:51.178 "is_configured": true, 00:35:51.178 "data_offset": 2048, 00:35:51.178 "data_size": 63488 00:35:51.178 }, 00:35:51.178 { 00:35:51.179 "name": "BaseBdev2", 00:35:51.179 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:51.179 "is_configured": true, 00:35:51.179 "data_offset": 2048, 00:35:51.179 "data_size": 63488 00:35:51.179 }, 00:35:51.179 { 00:35:51.179 "name": "BaseBdev3", 00:35:51.179 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:51.179 "is_configured": true, 00:35:51.179 "data_offset": 2048, 00:35:51.179 "data_size": 63488 00:35:51.179 }, 00:35:51.179 { 00:35:51.179 "name": "BaseBdev4", 00:35:51.179 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:51.179 "is_configured": true, 00:35:51.179 "data_offset": 2048, 00:35:51.179 "data_size": 63488 00:35:51.179 } 00:35:51.179 ] 00:35:51.179 }' 00:35:51.179 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:51.179 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:51.179 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:51.179 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:51.179 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:51.179 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
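The verification traced around this point boils down to pulling the raid bdev's JSON off the RPC socket and asserting a handful of fields: no background process once the rebuild has finished, and a fully configured 4-disk raid5f array. A minimal standalone sketch of those assertions, using only the rpc.py invocation, socket path, and jq filters visible in this trace (the real checks are the verify_raid_bdev_process/verify_raid_bdev_state helpers in the bdev_raid.sh script being traced; field names are taken from the JSON printed just below):

# Standalone approximation of the checks traced here; socket path, bdev name
# and field names are copied from this run, not from the helper's source.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

# No rebuild process should be reported once it has completed.
[[ $(jq -r '.process.type // "none"'   <<< "$info") == none ]]
[[ $(jq -r '.process.target // "none"' <<< "$info") == none ]]

# The array itself should be back to a healthy 4-disk raid5f set.
[[ $(jq -r '.state'         <<< "$info") == online ]]
[[ $(jq -r '.raid_level'    <<< "$info") == raid5f ]]
[[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 4 ]]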
00:35:51.180 11:47:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:51.444 11:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:51.444 "name": "raid_bdev1", 00:35:51.444 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:51.444 "strip_size_kb": 64, 00:35:51.444 "state": "online", 00:35:51.444 "raid_level": "raid5f", 00:35:51.444 "superblock": true, 00:35:51.444 "num_base_bdevs": 4, 00:35:51.444 "num_base_bdevs_discovered": 4, 00:35:51.444 "num_base_bdevs_operational": 4, 00:35:51.444 "base_bdevs_list": [ 00:35:51.444 { 00:35:51.444 "name": "spare", 00:35:51.444 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:51.444 "is_configured": true, 00:35:51.444 "data_offset": 2048, 00:35:51.444 "data_size": 63488 00:35:51.444 }, 00:35:51.444 { 00:35:51.444 "name": "BaseBdev2", 00:35:51.444 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:51.444 "is_configured": true, 00:35:51.444 "data_offset": 2048, 00:35:51.444 "data_size": 63488 00:35:51.444 }, 00:35:51.444 { 00:35:51.444 "name": "BaseBdev3", 00:35:51.444 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:51.444 "is_configured": true, 00:35:51.444 "data_offset": 2048, 00:35:51.444 "data_size": 63488 00:35:51.444 }, 00:35:51.444 { 00:35:51.444 "name": "BaseBdev4", 00:35:51.444 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:51.444 "is_configured": true, 00:35:51.444 "data_offset": 2048, 00:35:51.444 "data_size": 63488 00:35:51.444 } 00:35:51.444 ] 00:35:51.444 }' 00:35:51.444 11:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:51.444 11:47:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.378 11:47:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:52.378 [2024-07-13 11:47:27.028595] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:52.378 [2024-07-13 11:47:27.028626] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:52.378 [2024-07-13 11:47:27.028699] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:52.378 [2024-07-13 11:47:27.028794] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:52.378 [2024-07-13 11:47:27.028806] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:35:52.378 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:52.378 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:52.637 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:52.896 /dev/nbd0 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:52.896 1+0 records in 00:35:52.896 1+0 records out 00:35:52.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530745 s, 7.7 MB/s 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:52.896 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:53.155 /dev/nbd1 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd1 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:53.155 1+0 records in 00:35:53.155 1+0 records out 00:35:53.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321127 s, 12.8 MB/s 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:53.155 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:53.413 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:53.413 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:53.413 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:35:53.413 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:53.414 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:35:53.414 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:53.414 11:47:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:53.672 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:35:53.931 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:54.190 11:47:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:54.448 [2024-07-13 11:47:29.071892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:54.448 [2024-07-13 11:47:29.071972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:54.448 [2024-07-13 11:47:29.072024] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:35:54.448 [2024-07-13 11:47:29.072058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:54.448 [2024-07-13 11:47:29.074405] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:54.448 [2024-07-13 11:47:29.074457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:54.448 [2024-07-13 11:47:29.074564] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:54.448 [2024-07-13 11:47:29.074637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:54.448 [2024-07-13 
11:47:29.074802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:54.448 [2024-07-13 11:47:29.074927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:54.448 [2024-07-13 11:47:29.075029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:54.448 spare 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:54.448 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:54.449 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.449 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:54.449 [2024-07-13 11:47:29.175123] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:35:54.449 [2024-07-13 11:47:29.175148] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:54.449 [2024-07-13 11:47:29.175248] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d130 00:35:54.449 [2024-07-13 11:47:29.180543] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:35:54.449 [2024-07-13 11:47:29.180568] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:35:54.449 [2024-07-13 11:47:29.180745] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:54.707 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:54.707 "name": "raid_bdev1", 00:35:54.707 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:54.707 "strip_size_kb": 64, 00:35:54.707 "state": "online", 00:35:54.707 "raid_level": "raid5f", 00:35:54.707 "superblock": true, 00:35:54.707 "num_base_bdevs": 4, 00:35:54.707 "num_base_bdevs_discovered": 4, 00:35:54.707 "num_base_bdevs_operational": 4, 00:35:54.707 "base_bdevs_list": [ 00:35:54.707 { 00:35:54.707 "name": "spare", 00:35:54.707 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:54.707 "is_configured": true, 00:35:54.707 "data_offset": 2048, 00:35:54.707 "data_size": 63488 00:35:54.707 }, 00:35:54.707 { 00:35:54.707 "name": "BaseBdev2", 00:35:54.707 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:54.707 "is_configured": true, 00:35:54.707 "data_offset": 2048, 00:35:54.707 "data_size": 63488 00:35:54.707 }, 00:35:54.707 { 00:35:54.707 
"name": "BaseBdev3", 00:35:54.707 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:54.707 "is_configured": true, 00:35:54.707 "data_offset": 2048, 00:35:54.707 "data_size": 63488 00:35:54.707 }, 00:35:54.707 { 00:35:54.707 "name": "BaseBdev4", 00:35:54.707 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:54.707 "is_configured": true, 00:35:54.707 "data_offset": 2048, 00:35:54.707 "data_size": 63488 00:35:54.707 } 00:35:54.707 ] 00:35:54.707 }' 00:35:54.707 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:54.707 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:55.274 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:55.274 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:55.274 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:55.274 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:55.274 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:55.274 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:55.274 11:47:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:55.533 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:55.533 "name": "raid_bdev1", 00:35:55.533 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:55.533 "strip_size_kb": 64, 00:35:55.533 "state": "online", 00:35:55.533 "raid_level": "raid5f", 00:35:55.533 "superblock": true, 00:35:55.533 "num_base_bdevs": 4, 00:35:55.533 "num_base_bdevs_discovered": 4, 00:35:55.533 "num_base_bdevs_operational": 4, 00:35:55.533 "base_bdevs_list": [ 00:35:55.533 { 00:35:55.533 "name": "spare", 00:35:55.533 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:55.533 "is_configured": true, 00:35:55.533 "data_offset": 2048, 00:35:55.533 "data_size": 63488 00:35:55.533 }, 00:35:55.533 { 00:35:55.533 "name": "BaseBdev2", 00:35:55.533 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:55.533 "is_configured": true, 00:35:55.533 "data_offset": 2048, 00:35:55.533 "data_size": 63488 00:35:55.533 }, 00:35:55.533 { 00:35:55.533 "name": "BaseBdev3", 00:35:55.533 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:55.533 "is_configured": true, 00:35:55.533 "data_offset": 2048, 00:35:55.533 "data_size": 63488 00:35:55.533 }, 00:35:55.533 { 00:35:55.533 "name": "BaseBdev4", 00:35:55.533 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:55.533 "is_configured": true, 00:35:55.533 "data_offset": 2048, 00:35:55.533 "data_size": 63488 00:35:55.533 } 00:35:55.533 ] 00:35:55.533 }' 00:35:55.533 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:55.533 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:55.533 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:55.533 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:55.533 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:55.533 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:55.791 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:35:55.791 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:56.050 [2024-07-13 11:47:30.711523] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:56.050 11:47:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:56.308 11:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:56.308 "name": "raid_bdev1", 00:35:56.308 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:56.308 "strip_size_kb": 64, 00:35:56.308 "state": "online", 00:35:56.308 "raid_level": "raid5f", 00:35:56.308 "superblock": true, 00:35:56.308 "num_base_bdevs": 4, 00:35:56.308 "num_base_bdevs_discovered": 3, 00:35:56.308 "num_base_bdevs_operational": 3, 00:35:56.308 "base_bdevs_list": [ 00:35:56.308 { 00:35:56.308 "name": null, 00:35:56.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:56.308 "is_configured": false, 00:35:56.309 "data_offset": 2048, 00:35:56.309 "data_size": 63488 00:35:56.309 }, 00:35:56.309 { 00:35:56.309 "name": "BaseBdev2", 00:35:56.309 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:56.309 "is_configured": true, 00:35:56.309 "data_offset": 2048, 00:35:56.309 "data_size": 63488 00:35:56.309 }, 00:35:56.309 { 00:35:56.309 "name": "BaseBdev3", 00:35:56.309 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:56.309 "is_configured": true, 00:35:56.309 "data_offset": 2048, 00:35:56.309 "data_size": 63488 00:35:56.309 }, 00:35:56.309 { 00:35:56.309 "name": "BaseBdev4", 00:35:56.309 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:56.309 "is_configured": true, 00:35:56.309 "data_offset": 2048, 00:35:56.309 "data_size": 63488 00:35:56.309 } 00:35:56.309 ] 00:35:56.309 }' 00:35:56.309 11:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:56.309 
11:47:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:56.875 11:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:57.134 [2024-07-13 11:47:31.791702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:57.134 [2024-07-13 11:47:31.791845] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:57.134 [2024-07-13 11:47:31.791861] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:35:57.134 [2024-07-13 11:47:31.791913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:57.134 [2024-07-13 11:47:31.801784] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d2d0 00:35:57.134 [2024-07-13 11:47:31.808318] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:57.134 11:47:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:35:58.071 11:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:58.071 11:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:58.071 11:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:58.071 11:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:58.071 11:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:58.071 11:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.071 11:47:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.329 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:58.329 "name": "raid_bdev1", 00:35:58.329 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:58.329 "strip_size_kb": 64, 00:35:58.329 "state": "online", 00:35:58.329 "raid_level": "raid5f", 00:35:58.329 "superblock": true, 00:35:58.329 "num_base_bdevs": 4, 00:35:58.329 "num_base_bdevs_discovered": 4, 00:35:58.329 "num_base_bdevs_operational": 4, 00:35:58.329 "process": { 00:35:58.329 "type": "rebuild", 00:35:58.329 "target": "spare", 00:35:58.329 "progress": { 00:35:58.329 "blocks": 23040, 00:35:58.329 "percent": 12 00:35:58.329 } 00:35:58.329 }, 00:35:58.329 "base_bdevs_list": [ 00:35:58.329 { 00:35:58.329 "name": "spare", 00:35:58.329 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:35:58.329 "is_configured": true, 00:35:58.329 "data_offset": 2048, 00:35:58.329 "data_size": 63488 00:35:58.329 }, 00:35:58.329 { 00:35:58.329 "name": "BaseBdev2", 00:35:58.329 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:58.329 "is_configured": true, 00:35:58.329 "data_offset": 2048, 00:35:58.329 "data_size": 63488 00:35:58.329 }, 00:35:58.329 { 00:35:58.329 "name": "BaseBdev3", 00:35:58.329 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:58.329 "is_configured": true, 00:35:58.329 "data_offset": 2048, 00:35:58.329 "data_size": 63488 00:35:58.329 }, 00:35:58.329 { 00:35:58.329 "name": "BaseBdev4", 00:35:58.329 "uuid": 
"3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:58.329 "is_configured": true, 00:35:58.329 "data_offset": 2048, 00:35:58.329 "data_size": 63488 00:35:58.329 } 00:35:58.329 ] 00:35:58.329 }' 00:35:58.329 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:58.590 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:58.590 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:58.590 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:58.590 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:58.849 [2024-07-13 11:47:33.445366] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:58.849 [2024-07-13 11:47:33.520886] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:58.849 [2024-07-13 11:47:33.520960] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:58.849 [2024-07-13 11:47:33.520979] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:58.849 [2024-07-13 11:47:33.520986] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.849 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:59.107 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:59.107 "name": "raid_bdev1", 00:35:59.107 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:35:59.107 "strip_size_kb": 64, 00:35:59.107 "state": "online", 00:35:59.107 "raid_level": "raid5f", 00:35:59.107 "superblock": true, 00:35:59.107 "num_base_bdevs": 4, 00:35:59.107 "num_base_bdevs_discovered": 3, 00:35:59.107 "num_base_bdevs_operational": 3, 00:35:59.107 "base_bdevs_list": [ 00:35:59.107 { 00:35:59.107 "name": null, 00:35:59.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:59.107 "is_configured": false, 00:35:59.107 
"data_offset": 2048, 00:35:59.107 "data_size": 63488 00:35:59.107 }, 00:35:59.107 { 00:35:59.107 "name": "BaseBdev2", 00:35:59.107 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:35:59.107 "is_configured": true, 00:35:59.107 "data_offset": 2048, 00:35:59.107 "data_size": 63488 00:35:59.107 }, 00:35:59.107 { 00:35:59.107 "name": "BaseBdev3", 00:35:59.107 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:35:59.107 "is_configured": true, 00:35:59.107 "data_offset": 2048, 00:35:59.107 "data_size": 63488 00:35:59.107 }, 00:35:59.107 { 00:35:59.107 "name": "BaseBdev4", 00:35:59.107 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:35:59.107 "is_configured": true, 00:35:59.107 "data_offset": 2048, 00:35:59.107 "data_size": 63488 00:35:59.107 } 00:35:59.107 ] 00:35:59.107 }' 00:35:59.107 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:59.107 11:47:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:00.042 11:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:00.042 [2024-07-13 11:47:34.707970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:00.042 [2024-07-13 11:47:34.708037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:00.042 [2024-07-13 11:47:34.708077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:36:00.042 [2024-07-13 11:47:34.708100] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:00.042 [2024-07-13 11:47:34.708654] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:00.042 [2024-07-13 11:47:34.708688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:00.042 [2024-07-13 11:47:34.708792] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:00.042 [2024-07-13 11:47:34.708810] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:00.042 [2024-07-13 11:47:34.708818] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:00.042 [2024-07-13 11:47:34.708859] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:00.042 [2024-07-13 11:47:34.718237] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d610 00:36:00.042 spare 00:36:00.042 [2024-07-13 11:47:34.724974] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:00.042 11:47:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:36:01.417 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:01.417 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:01.417 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:01.417 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:01.417 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:01.417 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:01.417 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.417 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:01.417 "name": "raid_bdev1", 00:36:01.417 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:36:01.418 "strip_size_kb": 64, 00:36:01.418 "state": "online", 00:36:01.418 "raid_level": "raid5f", 00:36:01.418 "superblock": true, 00:36:01.418 "num_base_bdevs": 4, 00:36:01.418 "num_base_bdevs_discovered": 4, 00:36:01.418 "num_base_bdevs_operational": 4, 00:36:01.418 "process": { 00:36:01.418 "type": "rebuild", 00:36:01.418 "target": "spare", 00:36:01.418 "progress": { 00:36:01.418 "blocks": 21120, 00:36:01.418 "percent": 11 00:36:01.418 } 00:36:01.418 }, 00:36:01.418 "base_bdevs_list": [ 00:36:01.418 { 00:36:01.418 "name": "spare", 00:36:01.418 "uuid": "e6226fbb-dfea-582b-9dd1-6b0b36283dd9", 00:36:01.418 "is_configured": true, 00:36:01.418 "data_offset": 2048, 00:36:01.418 "data_size": 63488 00:36:01.418 }, 00:36:01.418 { 00:36:01.418 "name": "BaseBdev2", 00:36:01.418 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:36:01.418 "is_configured": true, 00:36:01.418 "data_offset": 2048, 00:36:01.418 "data_size": 63488 00:36:01.418 }, 00:36:01.418 { 00:36:01.418 "name": "BaseBdev3", 00:36:01.418 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:36:01.418 "is_configured": true, 00:36:01.418 "data_offset": 2048, 00:36:01.418 "data_size": 63488 00:36:01.418 }, 00:36:01.418 { 00:36:01.418 "name": "BaseBdev4", 00:36:01.418 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:36:01.418 "is_configured": true, 00:36:01.418 "data_offset": 2048, 00:36:01.418 "data_size": 63488 00:36:01.418 } 00:36:01.418 ] 00:36:01.418 }' 00:36:01.418 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:01.418 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:01.418 11:47:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:01.418 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:01.418 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:01.676 [2024-07-13 11:47:36.277950] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:01.676 [2024-07-13 11:47:36.336798] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:01.676 [2024-07-13 11:47:36.336862] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:01.676 [2024-07-13 11:47:36.336880] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:01.676 [2024-07-13 11:47:36.336888] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.676 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:01.934 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:01.934 "name": "raid_bdev1", 00:36:01.934 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:36:01.934 "strip_size_kb": 64, 00:36:01.934 "state": "online", 00:36:01.934 "raid_level": "raid5f", 00:36:01.934 "superblock": true, 00:36:01.934 "num_base_bdevs": 4, 00:36:01.934 "num_base_bdevs_discovered": 3, 00:36:01.934 "num_base_bdevs_operational": 3, 00:36:01.934 "base_bdevs_list": [ 00:36:01.934 { 00:36:01.934 "name": null, 00:36:01.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:01.934 "is_configured": false, 00:36:01.934 "data_offset": 2048, 00:36:01.934 "data_size": 63488 00:36:01.934 }, 00:36:01.934 { 00:36:01.934 "name": "BaseBdev2", 00:36:01.934 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:36:01.934 "is_configured": true, 00:36:01.934 "data_offset": 2048, 00:36:01.934 "data_size": 63488 00:36:01.934 }, 00:36:01.934 { 00:36:01.934 "name": "BaseBdev3", 00:36:01.934 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:36:01.934 "is_configured": true, 00:36:01.934 "data_offset": 2048, 00:36:01.934 "data_size": 63488 00:36:01.934 }, 00:36:01.934 { 00:36:01.934 "name": "BaseBdev4", 00:36:01.934 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:36:01.934 "is_configured": true, 00:36:01.934 "data_offset": 2048, 00:36:01.934 
"data_size": 63488 00:36:01.934 } 00:36:01.934 ] 00:36:01.934 }' 00:36:01.934 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:01.934 11:47:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:02.500 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:02.500 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:02.500 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:02.500 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:02.500 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:02.500 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:02.500 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:02.759 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:02.759 "name": "raid_bdev1", 00:36:02.759 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:36:02.759 "strip_size_kb": 64, 00:36:02.759 "state": "online", 00:36:02.759 "raid_level": "raid5f", 00:36:02.759 "superblock": true, 00:36:02.759 "num_base_bdevs": 4, 00:36:02.759 "num_base_bdevs_discovered": 3, 00:36:02.759 "num_base_bdevs_operational": 3, 00:36:02.759 "base_bdevs_list": [ 00:36:02.759 { 00:36:02.759 "name": null, 00:36:02.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:02.759 "is_configured": false, 00:36:02.759 "data_offset": 2048, 00:36:02.759 "data_size": 63488 00:36:02.759 }, 00:36:02.759 { 00:36:02.759 "name": "BaseBdev2", 00:36:02.759 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:36:02.759 "is_configured": true, 00:36:02.759 "data_offset": 2048, 00:36:02.759 "data_size": 63488 00:36:02.759 }, 00:36:02.759 { 00:36:02.759 "name": "BaseBdev3", 00:36:02.759 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:36:02.759 "is_configured": true, 00:36:02.759 "data_offset": 2048, 00:36:02.759 "data_size": 63488 00:36:02.759 }, 00:36:02.759 { 00:36:02.759 "name": "BaseBdev4", 00:36:02.759 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:36:02.759 "is_configured": true, 00:36:02.759 "data_offset": 2048, 00:36:02.759 "data_size": 63488 00:36:02.759 } 00:36:02.759 ] 00:36:02.759 }' 00:36:02.759 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:02.759 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:02.759 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:02.759 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:02.759 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:03.017 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:03.275 [2024-07-13 11:47:37.979926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:36:03.275 [2024-07-13 11:47:37.979989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:03.275 [2024-07-13 11:47:37.980036] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:36:03.275 [2024-07-13 11:47:37.980058] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:03.275 [2024-07-13 11:47:37.980563] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:03.275 [2024-07-13 11:47:37.980601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:03.275 [2024-07-13 11:47:37.980712] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:03.275 [2024-07-13 11:47:37.980730] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:03.275 [2024-07-13 11:47:37.980747] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:03.275 BaseBdev1 00:36:03.275 11:47:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:04.650 11:47:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:04.650 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:04.650 "name": "raid_bdev1", 00:36:04.650 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:36:04.650 "strip_size_kb": 64, 00:36:04.650 "state": "online", 00:36:04.650 "raid_level": "raid5f", 00:36:04.650 "superblock": true, 00:36:04.650 "num_base_bdevs": 4, 00:36:04.650 "num_base_bdevs_discovered": 3, 00:36:04.650 "num_base_bdevs_operational": 3, 00:36:04.650 "base_bdevs_list": [ 00:36:04.650 { 00:36:04.650 "name": null, 00:36:04.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:04.650 "is_configured": false, 00:36:04.650 "data_offset": 2048, 00:36:04.650 "data_size": 63488 00:36:04.650 }, 00:36:04.650 { 00:36:04.650 "name": "BaseBdev2", 00:36:04.650 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:36:04.650 "is_configured": true, 00:36:04.650 "data_offset": 2048, 00:36:04.650 "data_size": 63488 
00:36:04.650 }, 00:36:04.650 { 00:36:04.650 "name": "BaseBdev3", 00:36:04.650 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:36:04.650 "is_configured": true, 00:36:04.650 "data_offset": 2048, 00:36:04.650 "data_size": 63488 00:36:04.650 }, 00:36:04.650 { 00:36:04.650 "name": "BaseBdev4", 00:36:04.650 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:36:04.650 "is_configured": true, 00:36:04.650 "data_offset": 2048, 00:36:04.650 "data_size": 63488 00:36:04.650 } 00:36:04.650 ] 00:36:04.650 }' 00:36:04.650 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:04.650 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.216 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:05.216 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:05.216 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:05.216 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:05.216 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:05.216 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:05.216 11:47:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:05.474 "name": "raid_bdev1", 00:36:05.474 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:36:05.474 "strip_size_kb": 64, 00:36:05.474 "state": "online", 00:36:05.474 "raid_level": "raid5f", 00:36:05.474 "superblock": true, 00:36:05.474 "num_base_bdevs": 4, 00:36:05.474 "num_base_bdevs_discovered": 3, 00:36:05.474 "num_base_bdevs_operational": 3, 00:36:05.474 "base_bdevs_list": [ 00:36:05.474 { 00:36:05.474 "name": null, 00:36:05.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:05.474 "is_configured": false, 00:36:05.474 "data_offset": 2048, 00:36:05.474 "data_size": 63488 00:36:05.474 }, 00:36:05.474 { 00:36:05.474 "name": "BaseBdev2", 00:36:05.474 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:36:05.474 "is_configured": true, 00:36:05.474 "data_offset": 2048, 00:36:05.474 "data_size": 63488 00:36:05.474 }, 00:36:05.474 { 00:36:05.474 "name": "BaseBdev3", 00:36:05.474 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:36:05.474 "is_configured": true, 00:36:05.474 "data_offset": 2048, 00:36:05.474 "data_size": 63488 00:36:05.474 }, 00:36:05.474 { 00:36:05.474 "name": "BaseBdev4", 00:36:05.474 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:36:05.474 "is_configured": true, 00:36:05.474 "data_offset": 2048, 00:36:05.474 "data_size": 63488 00:36:05.474 } 00:36:05.474 ] 00:36:05.474 }' 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:05.474 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:05.475 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:05.475 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:05.475 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:05.733 [2024-07-13 11:47:40.272311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:05.733 [2024-07-13 11:47:40.272435] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:05.733 [2024-07-13 11:47:40.272450] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:05.733 request: 00:36:05.733 { 00:36:05.733 "base_bdev": "BaseBdev1", 00:36:05.733 "raid_bdev": "raid_bdev1", 00:36:05.733 "method": "bdev_raid_add_base_bdev", 00:36:05.733 "req_id": 1 00:36:05.733 } 00:36:05.733 Got JSON-RPC error response 00:36:05.733 response: 00:36:05.733 { 00:36:05.733 "code": -22, 00:36:05.733 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:05.733 } 00:36:05.733 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:36:05.733 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:05.733 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:05.733 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:05.733 11:47:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:06.668 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:06.929 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:06.929 "name": "raid_bdev1", 00:36:06.929 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:36:06.929 "strip_size_kb": 64, 00:36:06.929 "state": "online", 00:36:06.929 "raid_level": "raid5f", 00:36:06.929 "superblock": true, 00:36:06.929 "num_base_bdevs": 4, 00:36:06.929 "num_base_bdevs_discovered": 3, 00:36:06.929 "num_base_bdevs_operational": 3, 00:36:06.929 "base_bdevs_list": [ 00:36:06.929 { 00:36:06.929 "name": null, 00:36:06.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:06.929 "is_configured": false, 00:36:06.929 "data_offset": 2048, 00:36:06.929 "data_size": 63488 00:36:06.929 }, 00:36:06.929 { 00:36:06.929 "name": "BaseBdev2", 00:36:06.929 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:36:06.929 "is_configured": true, 00:36:06.929 "data_offset": 2048, 00:36:06.929 "data_size": 63488 00:36:06.929 }, 00:36:06.929 { 00:36:06.929 "name": "BaseBdev3", 00:36:06.929 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:36:06.929 "is_configured": true, 00:36:06.929 "data_offset": 2048, 00:36:06.929 "data_size": 63488 00:36:06.929 }, 00:36:06.929 { 00:36:06.929 "name": "BaseBdev4", 00:36:06.929 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:36:06.929 "is_configured": true, 00:36:06.929 "data_offset": 2048, 00:36:06.929 "data_size": 63488 00:36:06.929 } 00:36:06.929 ] 00:36:06.929 }' 00:36:06.929 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:06.929 11:47:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.522 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:07.522 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:07.522 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:07.522 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:07.522 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:07.522 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:07.522 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:07.781 11:47:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:07.781 "name": "raid_bdev1", 00:36:07.781 "uuid": "30966695-4e29-4155-88ff-e8cf7ca5a8ee", 00:36:07.781 "strip_size_kb": 64, 00:36:07.781 "state": "online", 00:36:07.781 "raid_level": "raid5f", 00:36:07.781 "superblock": true, 00:36:07.781 "num_base_bdevs": 4, 00:36:07.781 "num_base_bdevs_discovered": 3, 00:36:07.781 "num_base_bdevs_operational": 3, 00:36:07.781 "base_bdevs_list": [ 00:36:07.781 { 00:36:07.781 "name": null, 00:36:07.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.781 "is_configured": false, 00:36:07.781 "data_offset": 2048, 00:36:07.781 "data_size": 63488 00:36:07.781 }, 00:36:07.781 { 00:36:07.781 "name": "BaseBdev2", 00:36:07.781 "uuid": "6d2f6cad-ab46-52e4-903c-36a5ef4cc1bf", 00:36:07.781 "is_configured": true, 00:36:07.781 "data_offset": 2048, 00:36:07.781 "data_size": 63488 00:36:07.781 }, 00:36:07.781 { 00:36:07.781 "name": "BaseBdev3", 00:36:07.781 "uuid": "5d7d18a8-aa20-5c93-afe2-ab5ee1b3d09e", 00:36:07.781 "is_configured": true, 00:36:07.781 "data_offset": 2048, 00:36:07.781 "data_size": 63488 00:36:07.781 }, 00:36:07.781 { 00:36:07.781 "name": "BaseBdev4", 00:36:07.781 "uuid": "3223eeec-2f23-574e-b994-eafd0d38fa08", 00:36:07.781 "is_configured": true, 00:36:07.781 "data_offset": 2048, 00:36:07.781 "data_size": 63488 00:36:07.781 } 00:36:07.781 ] 00:36:07.781 }' 00:36:07.781 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 159215 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 159215 ']' 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 159215 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 159215 00:36:08.040 killing process with pid 159215 00:36:08.040 Received shutdown signal, test time was about 60.000000 seconds 00:36:08.040 00:36:08.040 Latency(us) 00:36:08.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.040 =================================================================================================================== 00:36:08.040 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 159215' 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 159215 00:36:08.040 11:47:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 159215 
00:36:08.040 [2024-07-13 11:47:42.640754] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:08.040 [2024-07-13 11:47:42.640856] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:08.040 [2024-07-13 11:47:42.640926] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:08.040 [2024-07-13 11:47:42.640937] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:36:08.301 [2024-07-13 11:47:42.974640] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:09.237 ************************************ 00:36:09.237 END TEST raid5f_rebuild_test_sb 00:36:09.238 ************************************ 00:36:09.238 11:47:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:36:09.238 00:36:09.238 real 0m39.716s 00:36:09.238 user 1m1.471s 00:36:09.238 sys 0m3.661s 00:36:09.238 11:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:09.238 11:47:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.495 11:47:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:09.496 11:47:44 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:36:09.496 11:47:44 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:36:09.496 11:47:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:36:09.496 11:47:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.496 11:47:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:09.496 ************************************ 00:36:09.496 START TEST raid_state_function_test_sb_4k 00:36:09.496 ************************************ 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:09.496 11:47:44 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=160281 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 160281' 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:09.496 Process raid pid: 160281 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 160281 /var/tmp/spdk-raid.sock 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 160281 ']' 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:09.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:09.496 11:47:44 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:09.496 [2024-07-13 11:47:44.125242] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:36:09.496 [2024-07-13 11:47:44.125416] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:09.754 [2024-07-13 11:47:44.281970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.754 [2024-07-13 11:47:44.466005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:10.013 [2024-07-13 11:47:44.654681] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:10.271 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:10.271 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:36:10.271 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:10.529 [2024-07-13 11:47:45.228483] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:10.529 [2024-07-13 11:47:45.228573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:10.529 [2024-07-13 11:47:45.228594] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:10.529 [2024-07-13 11:47:45.228622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:10.529 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:10.787 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:10.787 "name": "Existed_Raid", 00:36:10.787 "uuid": "7f951ebc-ebf1-4793-96fe-6a4980d5c454", 00:36:10.787 "strip_size_kb": 0, 00:36:10.787 "state": "configuring", 00:36:10.787 "raid_level": "raid1", 00:36:10.787 "superblock": true, 00:36:10.787 "num_base_bdevs": 2, 00:36:10.787 
"num_base_bdevs_discovered": 0, 00:36:10.787 "num_base_bdevs_operational": 2, 00:36:10.787 "base_bdevs_list": [ 00:36:10.787 { 00:36:10.787 "name": "BaseBdev1", 00:36:10.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.787 "is_configured": false, 00:36:10.787 "data_offset": 0, 00:36:10.787 "data_size": 0 00:36:10.787 }, 00:36:10.787 { 00:36:10.787 "name": "BaseBdev2", 00:36:10.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.787 "is_configured": false, 00:36:10.787 "data_offset": 0, 00:36:10.787 "data_size": 0 00:36:10.787 } 00:36:10.787 ] 00:36:10.787 }' 00:36:10.787 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:10.787 11:47:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:11.720 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:11.720 [2024-07-13 11:47:46.360486] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:11.720 [2024-07-13 11:47:46.360528] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:36:11.720 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:11.978 [2024-07-13 11:47:46.552552] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:11.978 [2024-07-13 11:47:46.552596] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:11.978 [2024-07-13 11:47:46.552607] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:11.978 [2024-07-13 11:47:46.552631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:11.978 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:36:12.236 [2024-07-13 11:47:46.821549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:12.236 BaseBdev1 00:36:12.236 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:12.236 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:36:12.236 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:12.236 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:36:12.236 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:12.236 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:12.236 11:47:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:12.493 [ 00:36:12.493 { 00:36:12.493 "name": "BaseBdev1", 
00:36:12.493 "aliases": [ 00:36:12.493 "ade49dc2-f4d3-4b05-9162-baf634cab3f4" 00:36:12.493 ], 00:36:12.493 "product_name": "Malloc disk", 00:36:12.493 "block_size": 4096, 00:36:12.493 "num_blocks": 8192, 00:36:12.493 "uuid": "ade49dc2-f4d3-4b05-9162-baf634cab3f4", 00:36:12.493 "assigned_rate_limits": { 00:36:12.493 "rw_ios_per_sec": 0, 00:36:12.493 "rw_mbytes_per_sec": 0, 00:36:12.493 "r_mbytes_per_sec": 0, 00:36:12.493 "w_mbytes_per_sec": 0 00:36:12.493 }, 00:36:12.493 "claimed": true, 00:36:12.493 "claim_type": "exclusive_write", 00:36:12.493 "zoned": false, 00:36:12.493 "supported_io_types": { 00:36:12.493 "read": true, 00:36:12.493 "write": true, 00:36:12.493 "unmap": true, 00:36:12.493 "flush": true, 00:36:12.493 "reset": true, 00:36:12.493 "nvme_admin": false, 00:36:12.493 "nvme_io": false, 00:36:12.493 "nvme_io_md": false, 00:36:12.493 "write_zeroes": true, 00:36:12.493 "zcopy": true, 00:36:12.493 "get_zone_info": false, 00:36:12.493 "zone_management": false, 00:36:12.493 "zone_append": false, 00:36:12.493 "compare": false, 00:36:12.493 "compare_and_write": false, 00:36:12.493 "abort": true, 00:36:12.493 "seek_hole": false, 00:36:12.493 "seek_data": false, 00:36:12.493 "copy": true, 00:36:12.493 "nvme_iov_md": false 00:36:12.493 }, 00:36:12.493 "memory_domains": [ 00:36:12.493 { 00:36:12.493 "dma_device_id": "system", 00:36:12.493 "dma_device_type": 1 00:36:12.493 }, 00:36:12.493 { 00:36:12.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:12.493 "dma_device_type": 2 00:36:12.493 } 00:36:12.493 ], 00:36:12.493 "driver_specific": {} 00:36:12.493 } 00:36:12.493 ] 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:12.493 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:12.750 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:12.750 "name": "Existed_Raid", 00:36:12.750 "uuid": "859d852a-4c28-43be-bf84-57bbbf7a082a", 00:36:12.750 "strip_size_kb": 0, 00:36:12.750 "state": "configuring", 00:36:12.750 
"raid_level": "raid1", 00:36:12.750 "superblock": true, 00:36:12.750 "num_base_bdevs": 2, 00:36:12.750 "num_base_bdevs_discovered": 1, 00:36:12.750 "num_base_bdevs_operational": 2, 00:36:12.750 "base_bdevs_list": [ 00:36:12.750 { 00:36:12.750 "name": "BaseBdev1", 00:36:12.750 "uuid": "ade49dc2-f4d3-4b05-9162-baf634cab3f4", 00:36:12.750 "is_configured": true, 00:36:12.750 "data_offset": 256, 00:36:12.750 "data_size": 7936 00:36:12.750 }, 00:36:12.750 { 00:36:12.750 "name": "BaseBdev2", 00:36:12.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.750 "is_configured": false, 00:36:12.750 "data_offset": 0, 00:36:12.750 "data_size": 0 00:36:12.750 } 00:36:12.750 ] 00:36:12.750 }' 00:36:12.750 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:12.750 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:13.315 11:47:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:13.572 [2024-07-13 11:47:48.233811] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:13.572 [2024-07-13 11:47:48.233853] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:36:13.572 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:13.830 [2024-07-13 11:47:48.425868] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:13.830 [2024-07-13 11:47:48.427384] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:13.830 [2024-07-13 11:47:48.427437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:13.830 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:14.089 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:14.089 "name": "Existed_Raid", 00:36:14.089 "uuid": "8c195d52-f66c-44f3-b3d6-2cbb137b2b77", 00:36:14.089 "strip_size_kb": 0, 00:36:14.089 "state": "configuring", 00:36:14.089 "raid_level": "raid1", 00:36:14.089 "superblock": true, 00:36:14.089 "num_base_bdevs": 2, 00:36:14.089 "num_base_bdevs_discovered": 1, 00:36:14.089 "num_base_bdevs_operational": 2, 00:36:14.089 "base_bdevs_list": [ 00:36:14.089 { 00:36:14.089 "name": "BaseBdev1", 00:36:14.089 "uuid": "ade49dc2-f4d3-4b05-9162-baf634cab3f4", 00:36:14.089 "is_configured": true, 00:36:14.089 "data_offset": 256, 00:36:14.089 "data_size": 7936 00:36:14.089 }, 00:36:14.089 { 00:36:14.089 "name": "BaseBdev2", 00:36:14.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:14.089 "is_configured": false, 00:36:14.089 "data_offset": 0, 00:36:14.089 "data_size": 0 00:36:14.089 } 00:36:14.089 ] 00:36:14.089 }' 00:36:14.089 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:14.089 11:47:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:14.657 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:36:14.916 [2024-07-13 11:47:49.551291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:14.916 [2024-07-13 11:47:49.551519] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:36:14.916 [2024-07-13 11:47:49.551535] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:14.916 [2024-07-13 11:47:49.551657] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:36:14.916 BaseBdev2 00:36:14.916 [2024-07-13 11:47:49.551987] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:36:14.916 [2024-07-13 11:47:49.552002] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:36:14.916 [2024-07-13 11:47:49.552150] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:14.916 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:36:14.916 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:36:14.916 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:36:14.916 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:36:14.916 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:36:14.916 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:36:14.916 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:15.174 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:15.175 [ 00:36:15.175 { 00:36:15.175 "name": "BaseBdev2", 00:36:15.175 "aliases": [ 00:36:15.175 "5e495e9e-8c5b-4ae1-8b0c-2d073e2d551b" 00:36:15.175 ], 00:36:15.175 "product_name": "Malloc disk", 00:36:15.175 "block_size": 4096, 00:36:15.175 "num_blocks": 8192, 00:36:15.175 "uuid": "5e495e9e-8c5b-4ae1-8b0c-2d073e2d551b", 00:36:15.175 "assigned_rate_limits": { 00:36:15.175 "rw_ios_per_sec": 0, 00:36:15.175 "rw_mbytes_per_sec": 0, 00:36:15.175 "r_mbytes_per_sec": 0, 00:36:15.175 "w_mbytes_per_sec": 0 00:36:15.175 }, 00:36:15.175 "claimed": true, 00:36:15.175 "claim_type": "exclusive_write", 00:36:15.175 "zoned": false, 00:36:15.175 "supported_io_types": { 00:36:15.175 "read": true, 00:36:15.175 "write": true, 00:36:15.175 "unmap": true, 00:36:15.175 "flush": true, 00:36:15.175 "reset": true, 00:36:15.175 "nvme_admin": false, 00:36:15.175 "nvme_io": false, 00:36:15.175 "nvme_io_md": false, 00:36:15.175 "write_zeroes": true, 00:36:15.175 "zcopy": true, 00:36:15.175 "get_zone_info": false, 00:36:15.175 "zone_management": false, 00:36:15.175 "zone_append": false, 00:36:15.175 "compare": false, 00:36:15.175 "compare_and_write": false, 00:36:15.175 "abort": true, 00:36:15.175 "seek_hole": false, 00:36:15.175 "seek_data": false, 00:36:15.175 "copy": true, 00:36:15.175 "nvme_iov_md": false 00:36:15.175 }, 00:36:15.175 "memory_domains": [ 00:36:15.175 { 00:36:15.175 "dma_device_id": "system", 00:36:15.175 "dma_device_type": 1 00:36:15.175 }, 00:36:15.175 { 00:36:15.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:15.175 "dma_device_type": 2 00:36:15.175 } 00:36:15.175 ], 00:36:15.175 "driver_specific": {} 00:36:15.175 } 00:36:15.175 ] 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:15.175 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:15.433 11:47:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:15.433 11:47:49 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:15.433 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:15.433 "name": "Existed_Raid", 00:36:15.433 "uuid": "8c195d52-f66c-44f3-b3d6-2cbb137b2b77", 00:36:15.433 "strip_size_kb": 0, 00:36:15.433 "state": "online", 00:36:15.433 "raid_level": "raid1", 00:36:15.433 "superblock": true, 00:36:15.433 "num_base_bdevs": 2, 00:36:15.433 "num_base_bdevs_discovered": 2, 00:36:15.433 "num_base_bdevs_operational": 2, 00:36:15.433 "base_bdevs_list": [ 00:36:15.433 { 00:36:15.433 "name": "BaseBdev1", 00:36:15.433 "uuid": "ade49dc2-f4d3-4b05-9162-baf634cab3f4", 00:36:15.433 "is_configured": true, 00:36:15.433 "data_offset": 256, 00:36:15.434 "data_size": 7936 00:36:15.434 }, 00:36:15.434 { 00:36:15.434 "name": "BaseBdev2", 00:36:15.434 "uuid": "5e495e9e-8c5b-4ae1-8b0c-2d073e2d551b", 00:36:15.434 "is_configured": true, 00:36:15.434 "data_offset": 256, 00:36:15.434 "data_size": 7936 00:36:15.434 } 00:36:15.434 ] 00:36:15.434 }' 00:36:15.434 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:15.434 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.369 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:36:16.369 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:36:16.369 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:16.369 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:16.369 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:16.369 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:36:16.369 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:16.369 11:47:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:16.369 [2024-07-13 11:47:51.043739] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:16.369 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:16.369 "name": "Existed_Raid", 00:36:16.369 "aliases": [ 00:36:16.369 "8c195d52-f66c-44f3-b3d6-2cbb137b2b77" 00:36:16.369 ], 00:36:16.369 "product_name": "Raid Volume", 00:36:16.369 "block_size": 4096, 00:36:16.369 "num_blocks": 7936, 00:36:16.369 "uuid": "8c195d52-f66c-44f3-b3d6-2cbb137b2b77", 00:36:16.369 "assigned_rate_limits": { 00:36:16.369 "rw_ios_per_sec": 0, 00:36:16.369 "rw_mbytes_per_sec": 0, 00:36:16.369 "r_mbytes_per_sec": 0, 00:36:16.369 "w_mbytes_per_sec": 0 00:36:16.369 }, 00:36:16.369 "claimed": false, 00:36:16.369 "zoned": false, 00:36:16.369 "supported_io_types": { 00:36:16.369 "read": true, 00:36:16.369 "write": true, 00:36:16.369 "unmap": false, 00:36:16.369 "flush": false, 00:36:16.369 "reset": true, 00:36:16.369 "nvme_admin": false, 00:36:16.369 "nvme_io": false, 00:36:16.369 "nvme_io_md": false, 00:36:16.369 "write_zeroes": true, 00:36:16.369 "zcopy": false, 00:36:16.369 "get_zone_info": false, 00:36:16.369 "zone_management": false, 00:36:16.370 
"zone_append": false, 00:36:16.370 "compare": false, 00:36:16.370 "compare_and_write": false, 00:36:16.370 "abort": false, 00:36:16.370 "seek_hole": false, 00:36:16.370 "seek_data": false, 00:36:16.370 "copy": false, 00:36:16.370 "nvme_iov_md": false 00:36:16.370 }, 00:36:16.370 "memory_domains": [ 00:36:16.370 { 00:36:16.370 "dma_device_id": "system", 00:36:16.370 "dma_device_type": 1 00:36:16.370 }, 00:36:16.370 { 00:36:16.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.370 "dma_device_type": 2 00:36:16.370 }, 00:36:16.370 { 00:36:16.370 "dma_device_id": "system", 00:36:16.370 "dma_device_type": 1 00:36:16.370 }, 00:36:16.370 { 00:36:16.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.370 "dma_device_type": 2 00:36:16.370 } 00:36:16.370 ], 00:36:16.370 "driver_specific": { 00:36:16.370 "raid": { 00:36:16.370 "uuid": "8c195d52-f66c-44f3-b3d6-2cbb137b2b77", 00:36:16.370 "strip_size_kb": 0, 00:36:16.370 "state": "online", 00:36:16.370 "raid_level": "raid1", 00:36:16.370 "superblock": true, 00:36:16.370 "num_base_bdevs": 2, 00:36:16.370 "num_base_bdevs_discovered": 2, 00:36:16.370 "num_base_bdevs_operational": 2, 00:36:16.370 "base_bdevs_list": [ 00:36:16.370 { 00:36:16.370 "name": "BaseBdev1", 00:36:16.370 "uuid": "ade49dc2-f4d3-4b05-9162-baf634cab3f4", 00:36:16.370 "is_configured": true, 00:36:16.370 "data_offset": 256, 00:36:16.370 "data_size": 7936 00:36:16.370 }, 00:36:16.370 { 00:36:16.370 "name": "BaseBdev2", 00:36:16.370 "uuid": "5e495e9e-8c5b-4ae1-8b0c-2d073e2d551b", 00:36:16.370 "is_configured": true, 00:36:16.370 "data_offset": 256, 00:36:16.370 "data_size": 7936 00:36:16.370 } 00:36:16.370 ] 00:36:16.370 } 00:36:16.370 } 00:36:16.370 }' 00:36:16.370 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:16.370 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:36:16.370 BaseBdev2' 00:36:16.370 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:16.370 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:36:16.370 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:16.629 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:16.629 "name": "BaseBdev1", 00:36:16.629 "aliases": [ 00:36:16.629 "ade49dc2-f4d3-4b05-9162-baf634cab3f4" 00:36:16.629 ], 00:36:16.629 "product_name": "Malloc disk", 00:36:16.629 "block_size": 4096, 00:36:16.629 "num_blocks": 8192, 00:36:16.629 "uuid": "ade49dc2-f4d3-4b05-9162-baf634cab3f4", 00:36:16.629 "assigned_rate_limits": { 00:36:16.629 "rw_ios_per_sec": 0, 00:36:16.629 "rw_mbytes_per_sec": 0, 00:36:16.629 "r_mbytes_per_sec": 0, 00:36:16.629 "w_mbytes_per_sec": 0 00:36:16.629 }, 00:36:16.629 "claimed": true, 00:36:16.629 "claim_type": "exclusive_write", 00:36:16.629 "zoned": false, 00:36:16.629 "supported_io_types": { 00:36:16.629 "read": true, 00:36:16.629 "write": true, 00:36:16.629 "unmap": true, 00:36:16.629 "flush": true, 00:36:16.629 "reset": true, 00:36:16.629 "nvme_admin": false, 00:36:16.629 "nvme_io": false, 00:36:16.629 "nvme_io_md": false, 00:36:16.629 "write_zeroes": true, 00:36:16.629 "zcopy": true, 00:36:16.629 "get_zone_info": false, 00:36:16.629 "zone_management": false, 
00:36:16.629 "zone_append": false, 00:36:16.629 "compare": false, 00:36:16.629 "compare_and_write": false, 00:36:16.629 "abort": true, 00:36:16.629 "seek_hole": false, 00:36:16.629 "seek_data": false, 00:36:16.629 "copy": true, 00:36:16.629 "nvme_iov_md": false 00:36:16.629 }, 00:36:16.629 "memory_domains": [ 00:36:16.629 { 00:36:16.629 "dma_device_id": "system", 00:36:16.629 "dma_device_type": 1 00:36:16.629 }, 00:36:16.629 { 00:36:16.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.629 "dma_device_type": 2 00:36:16.629 } 00:36:16.629 ], 00:36:16.629 "driver_specific": {} 00:36:16.629 }' 00:36:16.629 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:16.888 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:16.888 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:16.888 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:16.888 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:16.888 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:16.888 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:16.888 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:16.888 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:16.888 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:17.147 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:17.147 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:17.147 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:17.147 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:17.147 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:17.406 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:17.406 "name": "BaseBdev2", 00:36:17.406 "aliases": [ 00:36:17.406 "5e495e9e-8c5b-4ae1-8b0c-2d073e2d551b" 00:36:17.406 ], 00:36:17.406 "product_name": "Malloc disk", 00:36:17.406 "block_size": 4096, 00:36:17.406 "num_blocks": 8192, 00:36:17.406 "uuid": "5e495e9e-8c5b-4ae1-8b0c-2d073e2d551b", 00:36:17.406 "assigned_rate_limits": { 00:36:17.406 "rw_ios_per_sec": 0, 00:36:17.406 "rw_mbytes_per_sec": 0, 00:36:17.406 "r_mbytes_per_sec": 0, 00:36:17.406 "w_mbytes_per_sec": 0 00:36:17.406 }, 00:36:17.406 "claimed": true, 00:36:17.406 "claim_type": "exclusive_write", 00:36:17.406 "zoned": false, 00:36:17.406 "supported_io_types": { 00:36:17.406 "read": true, 00:36:17.406 "write": true, 00:36:17.406 "unmap": true, 00:36:17.406 "flush": true, 00:36:17.406 "reset": true, 00:36:17.406 "nvme_admin": false, 00:36:17.406 "nvme_io": false, 00:36:17.406 "nvme_io_md": false, 00:36:17.406 "write_zeroes": true, 00:36:17.406 "zcopy": true, 00:36:17.406 "get_zone_info": false, 00:36:17.406 "zone_management": false, 00:36:17.406 "zone_append": false, 00:36:17.406 "compare": false, 00:36:17.406 "compare_and_write": 
false, 00:36:17.406 "abort": true, 00:36:17.406 "seek_hole": false, 00:36:17.406 "seek_data": false, 00:36:17.406 "copy": true, 00:36:17.406 "nvme_iov_md": false 00:36:17.406 }, 00:36:17.406 "memory_domains": [ 00:36:17.406 { 00:36:17.406 "dma_device_id": "system", 00:36:17.406 "dma_device_type": 1 00:36:17.406 }, 00:36:17.406 { 00:36:17.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.406 "dma_device_type": 2 00:36:17.406 } 00:36:17.406 ], 00:36:17.406 "driver_specific": {} 00:36:17.406 }' 00:36:17.406 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:17.406 11:47:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:17.406 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:17.406 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:17.406 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:17.406 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:17.406 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:17.665 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:17.665 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:17.665 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:17.665 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:17.665 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:17.665 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:17.924 [2024-07-13 11:47:52.519852] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:17.924 11:47:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.924 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:18.183 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:18.183 "name": "Existed_Raid", 00:36:18.183 "uuid": "8c195d52-f66c-44f3-b3d6-2cbb137b2b77", 00:36:18.183 "strip_size_kb": 0, 00:36:18.183 "state": "online", 00:36:18.183 "raid_level": "raid1", 00:36:18.183 "superblock": true, 00:36:18.183 "num_base_bdevs": 2, 00:36:18.183 "num_base_bdevs_discovered": 1, 00:36:18.183 "num_base_bdevs_operational": 1, 00:36:18.183 "base_bdevs_list": [ 00:36:18.183 { 00:36:18.183 "name": null, 00:36:18.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.183 "is_configured": false, 00:36:18.183 "data_offset": 256, 00:36:18.183 "data_size": 7936 00:36:18.183 }, 00:36:18.183 { 00:36:18.183 "name": "BaseBdev2", 00:36:18.183 "uuid": "5e495e9e-8c5b-4ae1-8b0c-2d073e2d551b", 00:36:18.183 "is_configured": true, 00:36:18.183 "data_offset": 256, 00:36:18.183 "data_size": 7936 00:36:18.183 } 00:36:18.183 ] 00:36:18.183 }' 00:36:18.183 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:18.183 11:47:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:19.116 11:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:36:19.116 11:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:19.116 11:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:19.116 11:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:19.116 11:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:19.117 11:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:19.117 11:47:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:19.374 [2024-07-13 11:47:54.016246] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:19.374 [2024-07-13 11:47:54.016368] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:19.374 [2024-07-13 11:47:54.080913] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:19.374 [2024-07-13 11:47:54.080979] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:19.374 [2024-07-13 11:47:54.080991] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:36:19.374 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:19.374 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:19.374 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:19.374 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 160281 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 160281 ']' 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 160281 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160281 00:36:19.632 killing process with pid 160281 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160281' 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 160281 00:36:19.632 11:47:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 160281 00:36:19.632 [2024-07-13 11:47:54.363239] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:19.632 [2024-07-13 11:47:54.363366] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:21.009 ************************************ 00:36:21.009 END TEST raid_state_function_test_sb_4k 00:36:21.009 ************************************ 00:36:21.009 11:47:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:36:21.009 00:36:21.009 real 0m11.271s 00:36:21.009 user 0m20.118s 00:36:21.009 sys 0m1.258s 00:36:21.009 11:47:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:21.009 11:47:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.009 11:47:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:21.009 11:47:55 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:36:21.009 11:47:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:36:21.009 11:47:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:21.009 11:47:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:21.009 ************************************ 00:36:21.009 START TEST raid_superblock_test_4k 00:36:21.009 ************************************ 
00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=160662 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 160662 /var/tmp/spdk-raid.sock 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 160662 ']' 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:21.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.009 11:47:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:21.009 [2024-07-13 11:47:55.472935] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:36:21.009 [2024-07-13 11:47:55.473368] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160662 ] 00:36:21.009 [2024-07-13 11:47:55.646026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.268 [2024-07-13 11:47:55.863498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.527 [2024-07-13 11:47:56.050761] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:21.786 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:36:22.045 malloc1 00:36:22.045 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:22.304 [2024-07-13 11:47:56.805642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:22.304 [2024-07-13 11:47:56.805874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:22.304 [2024-07-13 11:47:56.805941] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:36:22.304 [2024-07-13 11:47:56.806211] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:22.304 [2024-07-13 11:47:56.808466] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:22.304 [2024-07-13 11:47:56.808630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:22.304 pt1 00:36:22.304 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:22.304 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:22.304 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:36:22.304 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:36:22.304 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:22.304 11:47:56 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:22.304 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:22.304 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:22.304 11:47:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:36:22.304 malloc2 00:36:22.304 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:22.563 [2024-07-13 11:47:57.228157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:22.563 [2024-07-13 11:47:57.228375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:22.563 [2024-07-13 11:47:57.228516] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:36:22.563 [2024-07-13 11:47:57.228628] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:22.563 [2024-07-13 11:47:57.230502] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:22.563 [2024-07-13 11:47:57.230660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:22.563 pt2 00:36:22.563 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:22.563 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:22.563 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:36:22.821 [2024-07-13 11:47:57.436211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:22.821 [2024-07-13 11:47:57.437885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:22.821 [2024-07-13 11:47:57.438214] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:36:22.821 [2024-07-13 11:47:57.438329] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:22.821 [2024-07-13 11:47:57.438488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:36:22.821 [2024-07-13 11:47:57.438919] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:36:22.822 [2024-07-13 11:47:57.439023] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:36:22.822 [2024-07-13 11:47:57.439258] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.822 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.080 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:23.080 "name": "raid_bdev1", 00:36:23.080 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:23.080 "strip_size_kb": 0, 00:36:23.080 "state": "online", 00:36:23.080 "raid_level": "raid1", 00:36:23.080 "superblock": true, 00:36:23.080 "num_base_bdevs": 2, 00:36:23.080 "num_base_bdevs_discovered": 2, 00:36:23.080 "num_base_bdevs_operational": 2, 00:36:23.080 "base_bdevs_list": [ 00:36:23.080 { 00:36:23.080 "name": "pt1", 00:36:23.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:23.080 "is_configured": true, 00:36:23.080 "data_offset": 256, 00:36:23.080 "data_size": 7936 00:36:23.080 }, 00:36:23.080 { 00:36:23.080 "name": "pt2", 00:36:23.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:23.080 "is_configured": true, 00:36:23.080 "data_offset": 256, 00:36:23.080 "data_size": 7936 00:36:23.080 } 00:36:23.080 ] 00:36:23.080 }' 00:36:23.080 11:47:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:23.080 11:47:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.647 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:36:23.647 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:23.647 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:23.647 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:23.647 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:23.647 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:36:23.647 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:23.647 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:23.906 [2024-07-13 11:47:58.576562] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:23.906 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:23.906 "name": "raid_bdev1", 00:36:23.906 "aliases": [ 00:36:23.906 "e807a08c-98c6-40a5-b0e7-73549832bf21" 00:36:23.906 ], 00:36:23.906 "product_name": "Raid Volume", 00:36:23.906 "block_size": 4096, 00:36:23.906 "num_blocks": 7936, 00:36:23.906 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:23.906 "assigned_rate_limits": { 00:36:23.906 
"rw_ios_per_sec": 0, 00:36:23.906 "rw_mbytes_per_sec": 0, 00:36:23.906 "r_mbytes_per_sec": 0, 00:36:23.906 "w_mbytes_per_sec": 0 00:36:23.906 }, 00:36:23.906 "claimed": false, 00:36:23.906 "zoned": false, 00:36:23.906 "supported_io_types": { 00:36:23.906 "read": true, 00:36:23.906 "write": true, 00:36:23.906 "unmap": false, 00:36:23.906 "flush": false, 00:36:23.906 "reset": true, 00:36:23.906 "nvme_admin": false, 00:36:23.906 "nvme_io": false, 00:36:23.906 "nvme_io_md": false, 00:36:23.906 "write_zeroes": true, 00:36:23.906 "zcopy": false, 00:36:23.906 "get_zone_info": false, 00:36:23.906 "zone_management": false, 00:36:23.906 "zone_append": false, 00:36:23.906 "compare": false, 00:36:23.906 "compare_and_write": false, 00:36:23.906 "abort": false, 00:36:23.906 "seek_hole": false, 00:36:23.906 "seek_data": false, 00:36:23.906 "copy": false, 00:36:23.906 "nvme_iov_md": false 00:36:23.906 }, 00:36:23.906 "memory_domains": [ 00:36:23.906 { 00:36:23.906 "dma_device_id": "system", 00:36:23.906 "dma_device_type": 1 00:36:23.906 }, 00:36:23.906 { 00:36:23.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:23.906 "dma_device_type": 2 00:36:23.906 }, 00:36:23.906 { 00:36:23.906 "dma_device_id": "system", 00:36:23.906 "dma_device_type": 1 00:36:23.906 }, 00:36:23.906 { 00:36:23.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:23.906 "dma_device_type": 2 00:36:23.906 } 00:36:23.906 ], 00:36:23.906 "driver_specific": { 00:36:23.906 "raid": { 00:36:23.906 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:23.906 "strip_size_kb": 0, 00:36:23.906 "state": "online", 00:36:23.906 "raid_level": "raid1", 00:36:23.906 "superblock": true, 00:36:23.906 "num_base_bdevs": 2, 00:36:23.906 "num_base_bdevs_discovered": 2, 00:36:23.906 "num_base_bdevs_operational": 2, 00:36:23.906 "base_bdevs_list": [ 00:36:23.906 { 00:36:23.906 "name": "pt1", 00:36:23.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:23.906 "is_configured": true, 00:36:23.906 "data_offset": 256, 00:36:23.906 "data_size": 7936 00:36:23.906 }, 00:36:23.906 { 00:36:23.906 "name": "pt2", 00:36:23.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:23.906 "is_configured": true, 00:36:23.906 "data_offset": 256, 00:36:23.906 "data_size": 7936 00:36:23.906 } 00:36:23.906 ] 00:36:23.906 } 00:36:23.906 } 00:36:23.906 }' 00:36:23.906 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:23.906 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:23.906 pt2' 00:36:23.906 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:23.906 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:23.906 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:24.165 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:24.165 "name": "pt1", 00:36:24.165 "aliases": [ 00:36:24.165 "00000000-0000-0000-0000-000000000001" 00:36:24.165 ], 00:36:24.165 "product_name": "passthru", 00:36:24.165 "block_size": 4096, 00:36:24.165 "num_blocks": 8192, 00:36:24.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:24.165 "assigned_rate_limits": { 00:36:24.165 "rw_ios_per_sec": 0, 00:36:24.165 "rw_mbytes_per_sec": 0, 00:36:24.165 "r_mbytes_per_sec": 0, 00:36:24.165 
"w_mbytes_per_sec": 0 00:36:24.165 }, 00:36:24.165 "claimed": true, 00:36:24.165 "claim_type": "exclusive_write", 00:36:24.165 "zoned": false, 00:36:24.165 "supported_io_types": { 00:36:24.165 "read": true, 00:36:24.165 "write": true, 00:36:24.165 "unmap": true, 00:36:24.165 "flush": true, 00:36:24.165 "reset": true, 00:36:24.165 "nvme_admin": false, 00:36:24.165 "nvme_io": false, 00:36:24.165 "nvme_io_md": false, 00:36:24.165 "write_zeroes": true, 00:36:24.165 "zcopy": true, 00:36:24.165 "get_zone_info": false, 00:36:24.165 "zone_management": false, 00:36:24.165 "zone_append": false, 00:36:24.165 "compare": false, 00:36:24.165 "compare_and_write": false, 00:36:24.165 "abort": true, 00:36:24.165 "seek_hole": false, 00:36:24.165 "seek_data": false, 00:36:24.165 "copy": true, 00:36:24.165 "nvme_iov_md": false 00:36:24.165 }, 00:36:24.165 "memory_domains": [ 00:36:24.165 { 00:36:24.165 "dma_device_id": "system", 00:36:24.165 "dma_device_type": 1 00:36:24.165 }, 00:36:24.165 { 00:36:24.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:24.165 "dma_device_type": 2 00:36:24.165 } 00:36:24.165 ], 00:36:24.165 "driver_specific": { 00:36:24.165 "passthru": { 00:36:24.165 "name": "pt1", 00:36:24.165 "base_bdev_name": "malloc1" 00:36:24.165 } 00:36:24.165 } 00:36:24.165 }' 00:36:24.165 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:24.424 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:24.424 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:24.424 11:47:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:24.424 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:24.424 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:24.424 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:24.424 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:24.424 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:24.683 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:24.683 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:24.683 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:24.683 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:24.683 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:24.683 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:24.942 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:24.942 "name": "pt2", 00:36:24.942 "aliases": [ 00:36:24.942 "00000000-0000-0000-0000-000000000002" 00:36:24.942 ], 00:36:24.942 "product_name": "passthru", 00:36:24.942 "block_size": 4096, 00:36:24.942 "num_blocks": 8192, 00:36:24.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:24.942 "assigned_rate_limits": { 00:36:24.942 "rw_ios_per_sec": 0, 00:36:24.942 "rw_mbytes_per_sec": 0, 00:36:24.942 "r_mbytes_per_sec": 0, 00:36:24.942 "w_mbytes_per_sec": 0 00:36:24.942 }, 00:36:24.942 "claimed": true, 00:36:24.942 "claim_type": 
"exclusive_write", 00:36:24.942 "zoned": false, 00:36:24.942 "supported_io_types": { 00:36:24.942 "read": true, 00:36:24.942 "write": true, 00:36:24.942 "unmap": true, 00:36:24.942 "flush": true, 00:36:24.942 "reset": true, 00:36:24.942 "nvme_admin": false, 00:36:24.942 "nvme_io": false, 00:36:24.942 "nvme_io_md": false, 00:36:24.942 "write_zeroes": true, 00:36:24.942 "zcopy": true, 00:36:24.942 "get_zone_info": false, 00:36:24.942 "zone_management": false, 00:36:24.942 "zone_append": false, 00:36:24.942 "compare": false, 00:36:24.942 "compare_and_write": false, 00:36:24.942 "abort": true, 00:36:24.942 "seek_hole": false, 00:36:24.942 "seek_data": false, 00:36:24.942 "copy": true, 00:36:24.942 "nvme_iov_md": false 00:36:24.942 }, 00:36:24.942 "memory_domains": [ 00:36:24.942 { 00:36:24.942 "dma_device_id": "system", 00:36:24.942 "dma_device_type": 1 00:36:24.942 }, 00:36:24.942 { 00:36:24.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:24.942 "dma_device_type": 2 00:36:24.942 } 00:36:24.942 ], 00:36:24.942 "driver_specific": { 00:36:24.942 "passthru": { 00:36:24.942 "name": "pt2", 00:36:24.942 "base_bdev_name": "malloc2" 00:36:24.942 } 00:36:24.942 } 00:36:24.942 }' 00:36:24.942 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:24.942 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:24.942 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:24.942 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:25.202 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:25.202 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:25.202 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:25.202 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:25.202 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:25.202 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:25.202 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:25.461 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:25.461 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:36:25.461 11:47:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:25.461 [2024-07-13 11:48:00.152799] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:25.461 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=e807a08c-98c6-40a5-b0e7-73549832bf21 00:36:25.461 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z e807a08c-98c6-40a5-b0e7-73549832bf21 ']' 00:36:25.461 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:25.720 [2024-07-13 11:48:00.404657] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:25.720 [2024-07-13 11:48:00.404794] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:25.720 
[2024-07-13 11:48:00.404971] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:25.720 [2024-07-13 11:48:00.405108] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:25.720 [2024-07-13 11:48:00.405200] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:36:25.720 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:25.720 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:36:25.979 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:36:25.979 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:36:25.979 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:25.979 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:26.237 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:26.237 11:48:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:26.496 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:26.496 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:26.764 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:27.023 [2024-07-13 11:48:01.548788] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:27.023 [2024-07-13 11:48:01.550616] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:27.023 [2024-07-13 11:48:01.550685] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:27.023 [2024-07-13 11:48:01.550769] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:27.023 [2024-07-13 11:48:01.550806] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:27.023 [2024-07-13 11:48:01.550815] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:36:27.023 request: 00:36:27.023 { 00:36:27.023 "name": "raid_bdev1", 00:36:27.023 "raid_level": "raid1", 00:36:27.023 "base_bdevs": [ 00:36:27.023 "malloc1", 00:36:27.023 "malloc2" 00:36:27.023 ], 00:36:27.023 "superblock": false, 00:36:27.023 "method": "bdev_raid_create", 00:36:27.023 "req_id": 1 00:36:27.023 } 00:36:27.023 Got JSON-RPC error response 00:36:27.023 response: 00:36:27.023 { 00:36:27.023 "code": -17, 00:36:27.023 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:27.023 } 00:36:27.023 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:36:27.023 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:27.023 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:27.023 11:48:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:27.023 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.023 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:36:27.023 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:36:27.023 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:36:27.023 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:27.281 [2024-07-13 11:48:01.924820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:27.281 [2024-07-13 11:48:01.924878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:27.281 [2024-07-13 11:48:01.924911] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:36:27.281 [2024-07-13 11:48:01.924935] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:27.281 [2024-07-13 11:48:01.927182] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:27.281 [2024-07-13 11:48:01.927248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:27.281 [2024-07-13 11:48:01.927332] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:27.281 [2024-07-13 11:48:01.927381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:27.281 pt1 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.281 11:48:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.539 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:27.539 "name": "raid_bdev1", 00:36:27.539 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:27.539 "strip_size_kb": 0, 00:36:27.539 "state": "configuring", 00:36:27.539 "raid_level": "raid1", 00:36:27.539 "superblock": true, 00:36:27.539 "num_base_bdevs": 2, 00:36:27.539 "num_base_bdevs_discovered": 1, 00:36:27.539 "num_base_bdevs_operational": 2, 00:36:27.539 "base_bdevs_list": [ 00:36:27.539 { 00:36:27.539 "name": "pt1", 00:36:27.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:27.539 "is_configured": true, 00:36:27.539 "data_offset": 256, 00:36:27.539 "data_size": 7936 00:36:27.539 }, 00:36:27.539 { 00:36:27.539 "name": null, 00:36:27.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:27.539 "is_configured": false, 00:36:27.539 "data_offset": 256, 00:36:27.539 "data_size": 7936 00:36:27.539 } 00:36:27.539 ] 00:36:27.539 }' 00:36:27.539 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:27.539 11:48:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:28.108 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:36:28.108 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:36:28.108 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:36:28.108 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:28.365 [2024-07-13 11:48:02.904984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:28.365 [2024-07-13 11:48:02.905046] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:28.365 [2024-07-13 11:48:02.905076] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:36:28.365 [2024-07-13 11:48:02.905100] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:28.365 [2024-07-13 11:48:02.905491] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:28.365 [2024-07-13 11:48:02.905546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:28.365 [2024-07-13 11:48:02.905625] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:28.365 [2024-07-13 11:48:02.905654] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:28.365 [2024-07-13 11:48:02.905765] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:36:28.365 [2024-07-13 11:48:02.905786] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:28.365 [2024-07-13 11:48:02.905883] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:36:28.365 [2024-07-13 11:48:02.906185] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:36:28.365 [2024-07-13 11:48:02.906207] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:36:28.365 [2024-07-13 11:48:02.906321] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:28.365 pt2 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:28.365 11:48:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:28.622 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:28.622 "name": "raid_bdev1", 00:36:28.622 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:28.622 "strip_size_kb": 0, 00:36:28.622 "state": "online", 00:36:28.622 "raid_level": "raid1", 
00:36:28.622 "superblock": true, 00:36:28.622 "num_base_bdevs": 2, 00:36:28.622 "num_base_bdevs_discovered": 2, 00:36:28.622 "num_base_bdevs_operational": 2, 00:36:28.622 "base_bdevs_list": [ 00:36:28.622 { 00:36:28.622 "name": "pt1", 00:36:28.622 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:28.622 "is_configured": true, 00:36:28.622 "data_offset": 256, 00:36:28.622 "data_size": 7936 00:36:28.622 }, 00:36:28.622 { 00:36:28.622 "name": "pt2", 00:36:28.622 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:28.622 "is_configured": true, 00:36:28.622 "data_offset": 256, 00:36:28.622 "data_size": 7936 00:36:28.622 } 00:36:28.622 ] 00:36:28.622 }' 00:36:28.622 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:28.622 11:48:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:29.188 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:36:29.188 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:29.188 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:29.188 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:29.188 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:29.188 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:36:29.188 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:29.188 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:29.446 [2024-07-13 11:48:03.945514] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:29.446 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:29.446 "name": "raid_bdev1", 00:36:29.446 "aliases": [ 00:36:29.446 "e807a08c-98c6-40a5-b0e7-73549832bf21" 00:36:29.447 ], 00:36:29.447 "product_name": "Raid Volume", 00:36:29.447 "block_size": 4096, 00:36:29.447 "num_blocks": 7936, 00:36:29.447 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:29.447 "assigned_rate_limits": { 00:36:29.447 "rw_ios_per_sec": 0, 00:36:29.447 "rw_mbytes_per_sec": 0, 00:36:29.447 "r_mbytes_per_sec": 0, 00:36:29.447 "w_mbytes_per_sec": 0 00:36:29.447 }, 00:36:29.447 "claimed": false, 00:36:29.447 "zoned": false, 00:36:29.447 "supported_io_types": { 00:36:29.447 "read": true, 00:36:29.447 "write": true, 00:36:29.447 "unmap": false, 00:36:29.447 "flush": false, 00:36:29.447 "reset": true, 00:36:29.447 "nvme_admin": false, 00:36:29.447 "nvme_io": false, 00:36:29.447 "nvme_io_md": false, 00:36:29.447 "write_zeroes": true, 00:36:29.447 "zcopy": false, 00:36:29.447 "get_zone_info": false, 00:36:29.447 "zone_management": false, 00:36:29.447 "zone_append": false, 00:36:29.447 "compare": false, 00:36:29.447 "compare_and_write": false, 00:36:29.447 "abort": false, 00:36:29.447 "seek_hole": false, 00:36:29.447 "seek_data": false, 00:36:29.447 "copy": false, 00:36:29.447 "nvme_iov_md": false 00:36:29.447 }, 00:36:29.447 "memory_domains": [ 00:36:29.447 { 00:36:29.447 "dma_device_id": "system", 00:36:29.447 "dma_device_type": 1 00:36:29.447 }, 00:36:29.447 { 00:36:29.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:29.447 "dma_device_type": 2 00:36:29.447 }, 
00:36:29.447 { 00:36:29.447 "dma_device_id": "system", 00:36:29.447 "dma_device_type": 1 00:36:29.447 }, 00:36:29.447 { 00:36:29.447 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:29.447 "dma_device_type": 2 00:36:29.447 } 00:36:29.447 ], 00:36:29.447 "driver_specific": { 00:36:29.447 "raid": { 00:36:29.447 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:29.447 "strip_size_kb": 0, 00:36:29.447 "state": "online", 00:36:29.447 "raid_level": "raid1", 00:36:29.447 "superblock": true, 00:36:29.447 "num_base_bdevs": 2, 00:36:29.447 "num_base_bdevs_discovered": 2, 00:36:29.447 "num_base_bdevs_operational": 2, 00:36:29.447 "base_bdevs_list": [ 00:36:29.447 { 00:36:29.447 "name": "pt1", 00:36:29.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:29.447 "is_configured": true, 00:36:29.447 "data_offset": 256, 00:36:29.447 "data_size": 7936 00:36:29.447 }, 00:36:29.447 { 00:36:29.447 "name": "pt2", 00:36:29.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:29.447 "is_configured": true, 00:36:29.447 "data_offset": 256, 00:36:29.447 "data_size": 7936 00:36:29.447 } 00:36:29.447 ] 00:36:29.447 } 00:36:29.447 } 00:36:29.447 }' 00:36:29.447 11:48:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:29.447 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:29.447 pt2' 00:36:29.447 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:29.447 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:29.447 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:29.706 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:29.706 "name": "pt1", 00:36:29.706 "aliases": [ 00:36:29.706 "00000000-0000-0000-0000-000000000001" 00:36:29.706 ], 00:36:29.706 "product_name": "passthru", 00:36:29.706 "block_size": 4096, 00:36:29.706 "num_blocks": 8192, 00:36:29.706 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:29.706 "assigned_rate_limits": { 00:36:29.706 "rw_ios_per_sec": 0, 00:36:29.706 "rw_mbytes_per_sec": 0, 00:36:29.706 "r_mbytes_per_sec": 0, 00:36:29.706 "w_mbytes_per_sec": 0 00:36:29.706 }, 00:36:29.706 "claimed": true, 00:36:29.706 "claim_type": "exclusive_write", 00:36:29.706 "zoned": false, 00:36:29.706 "supported_io_types": { 00:36:29.706 "read": true, 00:36:29.706 "write": true, 00:36:29.706 "unmap": true, 00:36:29.706 "flush": true, 00:36:29.706 "reset": true, 00:36:29.706 "nvme_admin": false, 00:36:29.706 "nvme_io": false, 00:36:29.706 "nvme_io_md": false, 00:36:29.706 "write_zeroes": true, 00:36:29.706 "zcopy": true, 00:36:29.706 "get_zone_info": false, 00:36:29.706 "zone_management": false, 00:36:29.706 "zone_append": false, 00:36:29.706 "compare": false, 00:36:29.706 "compare_and_write": false, 00:36:29.706 "abort": true, 00:36:29.706 "seek_hole": false, 00:36:29.706 "seek_data": false, 00:36:29.706 "copy": true, 00:36:29.706 "nvme_iov_md": false 00:36:29.706 }, 00:36:29.706 "memory_domains": [ 00:36:29.706 { 00:36:29.706 "dma_device_id": "system", 00:36:29.706 "dma_device_type": 1 00:36:29.706 }, 00:36:29.706 { 00:36:29.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:29.706 "dma_device_type": 2 00:36:29.706 } 00:36:29.706 ], 00:36:29.706 "driver_specific": { 00:36:29.706 
"passthru": { 00:36:29.706 "name": "pt1", 00:36:29.706 "base_bdev_name": "malloc1" 00:36:29.706 } 00:36:29.706 } 00:36:29.706 }' 00:36:29.706 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:29.706 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:29.706 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:29.706 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:29.706 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:29.706 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:29.964 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:29.964 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:29.964 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:29.964 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:29.964 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:29.964 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:29.964 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:29.964 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:29.964 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:30.223 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:30.223 "name": "pt2", 00:36:30.223 "aliases": [ 00:36:30.223 "00000000-0000-0000-0000-000000000002" 00:36:30.223 ], 00:36:30.223 "product_name": "passthru", 00:36:30.223 "block_size": 4096, 00:36:30.223 "num_blocks": 8192, 00:36:30.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:30.223 "assigned_rate_limits": { 00:36:30.223 "rw_ios_per_sec": 0, 00:36:30.223 "rw_mbytes_per_sec": 0, 00:36:30.223 "r_mbytes_per_sec": 0, 00:36:30.223 "w_mbytes_per_sec": 0 00:36:30.223 }, 00:36:30.223 "claimed": true, 00:36:30.223 "claim_type": "exclusive_write", 00:36:30.223 "zoned": false, 00:36:30.223 "supported_io_types": { 00:36:30.223 "read": true, 00:36:30.223 "write": true, 00:36:30.223 "unmap": true, 00:36:30.223 "flush": true, 00:36:30.223 "reset": true, 00:36:30.223 "nvme_admin": false, 00:36:30.223 "nvme_io": false, 00:36:30.223 "nvme_io_md": false, 00:36:30.223 "write_zeroes": true, 00:36:30.223 "zcopy": true, 00:36:30.223 "get_zone_info": false, 00:36:30.223 "zone_management": false, 00:36:30.223 "zone_append": false, 00:36:30.223 "compare": false, 00:36:30.223 "compare_and_write": false, 00:36:30.223 "abort": true, 00:36:30.223 "seek_hole": false, 00:36:30.223 "seek_data": false, 00:36:30.223 "copy": true, 00:36:30.223 "nvme_iov_md": false 00:36:30.223 }, 00:36:30.223 "memory_domains": [ 00:36:30.223 { 00:36:30.223 "dma_device_id": "system", 00:36:30.223 "dma_device_type": 1 00:36:30.223 }, 00:36:30.223 { 00:36:30.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:30.223 "dma_device_type": 2 00:36:30.223 } 00:36:30.223 ], 00:36:30.223 "driver_specific": { 00:36:30.223 "passthru": { 00:36:30.223 "name": "pt2", 00:36:30.223 "base_bdev_name": "malloc2" 00:36:30.223 } 
00:36:30.223 } 00:36:30.223 }' 00:36:30.223 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:30.223 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:30.223 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:30.223 11:48:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:30.481 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:30.481 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:36:30.481 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:30.481 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:30.481 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:36:30.481 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:30.481 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:30.481 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:36:30.739 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:30.739 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:36:30.739 [2024-07-13 11:48:05.481781] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:30.739 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' e807a08c-98c6-40a5-b0e7-73549832bf21 '!=' e807a08c-98c6-40a5-b0e7-73549832bf21 ']' 00:36:30.739 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:36:30.739 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:30.739 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:36:30.739 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:30.998 [2024-07-13 11:48:05.741678] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 
00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:31.256 "name": "raid_bdev1", 00:36:31.256 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:31.256 "strip_size_kb": 0, 00:36:31.256 "state": "online", 00:36:31.256 "raid_level": "raid1", 00:36:31.256 "superblock": true, 00:36:31.256 "num_base_bdevs": 2, 00:36:31.256 "num_base_bdevs_discovered": 1, 00:36:31.256 "num_base_bdevs_operational": 1, 00:36:31.256 "base_bdevs_list": [ 00:36:31.256 { 00:36:31.256 "name": null, 00:36:31.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:31.256 "is_configured": false, 00:36:31.256 "data_offset": 256, 00:36:31.256 "data_size": 7936 00:36:31.256 }, 00:36:31.256 { 00:36:31.256 "name": "pt2", 00:36:31.256 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:31.256 "is_configured": true, 00:36:31.256 "data_offset": 256, 00:36:31.256 "data_size": 7936 00:36:31.256 } 00:36:31.256 ] 00:36:31.256 }' 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:31.256 11:48:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:32.192 11:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:32.192 [2024-07-13 11:48:06.873898] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:32.192 [2024-07-13 11:48:06.873931] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:32.192 [2024-07-13 11:48:06.874008] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:32.192 [2024-07-13 11:48:06.874094] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:32.192 [2024-07-13 11:48:06.874114] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:36:32.192 11:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:32.192 11:48:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:36:32.450 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:36:32.450 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:36:32.450 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:36:32.450 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:36:32.451 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:32.708 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:36:32.708 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:36:32.708 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:36:32.708 11:48:07 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:36:32.708 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:36:32.708 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:32.967 [2024-07-13 11:48:07.521919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:32.967 [2024-07-13 11:48:07.522030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:32.967 [2024-07-13 11:48:07.522060] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:36:32.967 [2024-07-13 11:48:07.522086] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:32.967 [2024-07-13 11:48:07.524289] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:32.967 [2024-07-13 11:48:07.524368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:32.967 [2024-07-13 11:48:07.524483] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:32.967 [2024-07-13 11:48:07.524580] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:32.967 [2024-07-13 11:48:07.524699] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:36:32.967 [2024-07-13 11:48:07.524724] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:32.967 [2024-07-13 11:48:07.524817] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:32.967 [2024-07-13 11:48:07.525137] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:36:32.967 [2024-07-13 11:48:07.525181] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:36:32.967 [2024-07-13 11:48:07.525326] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:32.967 pt2 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:32.967 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:33.225 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:33.225 "name": "raid_bdev1", 00:36:33.225 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:33.225 "strip_size_kb": 0, 00:36:33.225 "state": "online", 00:36:33.225 "raid_level": "raid1", 00:36:33.225 "superblock": true, 00:36:33.225 "num_base_bdevs": 2, 00:36:33.225 "num_base_bdevs_discovered": 1, 00:36:33.225 "num_base_bdevs_operational": 1, 00:36:33.225 "base_bdevs_list": [ 00:36:33.225 { 00:36:33.225 "name": null, 00:36:33.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:33.225 "is_configured": false, 00:36:33.225 "data_offset": 256, 00:36:33.225 "data_size": 7936 00:36:33.225 }, 00:36:33.225 { 00:36:33.225 "name": "pt2", 00:36:33.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:33.225 "is_configured": true, 00:36:33.225 "data_offset": 256, 00:36:33.225 "data_size": 7936 00:36:33.225 } 00:36:33.225 ] 00:36:33.225 }' 00:36:33.225 11:48:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:33.225 11:48:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:33.791 11:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:34.057 [2024-07-13 11:48:08.771054] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:34.057 [2024-07-13 11:48:08.771084] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:34.057 [2024-07-13 11:48:08.771148] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:34.057 [2024-07-13 11:48:08.771196] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:34.057 [2024-07-13 11:48:08.771207] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:36:34.057 11:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.057 11:48:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:36:34.326 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:36:34.326 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:36:34.326 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:36:34.326 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:34.593 [2024-07-13 11:48:09.295131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:34.593 [2024-07-13 11:48:09.295195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:34.593 [2024-07-13 11:48:09.295263] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:34.593 [2024-07-13 11:48:09.295283] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:34.593 [2024-07-13 11:48:09.297514] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:34.593 
[2024-07-13 11:48:09.297571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:34.593 [2024-07-13 11:48:09.297660] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:34.593 [2024-07-13 11:48:09.297707] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:34.593 [2024-07-13 11:48:09.297831] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:34.593 [2024-07-13 11:48:09.297845] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:34.593 [2024-07-13 11:48:09.297857] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:36:34.593 [2024-07-13 11:48:09.297911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:34.593 [2024-07-13 11:48:09.297983] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:36:34.593 [2024-07-13 11:48:09.297996] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:34.593 [2024-07-13 11:48:09.298111] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:34.593 [2024-07-13 11:48:09.298422] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:36:34.593 [2024-07-13 11:48:09.298447] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:36:34.593 [2024-07-13 11:48:09.298593] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:34.593 pt1 00:36:34.593 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:36:34.593 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:34.593 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:34.593 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:34.593 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:34.594 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:34.594 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:34.594 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:34.594 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:34.594 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:34.594 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:34.594 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.594 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.851 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:34.851 "name": "raid_bdev1", 00:36:34.851 "uuid": "e807a08c-98c6-40a5-b0e7-73549832bf21", 00:36:34.851 "strip_size_kb": 0, 00:36:34.851 "state": "online", 00:36:34.851 
"raid_level": "raid1", 00:36:34.851 "superblock": true, 00:36:34.851 "num_base_bdevs": 2, 00:36:34.851 "num_base_bdevs_discovered": 1, 00:36:34.851 "num_base_bdevs_operational": 1, 00:36:34.851 "base_bdevs_list": [ 00:36:34.851 { 00:36:34.851 "name": null, 00:36:34.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:34.851 "is_configured": false, 00:36:34.851 "data_offset": 256, 00:36:34.851 "data_size": 7936 00:36:34.851 }, 00:36:34.851 { 00:36:34.851 "name": "pt2", 00:36:34.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:34.851 "is_configured": true, 00:36:34.851 "data_offset": 256, 00:36:34.851 "data_size": 7936 00:36:34.851 } 00:36:34.851 ] 00:36:34.851 }' 00:36:34.851 11:48:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:34.851 11:48:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:35.787 11:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:36:35.787 11:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:35.787 11:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:36:35.787 11:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:35.787 11:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:36:36.046 [2024-07-13 11:48:10.643642] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' e807a08c-98c6-40a5-b0e7-73549832bf21 '!=' e807a08c-98c6-40a5-b0e7-73549832bf21 ']' 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 160662 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 160662 ']' 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 160662 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 160662 00:36:36.046 killing process with pid 160662 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 160662' 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 160662 00:36:36.046 11:48:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 160662 00:36:36.046 [2024-07-13 11:48:10.681069] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:36.046 [2024-07-13 11:48:10.681151] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:36.046 [2024-07-13 11:48:10.681198] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:36:36.046 [2024-07-13 11:48:10.681213] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:36:36.304 [2024-07-13 11:48:10.808124] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:37.241 ************************************ 00:36:37.241 END TEST raid_superblock_test_4k 00:36:37.241 ************************************ 00:36:37.241 11:48:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:36:37.241 00:36:37.241 real 0m16.326s 00:36:37.241 user 0m30.049s 00:36:37.241 sys 0m1.958s 00:36:37.241 11:48:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:37.241 11:48:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:37.241 11:48:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:36:37.241 11:48:11 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:36:37.241 11:48:11 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:36:37.241 11:48:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:36:37.241 11:48:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:37.241 11:48:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:37.241 ************************************ 00:36:37.241 START TEST raid_rebuild_test_sb_4k 00:36:37.241 ************************************ 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:36:37.241 11:48:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=161219 00:36:37.241 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 161219 /var/tmp/spdk-raid.sock 00:36:37.242 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:37.242 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 161219 ']' 00:36:37.242 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:37.242 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:37.242 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:37.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:37.242 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:37.242 11:48:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:37.242 [2024-07-13 11:48:11.859760] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:36:37.242 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:37.242 Zero copy mechanism will not be used. 
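The rebuild test above drives a standalone bdevperf process over an RPC socket rather than a full SPDK app. As a rough sketch only, assuming the repo layout shown in the log, the same setup could be reproduced by hand as follows; the polling loop is a simplified stand-in for the harness's waitforlisten helper, not the helper itself:

    # Start bdevperf as an RPC-driven daemon, using the exact flags from the log.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    "$SPDK"/build/examples/bdevperf -r "$SOCK" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Simplified wait: poll the socket until the RPC server answers; only then can
    # the bdev_malloc_create / bdev_passthru_create calls that follow be issued.
    until "$SPDK"/scripts/rpc.py -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done

Once the socket is up, the base bdevs are layered as the trace shows: a malloc bdev wrapped by a passthru bdev for each base device, plus a delay bdev under the spare.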
00:36:37.242 [2024-07-13 11:48:11.859956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161219 ] 00:36:37.501 [2024-07-13 11:48:12.022465] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.501 [2024-07-13 11:48:12.208113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.760 [2024-07-13 11:48:12.397633] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:38.326 11:48:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:38.326 11:48:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:36:38.326 11:48:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:38.326 11:48:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:36:38.326 BaseBdev1_malloc 00:36:38.583 11:48:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:38.583 [2024-07-13 11:48:13.302431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:38.583 [2024-07-13 11:48:13.302531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:38.583 [2024-07-13 11:48:13.302568] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:36:38.583 [2024-07-13 11:48:13.302588] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:38.583 [2024-07-13 11:48:13.304928] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:38.583 [2024-07-13 11:48:13.304975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:38.583 BaseBdev1 00:36:38.583 11:48:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:38.583 11:48:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:36:38.840 BaseBdev2_malloc 00:36:38.840 11:48:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:39.098 [2024-07-13 11:48:13.784746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:39.098 [2024-07-13 11:48:13.784841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.098 [2024-07-13 11:48:13.784879] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:36:39.098 [2024-07-13 11:48:13.784899] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.098 [2024-07-13 11:48:13.787227] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.098 [2024-07-13 11:48:13.787275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:39.098 BaseBdev2 00:36:39.098 11:48:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:36:39.356 spare_malloc 00:36:39.356 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:39.615 spare_delay 00:36:39.615 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:39.872 [2024-07-13 11:48:14.398100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:39.872 [2024-07-13 11:48:14.398185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.872 [2024-07-13 11:48:14.398219] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:36:39.872 [2024-07-13 11:48:14.398244] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.872 [2024-07-13 11:48:14.400420] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.872 [2024-07-13 11:48:14.400476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:39.872 spare 00:36:39.872 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:36:40.130 [2024-07-13 11:48:14.638206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:40.130 [2024-07-13 11:48:14.640226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:40.130 [2024-07-13 11:48:14.640456] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:36:40.130 [2024-07-13 11:48:14.640495] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:40.130 [2024-07-13 11:48:14.640630] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:36:40.130 [2024-07-13 11:48:14.641130] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:36:40.130 [2024-07-13 11:48:14.641155] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:36:40.130 [2024-07-13 11:48:14.641351] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
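With both base bdevs and the delay-backed spare registered, the trace assembles the array with bdev_raid_create -s and then checks its state through bdev_raid_get_bdevs. A minimal sketch of that verification step, using only the commands and the jq filter visible in the log (the shell variable names here are illustrative):

    # Create the raid1 bdev with an on-disk superblock (-s) from the two passthru
    # base bdevs, then confirm it came online with both members discovered.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<<"$info") == "online" ]]
    [[ $(jq -r .num_base_bdevs_discovered <<<"$info") == "2" ]]

This mirrors the checks verify_raid_bdev_state performs in bdev_raid.sh, whose locals in the trace also cover raid_level, strip_size and the operational base bdev count.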
00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:40.130 "name": "raid_bdev1", 00:36:40.130 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:40.130 "strip_size_kb": 0, 00:36:40.130 "state": "online", 00:36:40.130 "raid_level": "raid1", 00:36:40.130 "superblock": true, 00:36:40.130 "num_base_bdevs": 2, 00:36:40.130 "num_base_bdevs_discovered": 2, 00:36:40.130 "num_base_bdevs_operational": 2, 00:36:40.130 "base_bdevs_list": [ 00:36:40.130 { 00:36:40.130 "name": "BaseBdev1", 00:36:40.130 "uuid": "afe1590e-b528-5f5b-9537-134d726a33ff", 00:36:40.130 "is_configured": true, 00:36:40.130 "data_offset": 256, 00:36:40.130 "data_size": 7936 00:36:40.130 }, 00:36:40.130 { 00:36:40.130 "name": "BaseBdev2", 00:36:40.130 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:40.130 "is_configured": true, 00:36:40.130 "data_offset": 256, 00:36:40.130 "data_size": 7936 00:36:40.130 } 00:36:40.130 ] 00:36:40.130 }' 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:40.130 11:48:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:40.697 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:40.697 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:36:40.955 [2024-07-13 11:48:15.694544] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:40.955 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:36:40.955 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.955 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:36:41.213 11:48:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:41.213 11:48:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:41.472 [2024-07-13 11:48:16.206447] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:36:41.472 /dev/nbd0 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:41.731 1+0 records in 00:36:41.731 1+0 records out 00:36:41.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187469 s, 21.8 MB/s 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:36:41.731 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:36:42.297 7936+0 records in 00:36:42.297 7936+0 records out 00:36:42.297 32505856 bytes (33 MB, 31 MiB) copied, 0.730118 s, 44.5 MB/s 00:36:42.297 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock 
/dev/nbd0 00:36:42.298 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:42.298 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:36:42.298 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:42.298 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:36:42.298 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:42.298 11:48:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:42.556 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:42.556 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:42.556 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:42.556 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:42.556 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:42.556 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:42.556 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:36:42.556 [2024-07-13 11:48:17.213369] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:42.815 [2024-07-13 11:48:17.557024] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:42.815 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:43.074 11:48:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:43.074 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:43.074 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:43.074 "name": "raid_bdev1", 00:36:43.074 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:43.074 "strip_size_kb": 0, 00:36:43.074 "state": "online", 00:36:43.074 "raid_level": "raid1", 00:36:43.074 "superblock": true, 00:36:43.074 "num_base_bdevs": 2, 00:36:43.074 "num_base_bdevs_discovered": 1, 00:36:43.074 "num_base_bdevs_operational": 1, 00:36:43.074 "base_bdevs_list": [ 00:36:43.074 { 00:36:43.074 "name": null, 00:36:43.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:43.074 "is_configured": false, 00:36:43.074 "data_offset": 256, 00:36:43.074 "data_size": 7936 00:36:43.074 }, 00:36:43.074 { 00:36:43.074 "name": "BaseBdev2", 00:36:43.074 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:43.074 "is_configured": true, 00:36:43.074 "data_offset": 256, 00:36:43.074 "data_size": 7936 00:36:43.074 } 00:36:43.074 ] 00:36:43.074 }' 00:36:43.074 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:43.074 11:48:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:44.010 11:48:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:44.011 [2024-07-13 11:48:18.681281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:44.011 [2024-07-13 11:48:18.694210] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ffd0 00:36:44.011 [2024-07-13 11:48:18.696111] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:44.011 11:48:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:45.387 "name": "raid_bdev1", 00:36:45.387 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:45.387 "strip_size_kb": 0, 00:36:45.387 "state": "online", 00:36:45.387 "raid_level": "raid1", 00:36:45.387 "superblock": true, 00:36:45.387 "num_base_bdevs": 2, 00:36:45.387 "num_base_bdevs_discovered": 2, 00:36:45.387 "num_base_bdevs_operational": 2, 00:36:45.387 "process": { 00:36:45.387 "type": "rebuild", 
00:36:45.387 "target": "spare", 00:36:45.387 "progress": { 00:36:45.387 "blocks": 3072, 00:36:45.387 "percent": 38 00:36:45.387 } 00:36:45.387 }, 00:36:45.387 "base_bdevs_list": [ 00:36:45.387 { 00:36:45.387 "name": "spare", 00:36:45.387 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:45.387 "is_configured": true, 00:36:45.387 "data_offset": 256, 00:36:45.387 "data_size": 7936 00:36:45.387 }, 00:36:45.387 { 00:36:45.387 "name": "BaseBdev2", 00:36:45.387 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:45.387 "is_configured": true, 00:36:45.387 "data_offset": 256, 00:36:45.387 "data_size": 7936 00:36:45.387 } 00:36:45.387 ] 00:36:45.387 }' 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:45.387 11:48:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:45.387 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:45.387 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:45.646 [2024-07-13 11:48:20.273840] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:45.646 [2024-07-13 11:48:20.306365] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:45.646 [2024-07-13 11:48:20.306439] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:45.646 [2024-07-13 11:48:20.306457] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:45.646 [2024-07-13 11:48:20.306465] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:45.646 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:45.904 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:45.904 "name": "raid_bdev1", 00:36:45.904 "uuid": 
"2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:45.904 "strip_size_kb": 0, 00:36:45.904 "state": "online", 00:36:45.904 "raid_level": "raid1", 00:36:45.904 "superblock": true, 00:36:45.904 "num_base_bdevs": 2, 00:36:45.904 "num_base_bdevs_discovered": 1, 00:36:45.904 "num_base_bdevs_operational": 1, 00:36:45.904 "base_bdevs_list": [ 00:36:45.904 { 00:36:45.904 "name": null, 00:36:45.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:45.904 "is_configured": false, 00:36:45.904 "data_offset": 256, 00:36:45.904 "data_size": 7936 00:36:45.904 }, 00:36:45.904 { 00:36:45.904 "name": "BaseBdev2", 00:36:45.904 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:45.904 "is_configured": true, 00:36:45.904 "data_offset": 256, 00:36:45.904 "data_size": 7936 00:36:45.904 } 00:36:45.904 ] 00:36:45.904 }' 00:36:45.904 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:45.904 11:48:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:46.470 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:46.470 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:46.470 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:46.470 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:46.470 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:46.470 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:46.470 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:46.729 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:46.729 "name": "raid_bdev1", 00:36:46.729 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:46.729 "strip_size_kb": 0, 00:36:46.729 "state": "online", 00:36:46.729 "raid_level": "raid1", 00:36:46.729 "superblock": true, 00:36:46.729 "num_base_bdevs": 2, 00:36:46.729 "num_base_bdevs_discovered": 1, 00:36:46.729 "num_base_bdevs_operational": 1, 00:36:46.729 "base_bdevs_list": [ 00:36:46.729 { 00:36:46.729 "name": null, 00:36:46.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.729 "is_configured": false, 00:36:46.729 "data_offset": 256, 00:36:46.729 "data_size": 7936 00:36:46.729 }, 00:36:46.729 { 00:36:46.729 "name": "BaseBdev2", 00:36:46.729 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:46.729 "is_configured": true, 00:36:46.729 "data_offset": 256, 00:36:46.729 "data_size": 7936 00:36:46.729 } 00:36:46.729 ] 00:36:46.729 }' 00:36:46.729 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:46.987 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:46.987 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:46.987 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:46.987 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:47.246 [2024-07-13 
11:48:21.772451] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:47.246 [2024-07-13 11:48:21.784154] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:36:47.246 [2024-07-13 11:48:21.785969] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:47.246 11:48:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:48.181 11:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:48.181 11:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:48.181 11:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:48.181 11:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:48.181 11:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:48.181 11:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:48.181 11:48:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:48.439 "name": "raid_bdev1", 00:36:48.439 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:48.439 "strip_size_kb": 0, 00:36:48.439 "state": "online", 00:36:48.439 "raid_level": "raid1", 00:36:48.439 "superblock": true, 00:36:48.439 "num_base_bdevs": 2, 00:36:48.439 "num_base_bdevs_discovered": 2, 00:36:48.439 "num_base_bdevs_operational": 2, 00:36:48.439 "process": { 00:36:48.439 "type": "rebuild", 00:36:48.439 "target": "spare", 00:36:48.439 "progress": { 00:36:48.439 "blocks": 3072, 00:36:48.439 "percent": 38 00:36:48.439 } 00:36:48.439 }, 00:36:48.439 "base_bdevs_list": [ 00:36:48.439 { 00:36:48.439 "name": "spare", 00:36:48.439 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:48.439 "is_configured": true, 00:36:48.439 "data_offset": 256, 00:36:48.439 "data_size": 7936 00:36:48.439 }, 00:36:48.439 { 00:36:48.439 "name": "BaseBdev2", 00:36:48.439 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:48.439 "is_configured": true, 00:36:48.439 "data_offset": 256, 00:36:48.439 "data_size": 7936 00:36:48.439 } 00:36:48.439 ] 00:36:48.439 }' 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:36:48.439 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:36:48.439 11:48:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1354 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:48.439 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:48.697 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:48.697 "name": "raid_bdev1", 00:36:48.697 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:48.697 "strip_size_kb": 0, 00:36:48.697 "state": "online", 00:36:48.697 "raid_level": "raid1", 00:36:48.697 "superblock": true, 00:36:48.697 "num_base_bdevs": 2, 00:36:48.697 "num_base_bdevs_discovered": 2, 00:36:48.697 "num_base_bdevs_operational": 2, 00:36:48.697 "process": { 00:36:48.697 "type": "rebuild", 00:36:48.697 "target": "spare", 00:36:48.697 "progress": { 00:36:48.697 "blocks": 3840, 00:36:48.697 "percent": 48 00:36:48.697 } 00:36:48.697 }, 00:36:48.697 "base_bdevs_list": [ 00:36:48.697 { 00:36:48.697 "name": "spare", 00:36:48.697 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:48.697 "is_configured": true, 00:36:48.697 "data_offset": 256, 00:36:48.697 "data_size": 7936 00:36:48.697 }, 00:36:48.697 { 00:36:48.697 "name": "BaseBdev2", 00:36:48.697 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:48.697 "is_configured": true, 00:36:48.697 "data_offset": 256, 00:36:48.697 "data_size": 7936 00:36:48.697 } 00:36:48.697 ] 00:36:48.697 }' 00:36:48.697 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:48.697 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:48.697 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:48.955 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:48.955 11:48:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:49.890 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:49.890 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:49.890 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:49.890 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:49.890 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:49.890 11:48:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:49.890 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:49.890 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:50.148 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:50.148 "name": "raid_bdev1", 00:36:50.148 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:50.148 "strip_size_kb": 0, 00:36:50.148 "state": "online", 00:36:50.148 "raid_level": "raid1", 00:36:50.148 "superblock": true, 00:36:50.148 "num_base_bdevs": 2, 00:36:50.148 "num_base_bdevs_discovered": 2, 00:36:50.148 "num_base_bdevs_operational": 2, 00:36:50.148 "process": { 00:36:50.148 "type": "rebuild", 00:36:50.148 "target": "spare", 00:36:50.148 "progress": { 00:36:50.148 "blocks": 7424, 00:36:50.148 "percent": 93 00:36:50.148 } 00:36:50.148 }, 00:36:50.148 "base_bdevs_list": [ 00:36:50.148 { 00:36:50.148 "name": "spare", 00:36:50.148 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:50.148 "is_configured": true, 00:36:50.148 "data_offset": 256, 00:36:50.148 "data_size": 7936 00:36:50.148 }, 00:36:50.148 { 00:36:50.148 "name": "BaseBdev2", 00:36:50.148 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:50.148 "is_configured": true, 00:36:50.148 "data_offset": 256, 00:36:50.148 "data_size": 7936 00:36:50.148 } 00:36:50.148 ] 00:36:50.148 }' 00:36:50.148 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:50.148 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:50.148 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:50.148 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:50.148 11:48:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:50.407 [2024-07-13 11:48:24.905125] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:50.407 [2024-07-13 11:48:24.905274] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:50.407 [2024-07-13 11:48:24.905419] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:51.343 11:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:51.343 11:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:51.343 11:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:51.343 11:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:51.343 11:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:51.343 11:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:51.343 11:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.343 11:48:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.602 11:48:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:51.602 "name": "raid_bdev1", 00:36:51.602 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:51.602 "strip_size_kb": 0, 00:36:51.602 "state": "online", 00:36:51.602 "raid_level": "raid1", 00:36:51.602 "superblock": true, 00:36:51.602 "num_base_bdevs": 2, 00:36:51.602 "num_base_bdevs_discovered": 2, 00:36:51.602 "num_base_bdevs_operational": 2, 00:36:51.602 "base_bdevs_list": [ 00:36:51.602 { 00:36:51.602 "name": "spare", 00:36:51.602 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:51.602 "is_configured": true, 00:36:51.602 "data_offset": 256, 00:36:51.602 "data_size": 7936 00:36:51.602 }, 00:36:51.602 { 00:36:51.602 "name": "BaseBdev2", 00:36:51.602 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:51.602 "is_configured": true, 00:36:51.602 "data_offset": 256, 00:36:51.602 "data_size": 7936 00:36:51.602 } 00:36:51.602 ] 00:36:51.602 }' 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.602 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:51.861 "name": "raid_bdev1", 00:36:51.861 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:51.861 "strip_size_kb": 0, 00:36:51.861 "state": "online", 00:36:51.861 "raid_level": "raid1", 00:36:51.861 "superblock": true, 00:36:51.861 "num_base_bdevs": 2, 00:36:51.861 "num_base_bdevs_discovered": 2, 00:36:51.861 "num_base_bdevs_operational": 2, 00:36:51.861 "base_bdevs_list": [ 00:36:51.861 { 00:36:51.861 "name": "spare", 00:36:51.861 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:51.861 "is_configured": true, 00:36:51.861 "data_offset": 256, 00:36:51.861 "data_size": 7936 00:36:51.861 }, 00:36:51.861 { 00:36:51.861 "name": "BaseBdev2", 00:36:51.861 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:51.861 "is_configured": true, 00:36:51.861 "data_offset": 256, 00:36:51.861 "data_size": 7936 00:36:51.861 } 00:36:51.861 ] 00:36:51.861 }' 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:51.861 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.120 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:52.120 "name": "raid_bdev1", 00:36:52.120 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:52.120 "strip_size_kb": 0, 00:36:52.120 "state": "online", 00:36:52.120 "raid_level": "raid1", 00:36:52.120 "superblock": true, 00:36:52.120 "num_base_bdevs": 2, 00:36:52.120 "num_base_bdevs_discovered": 2, 00:36:52.120 "num_base_bdevs_operational": 2, 00:36:52.120 "base_bdevs_list": [ 00:36:52.120 { 00:36:52.120 "name": "spare", 00:36:52.120 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:52.120 "is_configured": true, 00:36:52.120 "data_offset": 256, 00:36:52.120 "data_size": 7936 00:36:52.120 }, 00:36:52.120 { 00:36:52.120 "name": "BaseBdev2", 00:36:52.120 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:52.120 "is_configured": true, 00:36:52.120 "data_offset": 256, 00:36:52.120 "data_size": 7936 00:36:52.120 } 00:36:52.120 ] 00:36:52.120 }' 00:36:52.120 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:52.120 11:48:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:52.688 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:52.946 [2024-07-13 11:48:27.573862] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:52.946 [2024-07-13 11:48:27.573892] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:52.946 [2024-07-13 11:48:27.573982] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:52.946 [2024-07-13 11:48:27.574060] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:36:52.946 [2024-07-13 11:48:27.574070] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:36:52.946 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:52.946 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:53.204 11:48:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:53.461 /dev/nbd0 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:53.461 1+0 records in 00:36:53.461 1+0 records out 00:36:53.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052062 s, 7.9 MB/s 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:53.461 
11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:53.461 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:53.462 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:36:53.462 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:53.462 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:53.462 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:53.720 /dev/nbd1 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:53.720 1+0 records in 00:36:53.720 1+0 records out 00:36:53.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00058518 s, 7.0 MB/s 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:53.720 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:53.979 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:53.979 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:53.979 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=($2) 00:36:53.979 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:53.979 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:36:53.979 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:53.979 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:54.237 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:54.237 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:54.238 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:54.238 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:54.238 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:54.238 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:54.238 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:36:54.496 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:36:54.496 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:54.496 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:54.496 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:36:54.496 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:36:54.496 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:54.496 11:48:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:36:54.755 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 
00:36:55.013 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:55.271 [2024-07-13 11:48:29.815781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:55.271 [2024-07-13 11:48:29.815851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:55.271 [2024-07-13 11:48:29.815915] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:55.271 [2024-07-13 11:48:29.815951] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:55.271 [2024-07-13 11:48:29.818478] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:55.271 [2024-07-13 11:48:29.818527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:55.271 [2024-07-13 11:48:29.818636] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:55.271 [2024-07-13 11:48:29.818758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:55.272 [2024-07-13 11:48:29.818989] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:55.272 spare 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:55.272 11:48:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:55.272 [2024-07-13 11:48:29.919105] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:36:55.272 [2024-07-13 11:48:29.919126] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:55.272 [2024-07-13 11:48:29.919328] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:36:55.272 [2024-07-13 11:48:29.919697] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:36:55.272 [2024-07-13 11:48:29.919711] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:36:55.272 [2024-07-13 11:48:29.919855] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:36:55.530 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:55.530 "name": "raid_bdev1", 00:36:55.530 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:55.530 "strip_size_kb": 0, 00:36:55.530 "state": "online", 00:36:55.530 "raid_level": "raid1", 00:36:55.530 "superblock": true, 00:36:55.530 "num_base_bdevs": 2, 00:36:55.530 "num_base_bdevs_discovered": 2, 00:36:55.530 "num_base_bdevs_operational": 2, 00:36:55.530 "base_bdevs_list": [ 00:36:55.530 { 00:36:55.530 "name": "spare", 00:36:55.530 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:55.530 "is_configured": true, 00:36:55.530 "data_offset": 256, 00:36:55.530 "data_size": 7936 00:36:55.530 }, 00:36:55.530 { 00:36:55.530 "name": "BaseBdev2", 00:36:55.530 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:55.530 "is_configured": true, 00:36:55.530 "data_offset": 256, 00:36:55.530 "data_size": 7936 00:36:55.530 } 00:36:55.530 ] 00:36:55.530 }' 00:36:55.530 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:55.530 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:56.096 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:56.096 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:56.096 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:56.096 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:56.096 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:56.096 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.096 11:48:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:56.354 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:56.354 "name": "raid_bdev1", 00:36:56.354 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:56.354 "strip_size_kb": 0, 00:36:56.354 "state": "online", 00:36:56.354 "raid_level": "raid1", 00:36:56.354 "superblock": true, 00:36:56.354 "num_base_bdevs": 2, 00:36:56.354 "num_base_bdevs_discovered": 2, 00:36:56.354 "num_base_bdevs_operational": 2, 00:36:56.354 "base_bdevs_list": [ 00:36:56.354 { 00:36:56.354 "name": "spare", 00:36:56.354 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:56.354 "is_configured": true, 00:36:56.354 "data_offset": 256, 00:36:56.354 "data_size": 7936 00:36:56.354 }, 00:36:56.354 { 00:36:56.354 "name": "BaseBdev2", 00:36:56.354 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:56.354 "is_configured": true, 00:36:56.354 "data_offset": 256, 00:36:56.354 "data_size": 7936 00:36:56.354 } 00:36:56.354 ] 00:36:56.354 }' 00:36:56.354 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:56.354 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:56.354 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:56.612 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:56.612 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.612 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:56.869 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:36:56.869 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:56.869 [2024-07-13 11:48:31.585063] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:56.869 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:56.869 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:56.869 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:56.869 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:56.869 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:56.870 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:56.870 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:56.870 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:56.870 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:56.870 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:56.870 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.870 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:57.128 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:57.128 "name": "raid_bdev1", 00:36:57.128 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:57.128 "strip_size_kb": 0, 00:36:57.128 "state": "online", 00:36:57.128 "raid_level": "raid1", 00:36:57.128 "superblock": true, 00:36:57.128 "num_base_bdevs": 2, 00:36:57.128 "num_base_bdevs_discovered": 1, 00:36:57.128 "num_base_bdevs_operational": 1, 00:36:57.128 "base_bdevs_list": [ 00:36:57.128 { 00:36:57.128 "name": null, 00:36:57.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:57.128 "is_configured": false, 00:36:57.128 "data_offset": 256, 00:36:57.128 "data_size": 7936 00:36:57.128 }, 00:36:57.128 { 00:36:57.128 "name": "BaseBdev2", 00:36:57.128 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:57.128 "is_configured": true, 00:36:57.128 "data_offset": 256, 00:36:57.128 "data_size": 7936 00:36:57.128 } 00:36:57.128 ] 00:36:57.128 }' 00:36:57.128 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:57.128 11:48:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:58.062 11:48:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:58.062 [2024-07-13 11:48:32.701307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:36:58.062 [2024-07-13 11:48:32.701518] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:58.062 [2024-07-13 11:48:32.701540] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:58.063 [2024-07-13 11:48:32.701608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:58.063 [2024-07-13 11:48:32.714780] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:36:58.063 [2024-07-13 11:48:32.716827] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:58.063 11:48:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:36:58.996 11:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:58.996 11:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:58.996 11:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:58.996 11:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:58.996 11:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:58.996 11:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:58.996 11:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:59.254 11:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:59.254 "name": "raid_bdev1", 00:36:59.254 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:36:59.254 "strip_size_kb": 0, 00:36:59.254 "state": "online", 00:36:59.254 "raid_level": "raid1", 00:36:59.254 "superblock": true, 00:36:59.254 "num_base_bdevs": 2, 00:36:59.254 "num_base_bdevs_discovered": 2, 00:36:59.254 "num_base_bdevs_operational": 2, 00:36:59.254 "process": { 00:36:59.254 "type": "rebuild", 00:36:59.254 "target": "spare", 00:36:59.254 "progress": { 00:36:59.254 "blocks": 3072, 00:36:59.254 "percent": 38 00:36:59.254 } 00:36:59.254 }, 00:36:59.254 "base_bdevs_list": [ 00:36:59.254 { 00:36:59.254 "name": "spare", 00:36:59.254 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:36:59.254 "is_configured": true, 00:36:59.254 "data_offset": 256, 00:36:59.254 "data_size": 7936 00:36:59.254 }, 00:36:59.254 { 00:36:59.254 "name": "BaseBdev2", 00:36:59.254 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:36:59.254 "is_configured": true, 00:36:59.254 "data_offset": 256, 00:36:59.254 "data_size": 7936 00:36:59.254 } 00:36:59.254 ] 00:36:59.254 }' 00:36:59.254 11:48:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:59.512 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:59.512 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:59.512 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:59.512 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 
00:36:59.770 [2024-07-13 11:48:34.338580] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:59.770 [2024-07-13 11:48:34.427664] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:59.770 [2024-07-13 11:48:34.427746] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:59.770 [2024-07-13 11:48:34.427766] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:59.770 [2024-07-13 11:48:34.427774] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:59.770 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:00.029 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:00.029 "name": "raid_bdev1", 00:37:00.029 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:37:00.029 "strip_size_kb": 0, 00:37:00.029 "state": "online", 00:37:00.029 "raid_level": "raid1", 00:37:00.029 "superblock": true, 00:37:00.029 "num_base_bdevs": 2, 00:37:00.029 "num_base_bdevs_discovered": 1, 00:37:00.029 "num_base_bdevs_operational": 1, 00:37:00.029 "base_bdevs_list": [ 00:37:00.029 { 00:37:00.029 "name": null, 00:37:00.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.029 "is_configured": false, 00:37:00.029 "data_offset": 256, 00:37:00.029 "data_size": 7936 00:37:00.029 }, 00:37:00.029 { 00:37:00.029 "name": "BaseBdev2", 00:37:00.029 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:37:00.029 "is_configured": true, 00:37:00.029 "data_offset": 256, 00:37:00.029 "data_size": 7936 00:37:00.029 } 00:37:00.029 ] 00:37:00.029 }' 00:37:00.029 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:00.029 11:48:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:00.963 11:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:00.963 [2024-07-13 11:48:35.588252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:37:00.963 [2024-07-13 11:48:35.588334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:00.963 [2024-07-13 11:48:35.588376] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:00.963 [2024-07-13 11:48:35.588411] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:00.963 [2024-07-13 11:48:35.589106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:00.963 [2024-07-13 11:48:35.589183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:00.963 [2024-07-13 11:48:35.589290] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:00.963 [2024-07-13 11:48:35.589305] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:00.963 [2024-07-13 11:48:35.589312] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:00.963 [2024-07-13 11:48:35.589374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:00.963 [2024-07-13 11:48:35.600152] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:37:00.963 spare 00:37:00.963 [2024-07-13 11:48:35.602238] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:00.963 11:48:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:37:01.932 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:01.932 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:01.932 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:01.932 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:01.932 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:01.932 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.932 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.199 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:02.199 "name": "raid_bdev1", 00:37:02.199 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:37:02.199 "strip_size_kb": 0, 00:37:02.199 "state": "online", 00:37:02.199 "raid_level": "raid1", 00:37:02.199 "superblock": true, 00:37:02.199 "num_base_bdevs": 2, 00:37:02.199 "num_base_bdevs_discovered": 2, 00:37:02.199 "num_base_bdevs_operational": 2, 00:37:02.199 "process": { 00:37:02.199 "type": "rebuild", 00:37:02.199 "target": "spare", 00:37:02.199 "progress": { 00:37:02.199 "blocks": 2816, 00:37:02.199 "percent": 35 00:37:02.199 } 00:37:02.199 }, 00:37:02.199 "base_bdevs_list": [ 00:37:02.199 { 00:37:02.199 "name": "spare", 00:37:02.199 "uuid": "52c504ae-3f69-5928-bf62-b1b5396c2970", 00:37:02.199 "is_configured": true, 00:37:02.199 "data_offset": 256, 00:37:02.199 "data_size": 7936 00:37:02.199 }, 00:37:02.199 { 00:37:02.199 "name": "BaseBdev2", 00:37:02.199 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:37:02.199 "is_configured": true, 00:37:02.199 "data_offset": 
256, 00:37:02.199 "data_size": 7936 00:37:02.199 } 00:37:02.199 ] 00:37:02.199 }' 00:37:02.199 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:02.199 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:02.199 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:02.199 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:02.199 11:48:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:02.457 [2024-07-13 11:48:37.176323] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:02.457 [2024-07-13 11:48:37.212229] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:02.457 [2024-07-13 11:48:37.212327] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:02.457 [2024-07-13 11:48:37.212348] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:02.714 [2024-07-13 11:48:37.212357] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.714 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.972 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:02.972 "name": "raid_bdev1", 00:37:02.972 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:37:02.972 "strip_size_kb": 0, 00:37:02.972 "state": "online", 00:37:02.972 "raid_level": "raid1", 00:37:02.972 "superblock": true, 00:37:02.972 "num_base_bdevs": 2, 00:37:02.972 "num_base_bdevs_discovered": 1, 00:37:02.972 "num_base_bdevs_operational": 1, 00:37:02.972 "base_bdevs_list": [ 00:37:02.972 { 00:37:02.972 "name": null, 00:37:02.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.972 "is_configured": false, 00:37:02.972 "data_offset": 256, 00:37:02.972 "data_size": 7936 00:37:02.972 }, 00:37:02.972 { 00:37:02.972 "name": 
"BaseBdev2", 00:37:02.972 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:37:02.972 "is_configured": true, 00:37:02.972 "data_offset": 256, 00:37:02.972 "data_size": 7936 00:37:02.972 } 00:37:02.972 ] 00:37:02.972 }' 00:37:02.972 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:02.972 11:48:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:03.537 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:03.537 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:03.537 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:03.537 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:03.537 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:03.537 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:03.537 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:03.795 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:03.795 "name": "raid_bdev1", 00:37:03.795 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:37:03.795 "strip_size_kb": 0, 00:37:03.795 "state": "online", 00:37:03.795 "raid_level": "raid1", 00:37:03.795 "superblock": true, 00:37:03.795 "num_base_bdevs": 2, 00:37:03.795 "num_base_bdevs_discovered": 1, 00:37:03.795 "num_base_bdevs_operational": 1, 00:37:03.795 "base_bdevs_list": [ 00:37:03.795 { 00:37:03.795 "name": null, 00:37:03.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:03.795 "is_configured": false, 00:37:03.795 "data_offset": 256, 00:37:03.795 "data_size": 7936 00:37:03.795 }, 00:37:03.795 { 00:37:03.795 "name": "BaseBdev2", 00:37:03.795 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:37:03.795 "is_configured": true, 00:37:03.795 "data_offset": 256, 00:37:03.795 "data_size": 7936 00:37:03.795 } 00:37:03.795 ] 00:37:03.795 }' 00:37:03.795 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:03.795 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:03.795 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:03.795 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:03.795 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:37:04.052 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:04.310 [2024-07-13 11:48:38.945611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:04.310 [2024-07-13 11:48:38.945694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:04.310 [2024-07-13 11:48:38.945734] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:37:04.310 [2024-07-13 11:48:38.945759] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:04.310 [2024-07-13 11:48:38.946246] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:04.310 [2024-07-13 11:48:38.946293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:04.310 [2024-07-13 11:48:38.946403] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:04.310 [2024-07-13 11:48:38.946419] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:04.310 [2024-07-13 11:48:38.946427] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:04.310 BaseBdev1 00:37:04.310 11:48:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:37:05.244 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:05.244 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:05.244 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:05.245 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:05.245 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:05.245 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:05.245 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:05.245 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:05.245 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:05.245 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:05.245 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:05.245 11:48:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.502 11:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:05.502 "name": "raid_bdev1", 00:37:05.502 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:37:05.502 "strip_size_kb": 0, 00:37:05.502 "state": "online", 00:37:05.502 "raid_level": "raid1", 00:37:05.502 "superblock": true, 00:37:05.502 "num_base_bdevs": 2, 00:37:05.502 "num_base_bdevs_discovered": 1, 00:37:05.502 "num_base_bdevs_operational": 1, 00:37:05.502 "base_bdevs_list": [ 00:37:05.502 { 00:37:05.503 "name": null, 00:37:05.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:05.503 "is_configured": false, 00:37:05.503 "data_offset": 256, 00:37:05.503 "data_size": 7936 00:37:05.503 }, 00:37:05.503 { 00:37:05.503 "name": "BaseBdev2", 00:37:05.503 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:37:05.503 "is_configured": true, 00:37:05.503 "data_offset": 256, 00:37:05.503 "data_size": 7936 00:37:05.503 } 00:37:05.503 ] 00:37:05.503 }' 00:37:05.503 11:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:05.503 11:48:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:06.436 11:48:40 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:06.436 11:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:06.436 11:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:06.436 11:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:06.436 11:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:06.436 11:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.436 11:48:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:06.436 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:06.436 "name": "raid_bdev1", 00:37:06.436 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:37:06.436 "strip_size_kb": 0, 00:37:06.436 "state": "online", 00:37:06.436 "raid_level": "raid1", 00:37:06.436 "superblock": true, 00:37:06.436 "num_base_bdevs": 2, 00:37:06.436 "num_base_bdevs_discovered": 1, 00:37:06.436 "num_base_bdevs_operational": 1, 00:37:06.436 "base_bdevs_list": [ 00:37:06.436 { 00:37:06.436 "name": null, 00:37:06.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.436 "is_configured": false, 00:37:06.436 "data_offset": 256, 00:37:06.436 "data_size": 7936 00:37:06.436 }, 00:37:06.436 { 00:37:06.436 "name": "BaseBdev2", 00:37:06.436 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:37:06.436 "is_configured": true, 00:37:06.436 "data_offset": 256, 00:37:06.436 "data_size": 7936 00:37:06.436 } 00:37:06.436 ] 00:37:06.436 }' 00:37:06.436 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:06.436 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:06.436 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:06.694 11:48:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:06.694 [2024-07-13 11:48:41.414007] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:06.694 [2024-07-13 11:48:41.414117] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:06.694 [2024-07-13 11:48:41.414155] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:06.694 request: 00:37:06.694 { 00:37:06.694 "base_bdev": "BaseBdev1", 00:37:06.694 "raid_bdev": "raid_bdev1", 00:37:06.694 "method": "bdev_raid_add_base_bdev", 00:37:06.694 "req_id": 1 00:37:06.694 } 00:37:06.694 Got JSON-RPC error response 00:37:06.694 response: 00:37:06.694 { 00:37:06.694 "code": -22, 00:37:06.694 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:06.694 } 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:06.694 11:48:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:08.070 "name": 
"raid_bdev1", 00:37:08.070 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:37:08.070 "strip_size_kb": 0, 00:37:08.070 "state": "online", 00:37:08.070 "raid_level": "raid1", 00:37:08.070 "superblock": true, 00:37:08.070 "num_base_bdevs": 2, 00:37:08.070 "num_base_bdevs_discovered": 1, 00:37:08.070 "num_base_bdevs_operational": 1, 00:37:08.070 "base_bdevs_list": [ 00:37:08.070 { 00:37:08.070 "name": null, 00:37:08.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.070 "is_configured": false, 00:37:08.070 "data_offset": 256, 00:37:08.070 "data_size": 7936 00:37:08.070 }, 00:37:08.070 { 00:37:08.070 "name": "BaseBdev2", 00:37:08.070 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:37:08.070 "is_configured": true, 00:37:08.070 "data_offset": 256, 00:37:08.070 "data_size": 7936 00:37:08.070 } 00:37:08.070 ] 00:37:08.070 }' 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:08.070 11:48:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:08.640 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:08.640 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:08.640 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:08.640 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:08.640 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:08.640 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:08.640 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:08.897 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:08.897 "name": "raid_bdev1", 00:37:08.897 "uuid": "2174b2aa-def6-4f7d-b93c-3ed2b5d05871", 00:37:08.897 "strip_size_kb": 0, 00:37:08.897 "state": "online", 00:37:08.897 "raid_level": "raid1", 00:37:08.897 "superblock": true, 00:37:08.897 "num_base_bdevs": 2, 00:37:08.897 "num_base_bdevs_discovered": 1, 00:37:08.897 "num_base_bdevs_operational": 1, 00:37:08.897 "base_bdevs_list": [ 00:37:08.897 { 00:37:08.897 "name": null, 00:37:08.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.897 "is_configured": false, 00:37:08.897 "data_offset": 256, 00:37:08.897 "data_size": 7936 00:37:08.897 }, 00:37:08.897 { 00:37:08.897 "name": "BaseBdev2", 00:37:08.897 "uuid": "b4127b0c-c91c-5332-b4fe-a355b8c3fc8f", 00:37:08.897 "is_configured": true, 00:37:08.897 "data_offset": 256, 00:37:08.897 "data_size": 7936 00:37:08.897 } 00:37:08.897 ] 00:37:08.897 }' 00:37:08.897 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:08.897 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:08.897 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 161219 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' 
-z 161219 ']' 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 161219 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 161219 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 161219' 00:37:09.156 killing process with pid 161219 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 161219 00:37:09.156 11:48:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 161219 00:37:09.156 Received shutdown signal, test time was about 60.000000 seconds 00:37:09.156 00:37:09.156 Latency(us) 00:37:09.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:09.156 =================================================================================================================== 00:37:09.156 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:09.156 [2024-07-13 11:48:43.685681] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:09.156 [2024-07-13 11:48:43.685776] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:09.156 [2024-07-13 11:48:43.685819] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:09.156 [2024-07-13 11:48:43.685837] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:37:09.156 [2024-07-13 11:48:43.887343] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:10.533 ************************************ 00:37:10.533 END TEST raid_rebuild_test_sb_4k 00:37:10.533 11:48:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:37:10.533 00:37:10.533 real 0m33.120s 00:37:10.533 user 0m53.037s 00:37:10.533 sys 0m3.462s 00:37:10.533 11:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:10.533 11:48:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:37:10.533 ************************************ 00:37:10.533 11:48:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:10.533 11:48:44 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:37:10.533 11:48:44 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:37:10.533 11:48:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:37:10.533 11:48:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:10.533 11:48:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:10.533 ************************************ 00:37:10.533 START TEST raid_state_function_test_sb_md_separate 00:37:10.533 ************************************ 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 
00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:37:10.533 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:37:10.534 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=162167 00:37:10.534 Process raid pid: 162167 00:37:10.534 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 162167' 00:37:10.534 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:37:10.534 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 162167 /var/tmp/spdk-raid.sock 00:37:10.534 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 162167 ']' 00:37:10.534 11:48:44 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:10.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:10.534 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:10.534 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:10.534 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:10.534 11:48:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:10.534 [2024-07-13 11:48:45.043835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:10.534 [2024-07-13 11:48:45.044638] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:10.534 [2024-07-13 11:48:45.214856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.793 [2024-07-13 11:48:45.405805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.052 [2024-07-13 11:48:45.596086] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:11.310 11:48:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:11.310 11:48:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:37:11.310 11:48:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:11.569 [2024-07-13 11:48:46.282829] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:11.569 [2024-07-13 11:48:46.282932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:11.569 [2024-07-13 11:48:46.282947] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:11.569 [2024-07-13 11:48:46.282975] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:11.569 11:48:46 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:11.569 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:11.827 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:11.827 "name": "Existed_Raid", 00:37:11.827 "uuid": "2fa0c9e2-b021-4340-a257-f25aff5eac2d", 00:37:11.827 "strip_size_kb": 0, 00:37:11.827 "state": "configuring", 00:37:11.827 "raid_level": "raid1", 00:37:11.827 "superblock": true, 00:37:11.827 "num_base_bdevs": 2, 00:37:11.827 "num_base_bdevs_discovered": 0, 00:37:11.827 "num_base_bdevs_operational": 2, 00:37:11.827 "base_bdevs_list": [ 00:37:11.827 { 00:37:11.827 "name": "BaseBdev1", 00:37:11.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:11.827 "is_configured": false, 00:37:11.827 "data_offset": 0, 00:37:11.827 "data_size": 0 00:37:11.827 }, 00:37:11.827 { 00:37:11.827 "name": "BaseBdev2", 00:37:11.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:11.827 "is_configured": false, 00:37:11.827 "data_offset": 0, 00:37:11.827 "data_size": 0 00:37:11.827 } 00:37:11.827 ] 00:37:11.827 }' 00:37:11.827 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:11.827 11:48:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:12.763 11:48:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:12.763 [2024-07-13 11:48:47.370828] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:12.763 [2024-07-13 11:48:47.370867] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:37:12.763 11:48:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:13.022 [2024-07-13 11:48:47.622892] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:13.022 [2024-07-13 11:48:47.622935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:13.022 [2024-07-13 11:48:47.622945] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:13.022 [2024-07-13 11:48:47.622968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:13.022 11:48:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:37:13.280 [2024-07-13 11:48:47.845167] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:13.280 BaseBdev1 00:37:13.281 11:48:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:37:13.281 11:48:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:37:13.281 11:48:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:13.281 11:48:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:37:13.281 11:48:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:13.281 11:48:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:13.281 11:48:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:13.539 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:13.797 [ 00:37:13.797 { 00:37:13.797 "name": "BaseBdev1", 00:37:13.797 "aliases": [ 00:37:13.797 "619529f4-38e7-4b12-9b0a-a17674285c0b" 00:37:13.797 ], 00:37:13.797 "product_name": "Malloc disk", 00:37:13.797 "block_size": 4096, 00:37:13.797 "num_blocks": 8192, 00:37:13.797 "uuid": "619529f4-38e7-4b12-9b0a-a17674285c0b", 00:37:13.797 "md_size": 32, 00:37:13.797 "md_interleave": false, 00:37:13.797 "dif_type": 0, 00:37:13.797 "assigned_rate_limits": { 00:37:13.797 "rw_ios_per_sec": 0, 00:37:13.797 "rw_mbytes_per_sec": 0, 00:37:13.797 "r_mbytes_per_sec": 0, 00:37:13.797 "w_mbytes_per_sec": 0 00:37:13.797 }, 00:37:13.797 "claimed": true, 00:37:13.798 "claim_type": "exclusive_write", 00:37:13.798 "zoned": false, 00:37:13.798 "supported_io_types": { 00:37:13.798 "read": true, 00:37:13.798 "write": true, 00:37:13.798 "unmap": true, 00:37:13.798 "flush": true, 00:37:13.798 "reset": true, 00:37:13.798 "nvme_admin": false, 00:37:13.798 "nvme_io": false, 00:37:13.798 "nvme_io_md": false, 00:37:13.798 "write_zeroes": true, 00:37:13.798 "zcopy": true, 00:37:13.798 "get_zone_info": false, 00:37:13.798 "zone_management": false, 00:37:13.798 "zone_append": false, 00:37:13.798 "compare": false, 00:37:13.798 "compare_and_write": false, 00:37:13.798 "abort": true, 00:37:13.798 "seek_hole": false, 00:37:13.798 "seek_data": false, 00:37:13.798 "copy": true, 00:37:13.798 "nvme_iov_md": false 00:37:13.798 }, 00:37:13.798 "memory_domains": [ 00:37:13.798 { 00:37:13.798 "dma_device_id": "system", 00:37:13.798 "dma_device_type": 1 00:37:13.798 }, 00:37:13.798 { 00:37:13.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:13.798 "dma_device_type": 2 00:37:13.798 } 00:37:13.798 ], 00:37:13.798 "driver_specific": {} 00:37:13.798 } 00:37:13.798 ] 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:13.798 "name": "Existed_Raid", 00:37:13.798 "uuid": "eacfa892-0925-47d4-85c7-60c83ca55b7e", 00:37:13.798 "strip_size_kb": 0, 00:37:13.798 "state": "configuring", 00:37:13.798 "raid_level": "raid1", 00:37:13.798 "superblock": true, 00:37:13.798 "num_base_bdevs": 2, 00:37:13.798 "num_base_bdevs_discovered": 1, 00:37:13.798 "num_base_bdevs_operational": 2, 00:37:13.798 "base_bdevs_list": [ 00:37:13.798 { 00:37:13.798 "name": "BaseBdev1", 00:37:13.798 "uuid": "619529f4-38e7-4b12-9b0a-a17674285c0b", 00:37:13.798 "is_configured": true, 00:37:13.798 "data_offset": 256, 00:37:13.798 "data_size": 7936 00:37:13.798 }, 00:37:13.798 { 00:37:13.798 "name": "BaseBdev2", 00:37:13.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:13.798 "is_configured": false, 00:37:13.798 "data_offset": 0, 00:37:13.798 "data_size": 0 00:37:13.798 } 00:37:13.798 ] 00:37:13.798 }' 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:13.798 11:48:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:14.365 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:37:14.624 [2024-07-13 11:48:49.277403] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:14.624 [2024-07-13 11:48:49.277441] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:37:14.624 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:37:14.882 [2024-07-13 11:48:49.529503] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:14.882 [2024-07-13 11:48:49.531413] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:14.882 [2024-07-13 11:48:49.531467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:14.882 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:15.141 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:15.141 "name": "Existed_Raid", 00:37:15.141 "uuid": "dba90fe8-7489-4661-a181-68094fc2f2aa", 00:37:15.141 "strip_size_kb": 0, 00:37:15.141 "state": "configuring", 00:37:15.141 "raid_level": "raid1", 00:37:15.141 "superblock": true, 00:37:15.141 "num_base_bdevs": 2, 00:37:15.141 "num_base_bdevs_discovered": 1, 00:37:15.141 "num_base_bdevs_operational": 2, 00:37:15.141 "base_bdevs_list": [ 00:37:15.142 { 00:37:15.142 "name": "BaseBdev1", 00:37:15.142 "uuid": "619529f4-38e7-4b12-9b0a-a17674285c0b", 00:37:15.142 "is_configured": true, 00:37:15.142 "data_offset": 256, 00:37:15.142 "data_size": 7936 00:37:15.142 }, 00:37:15.142 { 00:37:15.142 "name": "BaseBdev2", 00:37:15.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:15.142 "is_configured": false, 00:37:15.142 "data_offset": 0, 00:37:15.142 "data_size": 0 00:37:15.142 } 00:37:15.142 ] 00:37:15.142 }' 00:37:15.142 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:15.142 11:48:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:15.708 11:48:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:37:15.966 [2024-07-13 11:48:50.692002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:15.966 [2024-07-13 11:48:50.692196] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:37:15.966 [2024-07-13 
11:48:50.692221] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:15.966 [2024-07-13 11:48:50.692374] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:37:15.966 BaseBdev2 00:37:15.966 [2024-07-13 11:48:50.692503] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:37:15.966 [2024-07-13 11:48:50.692524] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:37:15.966 [2024-07-13 11:48:50.692626] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:15.966 11:48:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:37:15.966 11:48:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:37:15.966 11:48:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:37:15.966 11:48:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:37:15.966 11:48:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:37:15.966 11:48:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:37:15.966 11:48:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:37:16.222 11:48:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:16.481 [ 00:37:16.481 { 00:37:16.481 "name": "BaseBdev2", 00:37:16.481 "aliases": [ 00:37:16.481 "c07e1c29-eacf-4c4c-8a07-0fe4d32426f3" 00:37:16.481 ], 00:37:16.481 "product_name": "Malloc disk", 00:37:16.481 "block_size": 4096, 00:37:16.481 "num_blocks": 8192, 00:37:16.481 "uuid": "c07e1c29-eacf-4c4c-8a07-0fe4d32426f3", 00:37:16.481 "md_size": 32, 00:37:16.481 "md_interleave": false, 00:37:16.481 "dif_type": 0, 00:37:16.481 "assigned_rate_limits": { 00:37:16.481 "rw_ios_per_sec": 0, 00:37:16.481 "rw_mbytes_per_sec": 0, 00:37:16.481 "r_mbytes_per_sec": 0, 00:37:16.481 "w_mbytes_per_sec": 0 00:37:16.481 }, 00:37:16.481 "claimed": true, 00:37:16.481 "claim_type": "exclusive_write", 00:37:16.481 "zoned": false, 00:37:16.481 "supported_io_types": { 00:37:16.481 "read": true, 00:37:16.481 "write": true, 00:37:16.481 "unmap": true, 00:37:16.481 "flush": true, 00:37:16.481 "reset": true, 00:37:16.481 "nvme_admin": false, 00:37:16.481 "nvme_io": false, 00:37:16.481 "nvme_io_md": false, 00:37:16.481 "write_zeroes": true, 00:37:16.481 "zcopy": true, 00:37:16.481 "get_zone_info": false, 00:37:16.481 "zone_management": false, 00:37:16.481 "zone_append": false, 00:37:16.481 "compare": false, 00:37:16.481 "compare_and_write": false, 00:37:16.481 "abort": true, 00:37:16.481 "seek_hole": false, 00:37:16.481 "seek_data": false, 00:37:16.481 "copy": true, 00:37:16.481 "nvme_iov_md": false 00:37:16.481 }, 00:37:16.481 "memory_domains": [ 00:37:16.481 { 00:37:16.481 "dma_device_id": "system", 00:37:16.481 "dma_device_type": 1 00:37:16.481 }, 00:37:16.481 { 00:37:16.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:16.481 "dma_device_type": 2 00:37:16.481 } 00:37:16.481 ], 00:37:16.481 "driver_specific": {} 
00:37:16.481 } 00:37:16.481 ] 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:16.481 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:16.739 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:16.739 "name": "Existed_Raid", 00:37:16.739 "uuid": "dba90fe8-7489-4661-a181-68094fc2f2aa", 00:37:16.739 "strip_size_kb": 0, 00:37:16.739 "state": "online", 00:37:16.739 "raid_level": "raid1", 00:37:16.739 "superblock": true, 00:37:16.739 "num_base_bdevs": 2, 00:37:16.739 "num_base_bdevs_discovered": 2, 00:37:16.739 "num_base_bdevs_operational": 2, 00:37:16.739 "base_bdevs_list": [ 00:37:16.739 { 00:37:16.739 "name": "BaseBdev1", 00:37:16.739 "uuid": "619529f4-38e7-4b12-9b0a-a17674285c0b", 00:37:16.739 "is_configured": true, 00:37:16.739 "data_offset": 256, 00:37:16.739 "data_size": 7936 00:37:16.739 }, 00:37:16.739 { 00:37:16.739 "name": "BaseBdev2", 00:37:16.739 "uuid": "c07e1c29-eacf-4c4c-8a07-0fe4d32426f3", 00:37:16.739 "is_configured": true, 00:37:16.739 "data_offset": 256, 00:37:16.739 "data_size": 7936 00:37:16.739 } 00:37:16.739 ] 00:37:16.739 }' 00:37:16.739 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:16.739 11:48:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:17.305 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:37:17.305 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:37:17.305 
11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:17.305 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:17.305 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:17.305 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:37:17.305 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:37:17.305 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:17.563 [2024-07-13 11:48:52.284495] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:17.563 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:17.563 "name": "Existed_Raid", 00:37:17.563 "aliases": [ 00:37:17.563 "dba90fe8-7489-4661-a181-68094fc2f2aa" 00:37:17.563 ], 00:37:17.563 "product_name": "Raid Volume", 00:37:17.563 "block_size": 4096, 00:37:17.563 "num_blocks": 7936, 00:37:17.563 "uuid": "dba90fe8-7489-4661-a181-68094fc2f2aa", 00:37:17.563 "md_size": 32, 00:37:17.563 "md_interleave": false, 00:37:17.563 "dif_type": 0, 00:37:17.563 "assigned_rate_limits": { 00:37:17.563 "rw_ios_per_sec": 0, 00:37:17.563 "rw_mbytes_per_sec": 0, 00:37:17.563 "r_mbytes_per_sec": 0, 00:37:17.563 "w_mbytes_per_sec": 0 00:37:17.563 }, 00:37:17.563 "claimed": false, 00:37:17.563 "zoned": false, 00:37:17.563 "supported_io_types": { 00:37:17.563 "read": true, 00:37:17.563 "write": true, 00:37:17.563 "unmap": false, 00:37:17.563 "flush": false, 00:37:17.563 "reset": true, 00:37:17.563 "nvme_admin": false, 00:37:17.563 "nvme_io": false, 00:37:17.563 "nvme_io_md": false, 00:37:17.563 "write_zeroes": true, 00:37:17.563 "zcopy": false, 00:37:17.563 "get_zone_info": false, 00:37:17.563 "zone_management": false, 00:37:17.563 "zone_append": false, 00:37:17.563 "compare": false, 00:37:17.563 "compare_and_write": false, 00:37:17.563 "abort": false, 00:37:17.563 "seek_hole": false, 00:37:17.563 "seek_data": false, 00:37:17.563 "copy": false, 00:37:17.563 "nvme_iov_md": false 00:37:17.563 }, 00:37:17.563 "memory_domains": [ 00:37:17.563 { 00:37:17.563 "dma_device_id": "system", 00:37:17.563 "dma_device_type": 1 00:37:17.563 }, 00:37:17.563 { 00:37:17.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.564 "dma_device_type": 2 00:37:17.564 }, 00:37:17.564 { 00:37:17.564 "dma_device_id": "system", 00:37:17.564 "dma_device_type": 1 00:37:17.564 }, 00:37:17.564 { 00:37:17.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.564 "dma_device_type": 2 00:37:17.564 } 00:37:17.564 ], 00:37:17.564 "driver_specific": { 00:37:17.564 "raid": { 00:37:17.564 "uuid": "dba90fe8-7489-4661-a181-68094fc2f2aa", 00:37:17.564 "strip_size_kb": 0, 00:37:17.564 "state": "online", 00:37:17.564 "raid_level": "raid1", 00:37:17.564 "superblock": true, 00:37:17.564 "num_base_bdevs": 2, 00:37:17.564 "num_base_bdevs_discovered": 2, 00:37:17.564 "num_base_bdevs_operational": 2, 00:37:17.564 "base_bdevs_list": [ 00:37:17.564 { 00:37:17.564 "name": "BaseBdev1", 00:37:17.564 "uuid": "619529f4-38e7-4b12-9b0a-a17674285c0b", 00:37:17.564 "is_configured": true, 00:37:17.564 "data_offset": 256, 00:37:17.564 "data_size": 7936 00:37:17.564 }, 00:37:17.564 { 00:37:17.564 "name": 
"BaseBdev2", 00:37:17.564 "uuid": "c07e1c29-eacf-4c4c-8a07-0fe4d32426f3", 00:37:17.564 "is_configured": true, 00:37:17.564 "data_offset": 256, 00:37:17.564 "data_size": 7936 00:37:17.564 } 00:37:17.564 ] 00:37:17.564 } 00:37:17.564 } 00:37:17.564 }' 00:37:17.564 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:17.822 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:37:17.822 BaseBdev2' 00:37:17.822 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:17.822 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:37:17.822 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:17.822 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:17.822 "name": "BaseBdev1", 00:37:17.822 "aliases": [ 00:37:17.822 "619529f4-38e7-4b12-9b0a-a17674285c0b" 00:37:17.822 ], 00:37:17.822 "product_name": "Malloc disk", 00:37:17.822 "block_size": 4096, 00:37:17.822 "num_blocks": 8192, 00:37:17.822 "uuid": "619529f4-38e7-4b12-9b0a-a17674285c0b", 00:37:17.822 "md_size": 32, 00:37:17.822 "md_interleave": false, 00:37:17.822 "dif_type": 0, 00:37:17.822 "assigned_rate_limits": { 00:37:17.822 "rw_ios_per_sec": 0, 00:37:17.822 "rw_mbytes_per_sec": 0, 00:37:17.822 "r_mbytes_per_sec": 0, 00:37:17.822 "w_mbytes_per_sec": 0 00:37:17.822 }, 00:37:17.822 "claimed": true, 00:37:17.822 "claim_type": "exclusive_write", 00:37:17.822 "zoned": false, 00:37:17.822 "supported_io_types": { 00:37:17.822 "read": true, 00:37:17.822 "write": true, 00:37:17.822 "unmap": true, 00:37:17.822 "flush": true, 00:37:17.822 "reset": true, 00:37:17.822 "nvme_admin": false, 00:37:17.822 "nvme_io": false, 00:37:17.822 "nvme_io_md": false, 00:37:17.822 "write_zeroes": true, 00:37:17.822 "zcopy": true, 00:37:17.822 "get_zone_info": false, 00:37:17.822 "zone_management": false, 00:37:17.822 "zone_append": false, 00:37:17.822 "compare": false, 00:37:17.822 "compare_and_write": false, 00:37:17.822 "abort": true, 00:37:17.822 "seek_hole": false, 00:37:17.822 "seek_data": false, 00:37:17.822 "copy": true, 00:37:17.822 "nvme_iov_md": false 00:37:17.822 }, 00:37:17.822 "memory_domains": [ 00:37:17.822 { 00:37:17.822 "dma_device_id": "system", 00:37:17.822 "dma_device_type": 1 00:37:17.822 }, 00:37:17.822 { 00:37:17.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:17.822 "dma_device_type": 2 00:37:17.822 } 00:37:17.822 ], 00:37:17.822 "driver_specific": {} 00:37:17.822 }' 00:37:17.822 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:18.080 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:18.080 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:18.080 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:18.080 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:18.080 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 
== 32 ]] 00:37:18.080 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:18.080 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:18.337 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:37:18.338 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:18.338 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:18.338 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:18.338 11:48:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:18.338 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:37:18.338 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:18.595 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:18.595 "name": "BaseBdev2", 00:37:18.595 "aliases": [ 00:37:18.595 "c07e1c29-eacf-4c4c-8a07-0fe4d32426f3" 00:37:18.595 ], 00:37:18.595 "product_name": "Malloc disk", 00:37:18.595 "block_size": 4096, 00:37:18.595 "num_blocks": 8192, 00:37:18.595 "uuid": "c07e1c29-eacf-4c4c-8a07-0fe4d32426f3", 00:37:18.595 "md_size": 32, 00:37:18.595 "md_interleave": false, 00:37:18.595 "dif_type": 0, 00:37:18.595 "assigned_rate_limits": { 00:37:18.595 "rw_ios_per_sec": 0, 00:37:18.595 "rw_mbytes_per_sec": 0, 00:37:18.595 "r_mbytes_per_sec": 0, 00:37:18.595 "w_mbytes_per_sec": 0 00:37:18.595 }, 00:37:18.595 "claimed": true, 00:37:18.595 "claim_type": "exclusive_write", 00:37:18.595 "zoned": false, 00:37:18.595 "supported_io_types": { 00:37:18.595 "read": true, 00:37:18.595 "write": true, 00:37:18.595 "unmap": true, 00:37:18.595 "flush": true, 00:37:18.595 "reset": true, 00:37:18.595 "nvme_admin": false, 00:37:18.595 "nvme_io": false, 00:37:18.595 "nvme_io_md": false, 00:37:18.595 "write_zeroes": true, 00:37:18.595 "zcopy": true, 00:37:18.595 "get_zone_info": false, 00:37:18.595 "zone_management": false, 00:37:18.595 "zone_append": false, 00:37:18.595 "compare": false, 00:37:18.595 "compare_and_write": false, 00:37:18.595 "abort": true, 00:37:18.595 "seek_hole": false, 00:37:18.595 "seek_data": false, 00:37:18.595 "copy": true, 00:37:18.595 "nvme_iov_md": false 00:37:18.595 }, 00:37:18.595 "memory_domains": [ 00:37:18.595 { 00:37:18.595 "dma_device_id": "system", 00:37:18.595 "dma_device_type": 1 00:37:18.595 }, 00:37:18.595 { 00:37:18.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:18.595 "dma_device_type": 2 00:37:18.595 } 00:37:18.595 ], 00:37:18.595 "driver_specific": {} 00:37:18.595 }' 00:37:18.595 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:18.595 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:18.595 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:18.595 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:18.853 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:18.853 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:18.853 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:18.853 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:18.853 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:37:18.853 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:18.853 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:19.111 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:19.112 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:37:19.112 [2024-07-13 11:48:53.864743] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:19.369 11:48:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:19.627 11:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:19.627 "name": 
"Existed_Raid", 00:37:19.627 "uuid": "dba90fe8-7489-4661-a181-68094fc2f2aa", 00:37:19.627 "strip_size_kb": 0, 00:37:19.627 "state": "online", 00:37:19.627 "raid_level": "raid1", 00:37:19.627 "superblock": true, 00:37:19.627 "num_base_bdevs": 2, 00:37:19.627 "num_base_bdevs_discovered": 1, 00:37:19.627 "num_base_bdevs_operational": 1, 00:37:19.627 "base_bdevs_list": [ 00:37:19.627 { 00:37:19.627 "name": null, 00:37:19.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:19.627 "is_configured": false, 00:37:19.627 "data_offset": 256, 00:37:19.627 "data_size": 7936 00:37:19.627 }, 00:37:19.627 { 00:37:19.627 "name": "BaseBdev2", 00:37:19.627 "uuid": "c07e1c29-eacf-4c4c-8a07-0fe4d32426f3", 00:37:19.627 "is_configured": true, 00:37:19.627 "data_offset": 256, 00:37:19.627 "data_size": 7936 00:37:19.627 } 00:37:19.627 ] 00:37:19.627 }' 00:37:19.627 11:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:19.627 11:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:20.193 11:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:37:20.193 11:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:20.193 11:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:20.193 11:48:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:37:20.451 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:37:20.451 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:20.451 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:37:20.710 [2024-07-13 11:48:55.258534] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:20.710 [2024-07-13 11:48:55.258707] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:20.710 [2024-07-13 11:48:55.364724] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:20.710 [2024-07-13 11:48:55.364771] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:20.710 [2024-07-13 11:48:55.364782] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:37:20.710 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:37:20.710 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:37:20.710 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:20.710 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 162167 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 162167 ']' 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 162167 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162167 00:37:20.969 killing process with pid 162167 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162167' 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 162167 00:37:20.969 11:48:55 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 162167 00:37:20.969 [2024-07-13 11:48:55.666819] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:20.969 [2024-07-13 11:48:55.667221] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:21.902 ************************************ 00:37:21.902 END TEST raid_state_function_test_sb_md_separate 00:37:21.902 ************************************ 00:37:21.902 11:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:37:21.902 00:37:21.902 real 0m11.610s 00:37:21.902 user 0m20.620s 00:37:21.902 sys 0m1.421s 00:37:21.902 11:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:21.902 11:48:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:21.902 11:48:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:21.902 11:48:56 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:37:21.902 11:48:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:37:21.902 11:48:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:21.902 11:48:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:21.902 ************************************ 00:37:21.902 START TEST raid_superblock_test_md_separate 00:37:21.902 ************************************ 00:37:21.902 11:48:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:37:21.902 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:37:21.902 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:37:21.902 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 
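The superblock test starting here builds its raid on top of passthru bdevs rather than raw malloc bdevs, so the raid superblock and the separate metadata are exercised through an extra bdev layer. A rough sketch of that setup, using only rpc.py calls that appear verbatim further down in this trace (the UUIDs are the fixed ones the test passes; rpc is the same shorthand as in the sketch above):

    # Base devices: 32 MiB malloc bdevs with 4096-byte blocks and 32 bytes of separate metadata.
    rpc bdev_malloc_create 32 4096 -m 32 -b malloc1
    rpc bdev_malloc_create 32 4096 -m 32 -b malloc2

    # Wrap each malloc bdev in a passthru bdev with a fixed UUID.
    rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # Assemble raid1 across the two passthru bdevs, writing a superblock (-s).
    rpc bdev_raid_create -s -r raid1 -b 'pt1 pt2' -n raid_bdev1
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # online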
00:37:21.902 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=162552 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 162552 /var/tmp/spdk-raid.sock 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 162552 ']' 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:21.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:21.903 11:48:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:22.161 [2024-07-13 11:48:56.711181] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
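Further down, verify_raid_bdev_properties checks that the assembled raid volume reports the same md-separate geometry as its base bdevs. Those checks boil down to jq probes of bdev_get_bdevs output along the following lines; the hard-coded bdev list and the error echoes are illustrative simplifications, with the expected values taken from the trace itself (4096-byte blocks, 32-byte separate metadata, no interleave, no DIF):

    for b in raid_bdev1 pt1 pt2; do
        info=$(rpc bdev_get_bdevs -b "$b" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 4096  ]] || echo "$b: unexpected block_size"
        [[ $(jq .md_size       <<< "$info") == 32    ]] || echo "$b: unexpected md_size"
        [[ $(jq .md_interleave <<< "$info") == false ]] || echo "$b: metadata is interleaved"
        [[ $(jq .dif_type      <<< "$info") == 0     ]] || echo "$b: unexpected dif_type"
    done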
00:37:22.161 [2024-07-13 11:48:56.711562] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162552 ] 00:37:22.161 [2024-07-13 11:48:56.882993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.421 [2024-07-13 11:48:57.124630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.680 [2024-07-13 11:48:57.313836] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:22.938 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:37:23.197 malloc1 00:37:23.197 11:48:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:23.456 [2024-07-13 11:48:58.040556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:23.456 [2024-07-13 11:48:58.040810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.456 [2024-07-13 11:48:58.040991] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:37:23.456 [2024-07-13 11:48:58.041107] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.456 [2024-07-13 11:48:58.043160] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.456 [2024-07-13 11:48:58.043336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:23.456 pt1 00:37:23.456 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:23.456 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:23.456 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:37:23.456 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:37:23.456 
11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:23.456 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:23.456 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:37:23.456 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:23.456 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:37:23.715 malloc2 00:37:23.715 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:23.715 [2024-07-13 11:48:58.467773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:23.715 [2024-07-13 11:48:58.468010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.715 [2024-07-13 11:48:58.468205] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:37:23.715 [2024-07-13 11:48:58.468358] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.716 [2024-07-13 11:48:58.470679] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.716 [2024-07-13 11:48:58.470914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:23.974 pt2 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:37:23.974 [2024-07-13 11:48:58.715863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:23.974 [2024-07-13 11:48:58.717892] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:23.974 [2024-07-13 11:48:58.718227] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:37:23.974 [2024-07-13 11:48:58.718344] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:23.974 [2024-07-13 11:48:58.718486] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:37:23.974 [2024-07-13 11:48:58.718712] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:37:23.974 [2024-07-13 11:48:58.718821] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:37:23.974 [2024-07-13 11:48:58.719044] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:23.974 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:24.233 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.233 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.233 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:24.233 "name": "raid_bdev1", 00:37:24.233 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:24.233 "strip_size_kb": 0, 00:37:24.233 "state": "online", 00:37:24.233 "raid_level": "raid1", 00:37:24.233 "superblock": true, 00:37:24.233 "num_base_bdevs": 2, 00:37:24.233 "num_base_bdevs_discovered": 2, 00:37:24.233 "num_base_bdevs_operational": 2, 00:37:24.233 "base_bdevs_list": [ 00:37:24.233 { 00:37:24.233 "name": "pt1", 00:37:24.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:24.233 "is_configured": true, 00:37:24.233 "data_offset": 256, 00:37:24.233 "data_size": 7936 00:37:24.233 }, 00:37:24.233 { 00:37:24.233 "name": "pt2", 00:37:24.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:24.233 "is_configured": true, 00:37:24.233 "data_offset": 256, 00:37:24.233 "data_size": 7936 00:37:24.233 } 00:37:24.233 ] 00:37:24.233 }' 00:37:24.233 11:48:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:24.233 11:48:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:24.801 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:37:24.801 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:24.801 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:24.801 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:24.801 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:24.801 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:37:24.801 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:24.801 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:25.061 [2024-07-13 11:48:59.712184] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:25.061 
11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:25.061 "name": "raid_bdev1", 00:37:25.061 "aliases": [ 00:37:25.061 "2b2f4274-720e-4ad8-a73d-bfd569c4f072" 00:37:25.061 ], 00:37:25.061 "product_name": "Raid Volume", 00:37:25.061 "block_size": 4096, 00:37:25.061 "num_blocks": 7936, 00:37:25.061 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:25.061 "md_size": 32, 00:37:25.061 "md_interleave": false, 00:37:25.061 "dif_type": 0, 00:37:25.061 "assigned_rate_limits": { 00:37:25.061 "rw_ios_per_sec": 0, 00:37:25.061 "rw_mbytes_per_sec": 0, 00:37:25.061 "r_mbytes_per_sec": 0, 00:37:25.061 "w_mbytes_per_sec": 0 00:37:25.061 }, 00:37:25.061 "claimed": false, 00:37:25.061 "zoned": false, 00:37:25.061 "supported_io_types": { 00:37:25.061 "read": true, 00:37:25.061 "write": true, 00:37:25.061 "unmap": false, 00:37:25.061 "flush": false, 00:37:25.061 "reset": true, 00:37:25.061 "nvme_admin": false, 00:37:25.061 "nvme_io": false, 00:37:25.061 "nvme_io_md": false, 00:37:25.061 "write_zeroes": true, 00:37:25.061 "zcopy": false, 00:37:25.061 "get_zone_info": false, 00:37:25.061 "zone_management": false, 00:37:25.061 "zone_append": false, 00:37:25.061 "compare": false, 00:37:25.061 "compare_and_write": false, 00:37:25.061 "abort": false, 00:37:25.061 "seek_hole": false, 00:37:25.061 "seek_data": false, 00:37:25.061 "copy": false, 00:37:25.061 "nvme_iov_md": false 00:37:25.061 }, 00:37:25.061 "memory_domains": [ 00:37:25.061 { 00:37:25.061 "dma_device_id": "system", 00:37:25.061 "dma_device_type": 1 00:37:25.061 }, 00:37:25.061 { 00:37:25.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:25.061 "dma_device_type": 2 00:37:25.061 }, 00:37:25.061 { 00:37:25.061 "dma_device_id": "system", 00:37:25.061 "dma_device_type": 1 00:37:25.061 }, 00:37:25.061 { 00:37:25.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:25.061 "dma_device_type": 2 00:37:25.061 } 00:37:25.061 ], 00:37:25.061 "driver_specific": { 00:37:25.061 "raid": { 00:37:25.061 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:25.061 "strip_size_kb": 0, 00:37:25.061 "state": "online", 00:37:25.061 "raid_level": "raid1", 00:37:25.061 "superblock": true, 00:37:25.061 "num_base_bdevs": 2, 00:37:25.061 "num_base_bdevs_discovered": 2, 00:37:25.061 "num_base_bdevs_operational": 2, 00:37:25.061 "base_bdevs_list": [ 00:37:25.061 { 00:37:25.061 "name": "pt1", 00:37:25.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:25.061 "is_configured": true, 00:37:25.061 "data_offset": 256, 00:37:25.061 "data_size": 7936 00:37:25.061 }, 00:37:25.061 { 00:37:25.061 "name": "pt2", 00:37:25.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:25.061 "is_configured": true, 00:37:25.061 "data_offset": 256, 00:37:25.061 "data_size": 7936 00:37:25.061 } 00:37:25.061 ] 00:37:25.061 } 00:37:25.061 } 00:37:25.061 }' 00:37:25.061 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:25.061 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:25.061 pt2' 00:37:25.061 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:25.061 11:48:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:25.061 11:48:59 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:25.321 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:25.321 "name": "pt1", 00:37:25.321 "aliases": [ 00:37:25.321 "00000000-0000-0000-0000-000000000001" 00:37:25.321 ], 00:37:25.321 "product_name": "passthru", 00:37:25.321 "block_size": 4096, 00:37:25.321 "num_blocks": 8192, 00:37:25.321 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:25.321 "md_size": 32, 00:37:25.321 "md_interleave": false, 00:37:25.321 "dif_type": 0, 00:37:25.321 "assigned_rate_limits": { 00:37:25.321 "rw_ios_per_sec": 0, 00:37:25.321 "rw_mbytes_per_sec": 0, 00:37:25.321 "r_mbytes_per_sec": 0, 00:37:25.321 "w_mbytes_per_sec": 0 00:37:25.321 }, 00:37:25.321 "claimed": true, 00:37:25.321 "claim_type": "exclusive_write", 00:37:25.321 "zoned": false, 00:37:25.321 "supported_io_types": { 00:37:25.321 "read": true, 00:37:25.321 "write": true, 00:37:25.321 "unmap": true, 00:37:25.321 "flush": true, 00:37:25.321 "reset": true, 00:37:25.321 "nvme_admin": false, 00:37:25.321 "nvme_io": false, 00:37:25.321 "nvme_io_md": false, 00:37:25.321 "write_zeroes": true, 00:37:25.321 "zcopy": true, 00:37:25.321 "get_zone_info": false, 00:37:25.321 "zone_management": false, 00:37:25.321 "zone_append": false, 00:37:25.321 "compare": false, 00:37:25.321 "compare_and_write": false, 00:37:25.321 "abort": true, 00:37:25.321 "seek_hole": false, 00:37:25.321 "seek_data": false, 00:37:25.321 "copy": true, 00:37:25.321 "nvme_iov_md": false 00:37:25.321 }, 00:37:25.321 "memory_domains": [ 00:37:25.321 { 00:37:25.321 "dma_device_id": "system", 00:37:25.321 "dma_device_type": 1 00:37:25.321 }, 00:37:25.321 { 00:37:25.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:25.321 "dma_device_type": 2 00:37:25.321 } 00:37:25.321 ], 00:37:25.321 "driver_specific": { 00:37:25.321 "passthru": { 00:37:25.321 "name": "pt1", 00:37:25.321 "base_bdev_name": "malloc1" 00:37:25.321 } 00:37:25.321 } 00:37:25.321 }' 00:37:25.321 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:25.580 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:25.580 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:25.580 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:25.580 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:25.580 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:25.580 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:25.580 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:25.837 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:37:25.837 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:25.837 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:25.837 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:25.837 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:25.837 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:25.837 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:26.095 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:26.095 "name": "pt2", 00:37:26.095 "aliases": [ 00:37:26.095 "00000000-0000-0000-0000-000000000002" 00:37:26.095 ], 00:37:26.095 "product_name": "passthru", 00:37:26.095 "block_size": 4096, 00:37:26.095 "num_blocks": 8192, 00:37:26.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:26.095 "md_size": 32, 00:37:26.095 "md_interleave": false, 00:37:26.095 "dif_type": 0, 00:37:26.095 "assigned_rate_limits": { 00:37:26.095 "rw_ios_per_sec": 0, 00:37:26.095 "rw_mbytes_per_sec": 0, 00:37:26.095 "r_mbytes_per_sec": 0, 00:37:26.095 "w_mbytes_per_sec": 0 00:37:26.095 }, 00:37:26.095 "claimed": true, 00:37:26.095 "claim_type": "exclusive_write", 00:37:26.095 "zoned": false, 00:37:26.095 "supported_io_types": { 00:37:26.095 "read": true, 00:37:26.095 "write": true, 00:37:26.095 "unmap": true, 00:37:26.095 "flush": true, 00:37:26.095 "reset": true, 00:37:26.095 "nvme_admin": false, 00:37:26.095 "nvme_io": false, 00:37:26.095 "nvme_io_md": false, 00:37:26.095 "write_zeroes": true, 00:37:26.095 "zcopy": true, 00:37:26.095 "get_zone_info": false, 00:37:26.095 "zone_management": false, 00:37:26.095 "zone_append": false, 00:37:26.095 "compare": false, 00:37:26.095 "compare_and_write": false, 00:37:26.095 "abort": true, 00:37:26.095 "seek_hole": false, 00:37:26.095 "seek_data": false, 00:37:26.095 "copy": true, 00:37:26.095 "nvme_iov_md": false 00:37:26.095 }, 00:37:26.095 "memory_domains": [ 00:37:26.095 { 00:37:26.095 "dma_device_id": "system", 00:37:26.095 "dma_device_type": 1 00:37:26.095 }, 00:37:26.095 { 00:37:26.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:26.095 "dma_device_type": 2 00:37:26.095 } 00:37:26.095 ], 00:37:26.095 "driver_specific": { 00:37:26.095 "passthru": { 00:37:26.095 "name": "pt2", 00:37:26.095 "base_bdev_name": "malloc2" 00:37:26.095 } 00:37:26.095 } 00:37:26.095 }' 00:37:26.095 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:26.095 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:26.095 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:26.095 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:26.095 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:26.095 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:26.095 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:26.353 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:26.353 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:37:26.353 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:26.353 11:49:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:26.353 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:26.353 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # 
jq -r '.[] | .uuid' 00:37:26.353 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:26.610 [2024-07-13 11:49:01.244541] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:26.611 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2b2f4274-720e-4ad8-a73d-bfd569c4f072 00:37:26.611 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 2b2f4274-720e-4ad8-a73d-bfd569c4f072 ']' 00:37:26.611 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:26.868 [2024-07-13 11:49:01.484302] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:26.868 [2024-07-13 11:49:01.484431] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:26.868 [2024-07-13 11:49:01.484617] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:26.868 [2024-07-13 11:49:01.484791] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:26.868 [2024-07-13 11:49:01.484894] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:37:26.868 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.868 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:37:27.126 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:37:27.126 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:37:27.126 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:27.126 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:27.126 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:37:27.126 11:49:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:27.384 11:49:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:37:27.384 11:49:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:27.642 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:27.900 [2024-07-13 11:49:02.576484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:27.900 [2024-07-13 11:49:02.578195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:27.900 [2024-07-13 11:49:02.578383] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:27.900 [2024-07-13 11:49:02.578582] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:27.900 [2024-07-13 11:49:02.578720] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:27.900 [2024-07-13 11:49:02.578762] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:37:27.900 request: 00:37:27.900 { 00:37:27.900 "name": "raid_bdev1", 00:37:27.900 "raid_level": "raid1", 00:37:27.900 "base_bdevs": [ 00:37:27.900 "malloc1", 00:37:27.900 "malloc2" 00:37:27.900 ], 00:37:27.900 "superblock": false, 00:37:27.900 "method": "bdev_raid_create", 00:37:27.900 "req_id": 1 00:37:27.900 } 00:37:27.900 Got JSON-RPC error response 00:37:27.900 response: 00:37:27.900 { 00:37:27.900 "code": -17, 00:37:27.900 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:27.900 } 00:37:27.900 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:37:27.900 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:27.900 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:27.900 11:49:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:27.900 11:49:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
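The failed bdev_raid_create traced above is the stale-superblock check: raid_bdev1 was deleted, but the superblock it wrote (the array was created with -s) is still present on malloc1 and malloc2, so building a new array directly on those base bdevs is refused with JSON-RPC error -17 (File exists). A minimal sketch of that expected-failure step in plain bash, reusing only the rpc.py invocation shown in the trace; the suite's NOT helper wraps the same idea, and the error text quoted in the comment is taken from the response above:

    # this create must fail: malloc1/malloc2 still carry raid_bdev1's superblock
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
        echo "unexpected success: stale superblock on the base bdevs was not detected" >&2
        exit 1
    fi
    # a non-zero exit here corresponds to the -17 "File exists" JSON-RPC response above
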
00:37:27.900 11:49:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:37:28.158 11:49:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:37:28.158 11:49:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:37:28.158 11:49:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:28.423 [2024-07-13 11:49:03.036559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:28.423 [2024-07-13 11:49:03.036762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:28.423 [2024-07-13 11:49:03.036823] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:37:28.423 [2024-07-13 11:49:03.036942] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:28.423 [2024-07-13 11:49:03.038744] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:28.423 [2024-07-13 11:49:03.038970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:28.423 [2024-07-13 11:49:03.039163] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:28.423 [2024-07-13 11:49:03.039336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:28.423 pt1 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:28.423 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:28.702 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:28.702 "name": "raid_bdev1", 00:37:28.702 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:28.702 "strip_size_kb": 0, 00:37:28.702 "state": "configuring", 00:37:28.702 "raid_level": "raid1", 00:37:28.702 "superblock": true, 00:37:28.702 "num_base_bdevs": 2, 00:37:28.702 "num_base_bdevs_discovered": 1, 00:37:28.702 
"num_base_bdevs_operational": 2, 00:37:28.702 "base_bdevs_list": [ 00:37:28.702 { 00:37:28.702 "name": "pt1", 00:37:28.702 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:28.702 "is_configured": true, 00:37:28.702 "data_offset": 256, 00:37:28.702 "data_size": 7936 00:37:28.702 }, 00:37:28.702 { 00:37:28.702 "name": null, 00:37:28.702 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:28.702 "is_configured": false, 00:37:28.702 "data_offset": 256, 00:37:28.702 "data_size": 7936 00:37:28.702 } 00:37:28.702 ] 00:37:28.702 }' 00:37:28.702 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:28.702 11:49:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:29.298 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:37:29.298 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:37:29.298 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:29.298 11:49:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:29.557 [2024-07-13 11:49:04.100779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:29.557 [2024-07-13 11:49:04.100991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:29.557 [2024-07-13 11:49:04.101054] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:37:29.557 [2024-07-13 11:49:04.101173] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:29.557 [2024-07-13 11:49:04.101471] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:29.557 [2024-07-13 11:49:04.101632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:29.557 [2024-07-13 11:49:04.101818] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:29.557 [2024-07-13 11:49:04.101933] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:29.557 [2024-07-13 11:49:04.102057] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:37:29.557 [2024-07-13 11:49:04.102207] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:29.557 [2024-07-13 11:49:04.102334] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:37:29.557 [2024-07-13 11:49:04.102596] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:37:29.557 [2024-07-13 11:49:04.102706] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:37:29.557 [2024-07-13 11:49:04.102914] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:29.557 pt2 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:29.557 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.815 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:29.815 "name": "raid_bdev1", 00:37:29.815 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:29.815 "strip_size_kb": 0, 00:37:29.815 "state": "online", 00:37:29.815 "raid_level": "raid1", 00:37:29.815 "superblock": true, 00:37:29.815 "num_base_bdevs": 2, 00:37:29.815 "num_base_bdevs_discovered": 2, 00:37:29.815 "num_base_bdevs_operational": 2, 00:37:29.815 "base_bdevs_list": [ 00:37:29.815 { 00:37:29.815 "name": "pt1", 00:37:29.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:29.815 "is_configured": true, 00:37:29.815 "data_offset": 256, 00:37:29.815 "data_size": 7936 00:37:29.815 }, 00:37:29.815 { 00:37:29.815 "name": "pt2", 00:37:29.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:29.815 "is_configured": true, 00:37:29.815 "data_offset": 256, 00:37:29.815 "data_size": 7936 00:37:29.815 } 00:37:29.815 ] 00:37:29.815 }' 00:37:29.815 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:29.815 11:49:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:30.383 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:37:30.383 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:30.383 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:30.383 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:30.383 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:30.383 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:37:30.383 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:30.383 11:49:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
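The verify_raid_bdev_state call that ran just above, after both passthru bdevs were recreated, reduces to pulling raid_bdev1's entry out of bdev_raid_get_bdevs and comparing a few fields against the expected values (online, raid1, two base bdevs). A minimal sketch of that pattern with the same RPC and jq filter the trace uses; the rpc/sock/info variable names are illustrative, not the suite's:

    set -e  # any failed comparison aborts the step
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # grab raid_bdev1's entry from the raid bdev listing
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # assert the fields the test parameterizes: state, level, discovered/operational counts
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$info") == raid1 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 2 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") == 2 ]]

The same comparison with expected counts of 1 is what the degraded-state check later in the trace performs after pt1 is removed.
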
00:37:30.642 [2024-07-13 11:49:05.141199] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:30.642 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:30.642 "name": "raid_bdev1", 00:37:30.642 "aliases": [ 00:37:30.642 "2b2f4274-720e-4ad8-a73d-bfd569c4f072" 00:37:30.642 ], 00:37:30.642 "product_name": "Raid Volume", 00:37:30.642 "block_size": 4096, 00:37:30.642 "num_blocks": 7936, 00:37:30.642 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:30.642 "md_size": 32, 00:37:30.642 "md_interleave": false, 00:37:30.642 "dif_type": 0, 00:37:30.642 "assigned_rate_limits": { 00:37:30.642 "rw_ios_per_sec": 0, 00:37:30.642 "rw_mbytes_per_sec": 0, 00:37:30.642 "r_mbytes_per_sec": 0, 00:37:30.642 "w_mbytes_per_sec": 0 00:37:30.642 }, 00:37:30.642 "claimed": false, 00:37:30.642 "zoned": false, 00:37:30.642 "supported_io_types": { 00:37:30.642 "read": true, 00:37:30.642 "write": true, 00:37:30.642 "unmap": false, 00:37:30.642 "flush": false, 00:37:30.642 "reset": true, 00:37:30.642 "nvme_admin": false, 00:37:30.642 "nvme_io": false, 00:37:30.642 "nvme_io_md": false, 00:37:30.642 "write_zeroes": true, 00:37:30.642 "zcopy": false, 00:37:30.642 "get_zone_info": false, 00:37:30.642 "zone_management": false, 00:37:30.642 "zone_append": false, 00:37:30.642 "compare": false, 00:37:30.642 "compare_and_write": false, 00:37:30.642 "abort": false, 00:37:30.642 "seek_hole": false, 00:37:30.642 "seek_data": false, 00:37:30.642 "copy": false, 00:37:30.642 "nvme_iov_md": false 00:37:30.642 }, 00:37:30.642 "memory_domains": [ 00:37:30.642 { 00:37:30.642 "dma_device_id": "system", 00:37:30.642 "dma_device_type": 1 00:37:30.642 }, 00:37:30.642 { 00:37:30.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:30.642 "dma_device_type": 2 00:37:30.642 }, 00:37:30.642 { 00:37:30.642 "dma_device_id": "system", 00:37:30.642 "dma_device_type": 1 00:37:30.642 }, 00:37:30.642 { 00:37:30.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:30.642 "dma_device_type": 2 00:37:30.642 } 00:37:30.642 ], 00:37:30.642 "driver_specific": { 00:37:30.642 "raid": { 00:37:30.642 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:30.642 "strip_size_kb": 0, 00:37:30.642 "state": "online", 00:37:30.642 "raid_level": "raid1", 00:37:30.642 "superblock": true, 00:37:30.642 "num_base_bdevs": 2, 00:37:30.642 "num_base_bdevs_discovered": 2, 00:37:30.642 "num_base_bdevs_operational": 2, 00:37:30.642 "base_bdevs_list": [ 00:37:30.642 { 00:37:30.642 "name": "pt1", 00:37:30.642 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:30.642 "is_configured": true, 00:37:30.642 "data_offset": 256, 00:37:30.642 "data_size": 7936 00:37:30.642 }, 00:37:30.642 { 00:37:30.642 "name": "pt2", 00:37:30.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:30.642 "is_configured": true, 00:37:30.642 "data_offset": 256, 00:37:30.642 "data_size": 7936 00:37:30.642 } 00:37:30.642 ] 00:37:30.642 } 00:37:30.642 } 00:37:30.642 }' 00:37:30.643 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:30.643 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:30.643 pt2' 00:37:30.643 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:30.643 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:30.643 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:30.901 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:30.901 "name": "pt1", 00:37:30.901 "aliases": [ 00:37:30.901 "00000000-0000-0000-0000-000000000001" 00:37:30.901 ], 00:37:30.901 "product_name": "passthru", 00:37:30.901 "block_size": 4096, 00:37:30.901 "num_blocks": 8192, 00:37:30.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:30.901 "md_size": 32, 00:37:30.901 "md_interleave": false, 00:37:30.901 "dif_type": 0, 00:37:30.901 "assigned_rate_limits": { 00:37:30.901 "rw_ios_per_sec": 0, 00:37:30.901 "rw_mbytes_per_sec": 0, 00:37:30.901 "r_mbytes_per_sec": 0, 00:37:30.901 "w_mbytes_per_sec": 0 00:37:30.901 }, 00:37:30.901 "claimed": true, 00:37:30.901 "claim_type": "exclusive_write", 00:37:30.901 "zoned": false, 00:37:30.901 "supported_io_types": { 00:37:30.901 "read": true, 00:37:30.901 "write": true, 00:37:30.901 "unmap": true, 00:37:30.901 "flush": true, 00:37:30.901 "reset": true, 00:37:30.901 "nvme_admin": false, 00:37:30.901 "nvme_io": false, 00:37:30.901 "nvme_io_md": false, 00:37:30.901 "write_zeroes": true, 00:37:30.901 "zcopy": true, 00:37:30.901 "get_zone_info": false, 00:37:30.901 "zone_management": false, 00:37:30.901 "zone_append": false, 00:37:30.901 "compare": false, 00:37:30.901 "compare_and_write": false, 00:37:30.901 "abort": true, 00:37:30.901 "seek_hole": false, 00:37:30.901 "seek_data": false, 00:37:30.901 "copy": true, 00:37:30.901 "nvme_iov_md": false 00:37:30.901 }, 00:37:30.901 "memory_domains": [ 00:37:30.901 { 00:37:30.901 "dma_device_id": "system", 00:37:30.901 "dma_device_type": 1 00:37:30.901 }, 00:37:30.901 { 00:37:30.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:30.901 "dma_device_type": 2 00:37:30.901 } 00:37:30.901 ], 00:37:30.901 "driver_specific": { 00:37:30.901 "passthru": { 00:37:30.901 "name": "pt1", 00:37:30.901 "base_bdev_name": "malloc1" 00:37:30.901 } 00:37:30.901 } 00:37:30.901 }' 00:37:30.901 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:30.901 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:30.901 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:30.901 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:30.901 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:31.159 11:49:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:31.726 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:31.726 "name": "pt2", 00:37:31.726 "aliases": [ 00:37:31.726 "00000000-0000-0000-0000-000000000002" 00:37:31.726 ], 00:37:31.726 "product_name": "passthru", 00:37:31.726 "block_size": 4096, 00:37:31.726 "num_blocks": 8192, 00:37:31.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:31.726 "md_size": 32, 00:37:31.726 "md_interleave": false, 00:37:31.726 "dif_type": 0, 00:37:31.726 "assigned_rate_limits": { 00:37:31.726 "rw_ios_per_sec": 0, 00:37:31.726 "rw_mbytes_per_sec": 0, 00:37:31.726 "r_mbytes_per_sec": 0, 00:37:31.726 "w_mbytes_per_sec": 0 00:37:31.726 }, 00:37:31.726 "claimed": true, 00:37:31.726 "claim_type": "exclusive_write", 00:37:31.726 "zoned": false, 00:37:31.726 "supported_io_types": { 00:37:31.726 "read": true, 00:37:31.726 "write": true, 00:37:31.726 "unmap": true, 00:37:31.726 "flush": true, 00:37:31.726 "reset": true, 00:37:31.726 "nvme_admin": false, 00:37:31.726 "nvme_io": false, 00:37:31.726 "nvme_io_md": false, 00:37:31.726 "write_zeroes": true, 00:37:31.726 "zcopy": true, 00:37:31.726 "get_zone_info": false, 00:37:31.726 "zone_management": false, 00:37:31.726 "zone_append": false, 00:37:31.726 "compare": false, 00:37:31.726 "compare_and_write": false, 00:37:31.726 "abort": true, 00:37:31.726 "seek_hole": false, 00:37:31.726 "seek_data": false, 00:37:31.726 "copy": true, 00:37:31.726 "nvme_iov_md": false 00:37:31.726 }, 00:37:31.726 "memory_domains": [ 00:37:31.726 { 00:37:31.726 "dma_device_id": "system", 00:37:31.726 "dma_device_type": 1 00:37:31.726 }, 00:37:31.726 { 00:37:31.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:31.726 "dma_device_type": 2 00:37:31.726 } 00:37:31.726 ], 00:37:31.726 "driver_specific": { 00:37:31.726 "passthru": { 00:37:31.726 "name": "pt2", 00:37:31.726 "base_bdev_name": "malloc2" 00:37:31.726 } 00:37:31.726 } 00:37:31.726 }' 00:37:31.726 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:31.726 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:31.726 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:37:31.726 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:31.726 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:31.726 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:31.726 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.726 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:31.984 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:37:31.984 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.984 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:31.984 11:49:06 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:31.984 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:31.984 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:37:32.243 [2024-07-13 11:49:06.857463] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:32.243 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 2b2f4274-720e-4ad8-a73d-bfd569c4f072 '!=' 2b2f4274-720e-4ad8-a73d-bfd569c4f072 ']' 00:37:32.243 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:37:32.243 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:32.243 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:37:32.243 11:49:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:32.502 [2024-07-13 11:49:07.101334] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:32.502 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.761 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:32.761 "name": "raid_bdev1", 00:37:32.761 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:32.761 "strip_size_kb": 0, 00:37:32.761 "state": "online", 00:37:32.761 "raid_level": "raid1", 00:37:32.761 "superblock": true, 00:37:32.761 "num_base_bdevs": 2, 00:37:32.761 "num_base_bdevs_discovered": 1, 00:37:32.761 "num_base_bdevs_operational": 1, 00:37:32.761 "base_bdevs_list": [ 00:37:32.761 { 00:37:32.761 "name": null, 00:37:32.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.761 "is_configured": false, 00:37:32.761 "data_offset": 256, 00:37:32.761 "data_size": 7936 00:37:32.761 }, 
00:37:32.761 { 00:37:32.761 "name": "pt2", 00:37:32.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:32.761 "is_configured": true, 00:37:32.761 "data_offset": 256, 00:37:32.761 "data_size": 7936 00:37:32.761 } 00:37:32.761 ] 00:37:32.761 }' 00:37:32.761 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:32.761 11:49:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:33.329 11:49:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:33.587 [2024-07-13 11:49:08.209537] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:33.587 [2024-07-13 11:49:08.209682] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:33.588 [2024-07-13 11:49:08.209833] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:33.588 [2024-07-13 11:49:08.209990] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:33.588 [2024-07-13 11:49:08.210094] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:37:33.588 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.588 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:37:33.846 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:37:33.846 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:37:33.846 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:37:33.846 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:33.846 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:34.105 [2024-07-13 11:49:08.779156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:34.105 [2024-07-13 11:49:08.779353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:34.105 [2024-07-13 11:49:08.779410] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:37:34.105 [2024-07-13 11:49:08.779669] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
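The @205-@208 checks traced earlier pin down the md-separate geometry of each passthru base bdev: a 4096-byte data block, a 32-byte separate metadata area, md_interleave false and dif_type 0. A minimal sketch of that per-bdev loop, using the bdev_get_bdevs call and the jq field accesses visible in the trace (the loop variable and the use of set -e are illustrative; under set -e any mismatch aborts the step):

    set -e
    for name in pt1 pt2; do
        bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
                   bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq '.block_size'    <<< "$bdev") == 4096 ]]   # data block size
        [[ $(jq '.md_size'       <<< "$bdev") == 32 ]]     # separate metadata area (-m 32 at creation)
        [[ $(jq '.md_interleave' <<< "$bdev") == false ]]  # metadata kept out of band, not interleaved
        [[ $(jq '.dif_type'      <<< "$bdev") == 0 ]]      # no DIF protection on the base bdevs
    done
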
00:37:34.105 [2024-07-13 11:49:08.781617] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:34.105 [2024-07-13 11:49:08.781809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:34.105 [2024-07-13 11:49:08.782000] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:34.105 [2024-07-13 11:49:08.782144] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:34.105 [2024-07-13 11:49:08.782259] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:37:34.105 [2024-07-13 11:49:08.782375] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:34.105 [2024-07-13 11:49:08.782507] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:34.105 [2024-07-13 11:49:08.782690] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:37:34.105 [2024-07-13 11:49:08.782777] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:37:34.105 [2024-07-13 11:49:08.782962] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:34.105 pt2 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:34.105 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:34.364 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:34.364 "name": "raid_bdev1", 00:37:34.364 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:34.364 "strip_size_kb": 0, 00:37:34.364 "state": "online", 00:37:34.364 "raid_level": "raid1", 00:37:34.364 "superblock": true, 00:37:34.364 "num_base_bdevs": 2, 00:37:34.364 "num_base_bdevs_discovered": 1, 00:37:34.364 "num_base_bdevs_operational": 1, 00:37:34.364 "base_bdevs_list": [ 00:37:34.364 { 00:37:34.364 "name": null, 00:37:34.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:34.364 "is_configured": false, 00:37:34.364 "data_offset": 256, 00:37:34.364 "data_size": 7936 00:37:34.364 }, 
00:37:34.364 { 00:37:34.364 "name": "pt2", 00:37:34.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:34.364 "is_configured": true, 00:37:34.364 "data_offset": 256, 00:37:34.364 "data_size": 7936 00:37:34.364 } 00:37:34.364 ] 00:37:34.364 }' 00:37:34.364 11:49:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:34.364 11:49:08 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:34.931 11:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:35.189 [2024-07-13 11:49:09.859424] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:35.189 [2024-07-13 11:49:09.859561] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:35.189 [2024-07-13 11:49:09.859722] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:35.189 [2024-07-13 11:49:09.859863] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:35.189 [2024-07-13 11:49:09.859966] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:37:35.189 11:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.189 11:49:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:37:35.447 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:37:35.447 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:37:35.447 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:37:35.447 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:35.705 [2024-07-13 11:49:10.335489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:35.705 [2024-07-13 11:49:10.335667] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:35.705 [2024-07-13 11:49:10.335732] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:35.705 [2024-07-13 11:49:10.335852] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:35.705 [2024-07-13 11:49:10.337501] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:35.705 [2024-07-13 11:49:10.337669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:35.705 [2024-07-13 11:49:10.337841] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:35.705 [2024-07-13 11:49:10.337989] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:35.705 [2024-07-13 11:49:10.338205] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:35.705 [2024-07-13 11:49:10.338295] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:35.705 [2024-07-13 11:49:10.338337] bdev_raid.c: 366:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:37:35.705 [2024-07-13 11:49:10.338473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:35.705 [2024-07-13 11:49:10.338579] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:37:35.705 [2024-07-13 11:49:10.338691] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:35.705 [2024-07-13 11:49:10.338896] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:35.705 [2024-07-13 11:49:10.339104] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:37:35.705 [2024-07-13 11:49:10.339193] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:37:35.705 [2024-07-13 11:49:10.339363] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:35.705 pt1 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.705 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.964 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:35.964 "name": "raid_bdev1", 00:37:35.964 "uuid": "2b2f4274-720e-4ad8-a73d-bfd569c4f072", 00:37:35.964 "strip_size_kb": 0, 00:37:35.964 "state": "online", 00:37:35.964 "raid_level": "raid1", 00:37:35.964 "superblock": true, 00:37:35.964 "num_base_bdevs": 2, 00:37:35.964 "num_base_bdevs_discovered": 1, 00:37:35.964 "num_base_bdevs_operational": 1, 00:37:35.964 "base_bdevs_list": [ 00:37:35.964 { 00:37:35.964 "name": null, 00:37:35.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:35.964 "is_configured": false, 00:37:35.964 "data_offset": 256, 00:37:35.964 "data_size": 7936 00:37:35.964 }, 00:37:35.964 { 00:37:35.964 "name": "pt2", 00:37:35.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:35.964 "is_configured": true, 00:37:35.964 "data_offset": 256, 
00:37:35.964 "data_size": 7936 00:37:35.964 } 00:37:35.964 ] 00:37:35.964 }' 00:37:35.964 11:49:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:35.964 11:49:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:36.530 11:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:37:36.530 11:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:36.787 11:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:37:36.787 11:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:36.787 11:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:37:37.046 [2024-07-13 11:49:11.631497] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2b2f4274-720e-4ad8-a73d-bfd569c4f072 '!=' 2b2f4274-720e-4ad8-a73d-bfd569c4f072 ']' 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 162552 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 162552 ']' 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 162552 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 162552 00:37:37.046 killing process with pid 162552 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 162552' 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 162552 00:37:37.046 11:49:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 162552 00:37:37.046 [2024-07-13 11:49:11.672655] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:37.046 [2024-07-13 11:49:11.672728] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:37.046 [2024-07-13 11:49:11.672780] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:37.046 [2024-07-13 11:49:11.672791] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:37:37.304 [2024-07-13 11:49:11.820889] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:38.238 ************************************ 00:37:38.238 END TEST raid_superblock_test_md_separate 00:37:38.238 ************************************ 00:37:38.238 
11:49:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:37:38.238 00:37:38.238 real 0m16.209s 00:37:38.238 user 0m29.791s 00:37:38.238 sys 0m1.836s 00:37:38.238 11:49:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:38.238 11:49:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:38.238 11:49:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:37:38.238 11:49:12 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:37:38.238 11:49:12 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:37:38.238 11:49:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:37:38.238 11:49:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:38.238 11:49:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:38.238 ************************************ 00:37:38.238 START TEST raid_rebuild_test_sb_md_separate 00:37:38.238 ************************************ 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:37:38.238 11:49:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=163103 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 163103 /var/tmp/spdk-raid.sock 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 163103 ']' 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:38.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:38.238 11:49:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:38.496 [2024-07-13 11:49:12.998307] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:37:38.496 [2024-07-13 11:49:12.998693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163103 ] 00:37:38.496 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:38.496 Zero copy mechanism will not be used. 
00:37:38.496 [2024-07-13 11:49:13.170021] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.754 [2024-07-13 11:49:13.399230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:39.012 [2024-07-13 11:49:13.589487] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:39.270 11:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:39.270 11:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:37:39.270 11:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:39.270 11:49:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:37:39.527 BaseBdev1_malloc 00:37:39.527 11:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:39.784 [2024-07-13 11:49:14.423785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:39.784 [2024-07-13 11:49:14.424149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:39.784 [2024-07-13 11:49:14.424221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:37:39.784 [2024-07-13 11:49:14.424496] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:39.784 [2024-07-13 11:49:14.426293] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:39.784 [2024-07-13 11:49:14.426452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:39.784 BaseBdev1 00:37:39.784 11:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:39.784 11:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:37:40.041 BaseBdev2_malloc 00:37:40.041 11:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:40.298 [2024-07-13 11:49:14.985086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:40.298 [2024-07-13 11:49:14.985397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:40.298 [2024-07-13 11:49:14.985467] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:37:40.298 [2024-07-13 11:49:14.985743] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:40.298 [2024-07-13 11:49:14.987455] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:40.299 [2024-07-13 11:49:14.987617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:40.299 BaseBdev2 00:37:40.299 11:49:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:37:40.557 spare_malloc 00:37:40.557 11:49:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:40.815 spare_delay 00:37:40.815 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:41.073 [2024-07-13 11:49:15.658886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:41.073 [2024-07-13 11:49:15.659119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:41.073 [2024-07-13 11:49:15.659189] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:37:41.073 [2024-07-13 11:49:15.659507] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:41.073 [2024-07-13 11:49:15.661660] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:41.073 [2024-07-13 11:49:15.661825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:41.073 spare 00:37:41.073 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:37:41.331 [2024-07-13 11:49:15.842969] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:41.331 [2024-07-13 11:49:15.845041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:41.331 [2024-07-13 11:49:15.845365] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:37:41.331 [2024-07-13 11:49:15.845489] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:41.331 [2024-07-13 11:49:15.845717] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:37:41.331 [2024-07-13 11:49:15.845917] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:37:41.331 [2024-07-13 11:49:15.846005] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:37:41.331 [2024-07-13 11:49:15.846172] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:41.331 11:49:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:41.331 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:41.331 "name": "raid_bdev1", 00:37:41.331 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:41.331 "strip_size_kb": 0, 00:37:41.331 "state": "online", 00:37:41.331 "raid_level": "raid1", 00:37:41.331 "superblock": true, 00:37:41.331 "num_base_bdevs": 2, 00:37:41.331 "num_base_bdevs_discovered": 2, 00:37:41.331 "num_base_bdevs_operational": 2, 00:37:41.331 "base_bdevs_list": [ 00:37:41.331 { 00:37:41.331 "name": "BaseBdev1", 00:37:41.331 "uuid": "51c3bd2b-c42d-51d3-a53b-27ed1e267f32", 00:37:41.331 "is_configured": true, 00:37:41.331 "data_offset": 256, 00:37:41.331 "data_size": 7936 00:37:41.331 }, 00:37:41.331 { 00:37:41.331 "name": "BaseBdev2", 00:37:41.331 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:41.332 "is_configured": true, 00:37:41.332 "data_offset": 256, 00:37:41.332 "data_size": 7936 00:37:41.332 } 00:37:41.332 ] 00:37:41.332 }' 00:37:41.332 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:41.332 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:41.897 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:41.897 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:37:42.155 [2024-07-13 11:49:16.791357] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:42.155 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:37:42.155 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:42.155 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:42.412 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:42.413 11:49:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:42.413 11:49:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:37:42.671 [2024-07-13 11:49:17.243261] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:37:42.671 /dev/nbd0 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:42.671 1+0 records in 00:37:42.671 1+0 records out 00:37:42.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386258 s, 10.6 MB/s 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:37:42.671 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:37:43.237 7936+0 records in 00:37:43.238 7936+0 records out 00:37:43.238 32505856 bytes (33 MB, 31 MiB) copied, 0.682065 s, 47.7 MB/s 00:37:43.238 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:37:43.238 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:43.238 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:43.238 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:43.238 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:37:43.238 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:43.238 11:49:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:43.496 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:43.496 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:43.496 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:43.496 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:43.496 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:43.496 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:43.496 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:37:43.496 [2024-07-13 11:49:18.249095] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:43.753 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:37:43.753 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:43.753 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:43.753 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:37:43.753 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:37:43.754 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:44.011 [2024-07-13 11:49:18.604803] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:44.011 11:49:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:44.011 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:44.269 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:44.269 "name": "raid_bdev1", 00:37:44.269 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:44.269 "strip_size_kb": 0, 00:37:44.269 "state": "online", 00:37:44.269 "raid_level": "raid1", 00:37:44.269 "superblock": true, 00:37:44.269 "num_base_bdevs": 2, 00:37:44.269 "num_base_bdevs_discovered": 1, 00:37:44.269 "num_base_bdevs_operational": 1, 00:37:44.269 "base_bdevs_list": [ 00:37:44.269 { 00:37:44.269 "name": null, 00:37:44.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:44.269 "is_configured": false, 00:37:44.269 "data_offset": 256, 00:37:44.269 "data_size": 7936 00:37:44.269 }, 00:37:44.269 { 00:37:44.269 "name": "BaseBdev2", 00:37:44.269 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:44.269 "is_configured": true, 00:37:44.269 "data_offset": 256, 00:37:44.269 "data_size": 7936 00:37:44.269 } 00:37:44.269 ] 00:37:44.269 }' 00:37:44.269 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:44.269 11:49:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:44.833 11:49:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:45.091 [2024-07-13 11:49:19.769006] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:45.091 [2024-07-13 11:49:19.779847] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ffd0 00:37:45.091 [2024-07-13 11:49:19.781885] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:45.091 11:49:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:37:46.463 11:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:46.463 11:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:46.463 11:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:46.463 11:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:46.463 11:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:46.463 11:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:46.463 11:49:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:46.463 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:46.463 "name": "raid_bdev1", 00:37:46.463 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:46.463 "strip_size_kb": 0, 00:37:46.463 "state": "online", 00:37:46.463 "raid_level": "raid1", 00:37:46.463 "superblock": true, 00:37:46.463 "num_base_bdevs": 2, 00:37:46.463 "num_base_bdevs_discovered": 2, 00:37:46.463 "num_base_bdevs_operational": 2, 00:37:46.463 "process": { 00:37:46.463 "type": "rebuild", 00:37:46.463 "target": "spare", 00:37:46.463 "progress": { 00:37:46.463 "blocks": 3072, 00:37:46.463 "percent": 38 00:37:46.463 } 00:37:46.463 }, 00:37:46.463 "base_bdevs_list": [ 00:37:46.463 { 00:37:46.463 "name": "spare", 00:37:46.463 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:37:46.463 "is_configured": true, 00:37:46.463 "data_offset": 256, 00:37:46.463 "data_size": 7936 00:37:46.463 }, 00:37:46.463 { 00:37:46.463 "name": "BaseBdev2", 00:37:46.463 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:46.463 "is_configured": true, 00:37:46.463 "data_offset": 256, 00:37:46.463 "data_size": 7936 00:37:46.463 } 00:37:46.463 ] 00:37:46.463 }' 00:37:46.463 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:46.463 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:46.463 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:46.463 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:46.463 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:46.721 [2024-07-13 11:49:21.356753] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:46.721 [2024-07-13 11:49:21.392196] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:46.721 [2024-07-13 11:49:21.392402] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:46.721 [2024-07-13 11:49:21.392522] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:46.721 [2024-07-13 11:49:21.392621] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:46.721 11:49:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:46.721 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:46.980 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:46.980 "name": "raid_bdev1", 00:37:46.980 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:46.980 "strip_size_kb": 0, 00:37:46.980 "state": "online", 00:37:46.980 "raid_level": "raid1", 00:37:46.980 "superblock": true, 00:37:46.980 "num_base_bdevs": 2, 00:37:46.980 "num_base_bdevs_discovered": 1, 00:37:46.980 "num_base_bdevs_operational": 1, 00:37:46.980 "base_bdevs_list": [ 00:37:46.980 { 00:37:46.980 "name": null, 00:37:46.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:46.980 "is_configured": false, 00:37:46.980 "data_offset": 256, 00:37:46.980 "data_size": 7936 00:37:46.980 }, 00:37:46.980 { 00:37:46.980 "name": "BaseBdev2", 00:37:46.980 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:46.980 "is_configured": true, 00:37:46.980 "data_offset": 256, 00:37:46.980 "data_size": 7936 00:37:46.980 } 00:37:46.980 ] 00:37:46.980 }' 00:37:46.980 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:46.980 11:49:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:47.546 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:47.546 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:47.546 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:47.546 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:47.546 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:47.546 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:47.546 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:47.805 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:47.805 "name": "raid_bdev1", 00:37:47.805 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:47.805 "strip_size_kb": 0, 00:37:47.805 "state": "online", 00:37:47.805 "raid_level": "raid1", 00:37:47.805 "superblock": true, 00:37:47.805 "num_base_bdevs": 2, 00:37:47.805 "num_base_bdevs_discovered": 1, 00:37:47.805 "num_base_bdevs_operational": 1, 00:37:47.805 "base_bdevs_list": [ 00:37:47.805 { 00:37:47.805 "name": null, 00:37:47.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:47.805 "is_configured": false, 
00:37:47.805 "data_offset": 256, 00:37:47.805 "data_size": 7936 00:37:47.805 }, 00:37:47.805 { 00:37:47.805 "name": "BaseBdev2", 00:37:47.805 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:47.805 "is_configured": true, 00:37:47.805 "data_offset": 256, 00:37:47.805 "data_size": 7936 00:37:47.805 } 00:37:47.805 ] 00:37:47.805 }' 00:37:47.805 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:47.805 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:47.805 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:47.805 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:47.805 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:48.063 [2024-07-13 11:49:22.706653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:48.063 [2024-07-13 11:49:22.716496] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d0c0 00:37:48.063 [2024-07-13 11:49:22.718497] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:48.063 11:49:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:49.009 11:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:49.009 11:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:49.009 11:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:49.009 11:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:49.009 11:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:49.009 11:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:49.009 11:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:49.267 11:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:49.267 "name": "raid_bdev1", 00:37:49.267 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:49.267 "strip_size_kb": 0, 00:37:49.267 "state": "online", 00:37:49.267 "raid_level": "raid1", 00:37:49.267 "superblock": true, 00:37:49.267 "num_base_bdevs": 2, 00:37:49.267 "num_base_bdevs_discovered": 2, 00:37:49.267 "num_base_bdevs_operational": 2, 00:37:49.267 "process": { 00:37:49.267 "type": "rebuild", 00:37:49.267 "target": "spare", 00:37:49.267 "progress": { 00:37:49.267 "blocks": 3072, 00:37:49.267 "percent": 38 00:37:49.267 } 00:37:49.267 }, 00:37:49.267 "base_bdevs_list": [ 00:37:49.267 { 00:37:49.267 "name": "spare", 00:37:49.267 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:37:49.267 "is_configured": true, 00:37:49.267 "data_offset": 256, 00:37:49.267 "data_size": 7936 00:37:49.267 }, 00:37:49.267 { 00:37:49.267 "name": "BaseBdev2", 00:37:49.267 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:49.267 "is_configured": true, 
00:37:49.267 "data_offset": 256, 00:37:49.267 "data_size": 7936 00:37:49.267 } 00:37:49.267 ] 00:37:49.267 }' 00:37:49.267 11:49:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:37:49.525 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1415 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:49.525 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:49.526 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:49.526 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:49.526 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:49.526 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:49.526 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:49.526 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:49.784 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:49.784 "name": "raid_bdev1", 00:37:49.784 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:49.784 "strip_size_kb": 0, 00:37:49.784 "state": "online", 00:37:49.784 "raid_level": "raid1", 00:37:49.784 "superblock": true, 00:37:49.784 "num_base_bdevs": 2, 00:37:49.784 "num_base_bdevs_discovered": 2, 00:37:49.784 "num_base_bdevs_operational": 2, 00:37:49.784 "process": { 00:37:49.784 "type": "rebuild", 00:37:49.784 "target": "spare", 00:37:49.784 "progress": { 00:37:49.784 "blocks": 3840, 00:37:49.784 "percent": 48 00:37:49.784 } 00:37:49.784 }, 00:37:49.784 "base_bdevs_list": [ 00:37:49.784 { 00:37:49.784 "name": "spare", 00:37:49.784 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:37:49.784 "is_configured": true, 00:37:49.784 "data_offset": 256, 00:37:49.784 "data_size": 7936 00:37:49.784 }, 00:37:49.784 { 00:37:49.784 "name": "BaseBdev2", 00:37:49.784 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:49.784 
"is_configured": true, 00:37:49.784 "data_offset": 256, 00:37:49.784 "data_size": 7936 00:37:49.784 } 00:37:49.784 ] 00:37:49.784 }' 00:37:49.784 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:49.784 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:49.784 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:49.784 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:49.784 11:49:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:50.719 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:50.719 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:50.719 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:50.719 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:50.719 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:50.719 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:50.719 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:50.719 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:50.977 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:50.977 "name": "raid_bdev1", 00:37:50.977 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:50.977 "strip_size_kb": 0, 00:37:50.977 "state": "online", 00:37:50.977 "raid_level": "raid1", 00:37:50.977 "superblock": true, 00:37:50.977 "num_base_bdevs": 2, 00:37:50.977 "num_base_bdevs_discovered": 2, 00:37:50.977 "num_base_bdevs_operational": 2, 00:37:50.977 "process": { 00:37:50.977 "type": "rebuild", 00:37:50.977 "target": "spare", 00:37:50.977 "progress": { 00:37:50.977 "blocks": 7424, 00:37:50.977 "percent": 93 00:37:50.977 } 00:37:50.977 }, 00:37:50.977 "base_bdevs_list": [ 00:37:50.977 { 00:37:50.977 "name": "spare", 00:37:50.977 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:37:50.977 "is_configured": true, 00:37:50.977 "data_offset": 256, 00:37:50.977 "data_size": 7936 00:37:50.977 }, 00:37:50.977 { 00:37:50.977 "name": "BaseBdev2", 00:37:50.977 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:50.977 "is_configured": true, 00:37:50.977 "data_offset": 256, 00:37:50.977 "data_size": 7936 00:37:50.977 } 00:37:50.977 ] 00:37:50.977 }' 00:37:50.977 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:51.237 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:51.237 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:51.237 11:49:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:51.237 11:49:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:51.237 [2024-07-13 11:49:25.837728] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:51.237 [2024-07-13 11:49:25.837930] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:51.237 [2024-07-13 11:49:25.838174] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:52.176 11:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:52.176 11:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:52.176 11:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:52.176 11:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:52.176 11:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:52.176 11:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:52.176 11:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:52.176 11:49:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:52.435 "name": "raid_bdev1", 00:37:52.435 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:52.435 "strip_size_kb": 0, 00:37:52.435 "state": "online", 00:37:52.435 "raid_level": "raid1", 00:37:52.435 "superblock": true, 00:37:52.435 "num_base_bdevs": 2, 00:37:52.435 "num_base_bdevs_discovered": 2, 00:37:52.435 "num_base_bdevs_operational": 2, 00:37:52.435 "base_bdevs_list": [ 00:37:52.435 { 00:37:52.435 "name": "spare", 00:37:52.435 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:37:52.435 "is_configured": true, 00:37:52.435 "data_offset": 256, 00:37:52.435 "data_size": 7936 00:37:52.435 }, 00:37:52.435 { 00:37:52.435 "name": "BaseBdev2", 00:37:52.435 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:52.435 "is_configured": true, 00:37:52.435 "data_offset": 256, 00:37:52.435 "data_size": 7936 00:37:52.435 } 00:37:52.435 ] 00:37:52.435 }' 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:52.435 11:49:27 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:52.435 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:52.694 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:52.694 "name": "raid_bdev1", 00:37:52.694 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:52.694 "strip_size_kb": 0, 00:37:52.694 "state": "online", 00:37:52.694 "raid_level": "raid1", 00:37:52.694 "superblock": true, 00:37:52.694 "num_base_bdevs": 2, 00:37:52.694 "num_base_bdevs_discovered": 2, 00:37:52.694 "num_base_bdevs_operational": 2, 00:37:52.694 "base_bdevs_list": [ 00:37:52.694 { 00:37:52.694 "name": "spare", 00:37:52.694 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:37:52.694 "is_configured": true, 00:37:52.694 "data_offset": 256, 00:37:52.694 "data_size": 7936 00:37:52.694 }, 00:37:52.694 { 00:37:52.694 "name": "BaseBdev2", 00:37:52.694 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:52.694 "is_configured": true, 00:37:52.694 "data_offset": 256, 00:37:52.694 "data_size": 7936 00:37:52.694 } 00:37:52.694 ] 00:37:52.694 }' 00:37:52.694 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:52.694 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:52.694 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:52.953 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:37:53.212 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:53.212 "name": "raid_bdev1", 00:37:53.212 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:53.212 "strip_size_kb": 0, 00:37:53.212 "state": "online", 00:37:53.212 "raid_level": "raid1", 00:37:53.212 "superblock": true, 00:37:53.212 "num_base_bdevs": 2, 00:37:53.212 "num_base_bdevs_discovered": 2, 00:37:53.212 "num_base_bdevs_operational": 2, 00:37:53.212 "base_bdevs_list": [ 00:37:53.212 { 00:37:53.212 "name": "spare", 00:37:53.212 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:37:53.212 "is_configured": true, 00:37:53.212 "data_offset": 256, 00:37:53.212 "data_size": 7936 00:37:53.212 }, 00:37:53.212 { 00:37:53.212 "name": "BaseBdev2", 00:37:53.212 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:53.212 "is_configured": true, 00:37:53.212 "data_offset": 256, 00:37:53.212 "data_size": 7936 00:37:53.212 } 00:37:53.212 ] 00:37:53.212 }' 00:37:53.212 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:53.212 11:49:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:53.780 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:54.038 [2024-07-13 11:49:28.720847] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:54.038 [2024-07-13 11:49:28.720989] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:54.038 [2024-07-13 11:49:28.721189] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:54.038 [2024-07-13 11:49:28.721355] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:54.038 [2024-07-13 11:49:28.721462] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:37:54.038 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:54.038 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@12 -- # local i 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:54.298 11:49:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:54.562 /dev/nbd0 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:54.562 1+0 records in 00:37:54.562 1+0 records out 00:37:54.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000534892 s, 7.7 MB/s 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:54.562 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:37:54.821 /dev/nbd1 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:37:54.821 11:49:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:54.821 1+0 records in 00:37:54.821 1+0 records out 00:37:54.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519571 s, 7.9 MB/s 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:37:54.821 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:55.093 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:37:55.374 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:55.374 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:55.374 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:55.374 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:55.374 11:49:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:55.374 11:49:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:55.374 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:37:55.374 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:37:55.374 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:55.374 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:55.374 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:37:55.374 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:37:55.374 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:55.374 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:37:55.639 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:55.639 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:55.639 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:37:55.639 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:55.639 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:55.639 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:55.639 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:37:55.898 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:37:55.898 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:55.898 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:55.898 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:37:55.898 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:37:55.898 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:37:55.898 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:55.898 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:56.157 [2024-07-13 11:49:30.903677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:56.157 [2024-07-13 11:49:30.903885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:56.157 [2024-07-13 11:49:30.903979] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:56.157 [2024-07-13 11:49:30.904223] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:56.157 [2024-07-13 11:49:30.906405] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:56.157 
[2024-07-13 11:49:30.906584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:56.157 [2024-07-13 11:49:30.906889] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:56.157 [2024-07-13 11:49:30.907046] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:56.157 [2024-07-13 11:49:30.907346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:56.157 spare 00:37:56.415 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:56.415 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:56.415 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:56.415 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:56.415 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:56.415 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:56.416 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:56.416 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:56.416 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:56.416 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:56.416 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:56.416 11:49:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:56.416 [2024-07-13 11:49:31.007549] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:37:56.416 [2024-07-13 11:49:31.007696] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:56.416 [2024-07-13 11:49:31.007855] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1a80 00:37:56.416 [2024-07-13 11:49:31.008122] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:37:56.416 [2024-07-13 11:49:31.008210] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:37:56.416 [2024-07-13 11:49:31.008404] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:56.416 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:56.416 "name": "raid_bdev1", 00:37:56.416 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:56.416 "strip_size_kb": 0, 00:37:56.416 "state": "online", 00:37:56.416 "raid_level": "raid1", 00:37:56.416 "superblock": true, 00:37:56.416 "num_base_bdevs": 2, 00:37:56.416 "num_base_bdevs_discovered": 2, 00:37:56.416 "num_base_bdevs_operational": 2, 00:37:56.416 "base_bdevs_list": [ 00:37:56.416 { 00:37:56.416 "name": "spare", 00:37:56.416 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:37:56.416 "is_configured": true, 00:37:56.416 "data_offset": 256, 00:37:56.416 "data_size": 7936 00:37:56.416 
}, 00:37:56.416 { 00:37:56.416 "name": "BaseBdev2", 00:37:56.416 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:56.416 "is_configured": true, 00:37:56.416 "data_offset": 256, 00:37:56.416 "data_size": 7936 00:37:56.416 } 00:37:56.416 ] 00:37:56.416 }' 00:37:56.416 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:56.416 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:56.982 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:56.982 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:56.982 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:56.982 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:56.982 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:56.982 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:57.241 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:57.241 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:57.241 "name": "raid_bdev1", 00:37:57.241 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:57.241 "strip_size_kb": 0, 00:37:57.241 "state": "online", 00:37:57.241 "raid_level": "raid1", 00:37:57.241 "superblock": true, 00:37:57.241 "num_base_bdevs": 2, 00:37:57.241 "num_base_bdevs_discovered": 2, 00:37:57.241 "num_base_bdevs_operational": 2, 00:37:57.241 "base_bdevs_list": [ 00:37:57.241 { 00:37:57.241 "name": "spare", 00:37:57.241 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:37:57.241 "is_configured": true, 00:37:57.241 "data_offset": 256, 00:37:57.241 "data_size": 7936 00:37:57.241 }, 00:37:57.241 { 00:37:57.241 "name": "BaseBdev2", 00:37:57.241 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:57.241 "is_configured": true, 00:37:57.241 "data_offset": 256, 00:37:57.241 "data_size": 7936 00:37:57.241 } 00:37:57.241 ] 00:37:57.241 }' 00:37:57.241 11:49:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:57.499 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:57.499 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:57.499 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:57.499 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:57.499 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:57.758 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:37:57.758 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:58.017 
[2024-07-13 11:49:32.540731] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:58.017 "name": "raid_bdev1", 00:37:58.017 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:37:58.017 "strip_size_kb": 0, 00:37:58.017 "state": "online", 00:37:58.017 "raid_level": "raid1", 00:37:58.017 "superblock": true, 00:37:58.017 "num_base_bdevs": 2, 00:37:58.017 "num_base_bdevs_discovered": 1, 00:37:58.017 "num_base_bdevs_operational": 1, 00:37:58.017 "base_bdevs_list": [ 00:37:58.017 { 00:37:58.017 "name": null, 00:37:58.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:58.017 "is_configured": false, 00:37:58.017 "data_offset": 256, 00:37:58.017 "data_size": 7936 00:37:58.017 }, 00:37:58.017 { 00:37:58.017 "name": "BaseBdev2", 00:37:58.017 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:37:58.017 "is_configured": true, 00:37:58.017 "data_offset": 256, 00:37:58.017 "data_size": 7936 00:37:58.017 } 00:37:58.017 ] 00:37:58.017 }' 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:58.017 11:49:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:58.965 11:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:58.965 [2024-07-13 11:49:33.688925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:58.965 [2024-07-13 11:49:33.689182] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:58.965 [2024-07-13 11:49:33.689311] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:58.965 [2024-07-13 11:49:33.689394] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:58.965 [2024-07-13 11:49:33.699626] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:37:58.965 [2024-07-13 11:49:33.701602] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:58.965 11:49:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:38:00.338 11:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:00.338 11:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:00.338 11:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:00.338 11:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:00.338 11:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:00.338 11:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:00.338 11:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:00.338 11:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:00.338 "name": "raid_bdev1", 00:38:00.338 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:38:00.338 "strip_size_kb": 0, 00:38:00.338 "state": "online", 00:38:00.338 "raid_level": "raid1", 00:38:00.338 "superblock": true, 00:38:00.338 "num_base_bdevs": 2, 00:38:00.338 "num_base_bdevs_discovered": 2, 00:38:00.338 "num_base_bdevs_operational": 2, 00:38:00.338 "process": { 00:38:00.338 "type": "rebuild", 00:38:00.338 "target": "spare", 00:38:00.338 "progress": { 00:38:00.338 "blocks": 3072, 00:38:00.338 "percent": 38 00:38:00.338 } 00:38:00.338 }, 00:38:00.338 "base_bdevs_list": [ 00:38:00.338 { 00:38:00.338 "name": "spare", 00:38:00.338 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:38:00.338 "is_configured": true, 00:38:00.338 "data_offset": 256, 00:38:00.338 "data_size": 7936 00:38:00.338 }, 00:38:00.338 { 00:38:00.338 "name": "BaseBdev2", 00:38:00.338 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:38:00.338 "is_configured": true, 00:38:00.338 "data_offset": 256, 00:38:00.338 "data_size": 7936 00:38:00.338 } 00:38:00.338 ] 00:38:00.338 }' 00:38:00.338 11:49:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:00.338 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:00.338 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:00.596 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:00.596 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:00.596 [2024-07-13 11:49:35.324374] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:00.854 [2024-07-13 11:49:35.412340] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:38:00.854 [2024-07-13 11:49:35.412528] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:00.854 [2024-07-13 11:49:35.412578] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:00.854 [2024-07-13 11:49:35.412695] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:00.854 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:01.112 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:01.112 "name": "raid_bdev1", 00:38:01.112 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:38:01.112 "strip_size_kb": 0, 00:38:01.112 "state": "online", 00:38:01.112 "raid_level": "raid1", 00:38:01.112 "superblock": true, 00:38:01.112 "num_base_bdevs": 2, 00:38:01.112 "num_base_bdevs_discovered": 1, 00:38:01.112 "num_base_bdevs_operational": 1, 00:38:01.112 "base_bdevs_list": [ 00:38:01.112 { 00:38:01.112 "name": null, 00:38:01.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:01.112 "is_configured": false, 00:38:01.112 "data_offset": 256, 00:38:01.112 "data_size": 7936 00:38:01.112 }, 00:38:01.112 { 00:38:01.112 "name": "BaseBdev2", 00:38:01.112 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:38:01.112 "is_configured": true, 00:38:01.112 "data_offset": 256, 00:38:01.112 "data_size": 7936 00:38:01.112 } 00:38:01.112 ] 00:38:01.112 }' 00:38:01.112 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:01.112 11:49:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:01.677 11:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:01.935 [2024-07-13 11:49:36.506898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:01.935 [2024-07-13 11:49:36.507086] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:01.935 [2024-07-13 11:49:36.507153] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:38:01.935 [2024-07-13 11:49:36.507444] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:01.935 [2024-07-13 11:49:36.507815] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:01.935 [2024-07-13 11:49:36.507985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:01.935 [2024-07-13 11:49:36.508183] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:01.935 [2024-07-13 11:49:36.508314] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:01.935 [2024-07-13 11:49:36.508404] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:01.935 [2024-07-13 11:49:36.508579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:01.935 [2024-07-13 11:49:36.518313] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:38:01.935 spare 00:38:01.935 [2024-07-13 11:49:36.520094] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:01.935 11:49:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:38:02.867 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:02.867 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:02.867 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:02.867 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:02.867 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:02.867 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:02.867 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:03.125 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:03.125 "name": "raid_bdev1", 00:38:03.125 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:38:03.125 "strip_size_kb": 0, 00:38:03.125 "state": "online", 00:38:03.125 "raid_level": "raid1", 00:38:03.125 "superblock": true, 00:38:03.125 "num_base_bdevs": 2, 00:38:03.125 "num_base_bdevs_discovered": 2, 00:38:03.125 "num_base_bdevs_operational": 2, 00:38:03.125 "process": { 00:38:03.125 "type": "rebuild", 00:38:03.125 "target": "spare", 00:38:03.125 "progress": { 00:38:03.125 "blocks": 3072, 00:38:03.125 "percent": 38 00:38:03.125 } 00:38:03.125 }, 00:38:03.125 "base_bdevs_list": [ 00:38:03.125 { 00:38:03.125 "name": "spare", 00:38:03.125 "uuid": "3fd62d77-2d6b-54cd-bed9-e00af43682a0", 00:38:03.125 "is_configured": true, 00:38:03.125 "data_offset": 256, 00:38:03.125 "data_size": 7936 00:38:03.125 }, 00:38:03.125 { 00:38:03.125 "name": "BaseBdev2", 00:38:03.125 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:38:03.125 "is_configured": true, 00:38:03.125 
"data_offset": 256, 00:38:03.125 "data_size": 7936 00:38:03.125 } 00:38:03.125 ] 00:38:03.125 }' 00:38:03.125 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:03.125 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:03.125 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:03.384 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:03.384 11:49:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:03.642 [2024-07-13 11:49:38.167577] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:03.642 [2024-07-13 11:49:38.230257] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:03.642 [2024-07-13 11:49:38.230446] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:03.642 [2024-07-13 11:49:38.230495] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:03.642 [2024-07-13 11:49:38.230636] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:03.642 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:03.900 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:03.900 "name": "raid_bdev1", 00:38:03.900 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:38:03.900 "strip_size_kb": 0, 00:38:03.900 "state": "online", 00:38:03.900 "raid_level": "raid1", 00:38:03.900 "superblock": true, 00:38:03.900 "num_base_bdevs": 2, 00:38:03.900 "num_base_bdevs_discovered": 1, 00:38:03.900 "num_base_bdevs_operational": 1, 00:38:03.900 "base_bdevs_list": [ 00:38:03.900 { 00:38:03.900 "name": null, 00:38:03.900 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:38:03.900 "is_configured": false, 00:38:03.900 "data_offset": 256, 00:38:03.900 "data_size": 7936 00:38:03.900 }, 00:38:03.900 { 00:38:03.900 "name": "BaseBdev2", 00:38:03.900 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:38:03.900 "is_configured": true, 00:38:03.900 "data_offset": 256, 00:38:03.900 "data_size": 7936 00:38:03.900 } 00:38:03.900 ] 00:38:03.900 }' 00:38:03.900 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:03.900 11:49:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:04.466 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:04.466 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:04.466 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:04.466 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:04.466 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:04.466 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:04.466 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:04.724 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:04.724 "name": "raid_bdev1", 00:38:04.724 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:38:04.724 "strip_size_kb": 0, 00:38:04.724 "state": "online", 00:38:04.724 "raid_level": "raid1", 00:38:04.724 "superblock": true, 00:38:04.724 "num_base_bdevs": 2, 00:38:04.724 "num_base_bdevs_discovered": 1, 00:38:04.724 "num_base_bdevs_operational": 1, 00:38:04.724 "base_bdevs_list": [ 00:38:04.724 { 00:38:04.724 "name": null, 00:38:04.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:04.724 "is_configured": false, 00:38:04.724 "data_offset": 256, 00:38:04.724 "data_size": 7936 00:38:04.724 }, 00:38:04.724 { 00:38:04.724 "name": "BaseBdev2", 00:38:04.724 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:38:04.724 "is_configured": true, 00:38:04.724 "data_offset": 256, 00:38:04.724 "data_size": 7936 00:38:04.724 } 00:38:04.724 ] 00:38:04.724 }' 00:38:04.724 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:04.724 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:04.724 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:04.982 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:04.982 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:38:05.240 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:05.240 [2024-07-13 11:49:39.924455] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:38:05.240 [2024-07-13 11:49:39.924653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:05.240 [2024-07-13 11:49:39.924725] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:38:05.240 [2024-07-13 11:49:39.925015] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:05.240 [2024-07-13 11:49:39.925287] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:05.240 [2024-07-13 11:49:39.925451] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:05.240 [2024-07-13 11:49:39.925667] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:38:05.240 [2024-07-13 11:49:39.925778] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:05.241 [2024-07-13 11:49:39.925871] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:05.241 BaseBdev1 00:38:05.241 11:49:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:06.616 11:49:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:06.616 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:06.616 "name": "raid_bdev1", 00:38:06.616 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:38:06.616 "strip_size_kb": 0, 00:38:06.616 "state": "online", 00:38:06.616 "raid_level": "raid1", 00:38:06.616 "superblock": true, 00:38:06.616 "num_base_bdevs": 2, 00:38:06.616 "num_base_bdevs_discovered": 1, 00:38:06.616 "num_base_bdevs_operational": 1, 00:38:06.616 "base_bdevs_list": [ 00:38:06.616 { 00:38:06.616 "name": null, 00:38:06.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:06.616 "is_configured": false, 00:38:06.616 "data_offset": 256, 00:38:06.616 "data_size": 7936 00:38:06.616 }, 00:38:06.616 { 00:38:06.616 "name": 
"BaseBdev2", 00:38:06.616 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:38:06.616 "is_configured": true, 00:38:06.616 "data_offset": 256, 00:38:06.616 "data_size": 7936 00:38:06.616 } 00:38:06.616 ] 00:38:06.616 }' 00:38:06.616 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:06.616 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:07.183 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:07.183 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:07.183 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:07.183 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:07.183 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:07.183 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:07.183 11:49:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:07.442 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:07.442 "name": "raid_bdev1", 00:38:07.442 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:38:07.442 "strip_size_kb": 0, 00:38:07.442 "state": "online", 00:38:07.442 "raid_level": "raid1", 00:38:07.442 "superblock": true, 00:38:07.442 "num_base_bdevs": 2, 00:38:07.442 "num_base_bdevs_discovered": 1, 00:38:07.442 "num_base_bdevs_operational": 1, 00:38:07.442 "base_bdevs_list": [ 00:38:07.442 { 00:38:07.442 "name": null, 00:38:07.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:07.442 "is_configured": false, 00:38:07.442 "data_offset": 256, 00:38:07.442 "data_size": 7936 00:38:07.442 }, 00:38:07.442 { 00:38:07.442 "name": "BaseBdev2", 00:38:07.442 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:38:07.442 "is_configured": true, 00:38:07.442 "data_offset": 256, 00:38:07.442 "data_size": 7936 00:38:07.442 } 00:38:07.442 ] 00:38:07.442 }' 00:38:07.442 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:07.442 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:07.442 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:38:07.701 [2024-07-13 11:49:42.396870] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:07.701 [2024-07-13 11:49:42.397093] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:38:07.701 [2024-07-13 11:49:42.397230] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:38:07.701 request: 00:38:07.701 { 00:38:07.701 "base_bdev": "BaseBdev1", 00:38:07.701 "raid_bdev": "raid_bdev1", 00:38:07.701 "method": "bdev_raid_add_base_bdev", 00:38:07.701 "req_id": 1 00:38:07.701 } 00:38:07.701 Got JSON-RPC error response 00:38:07.701 response: 00:38:07.701 { 00:38:07.701 "code": -22, 00:38:07.701 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:38:07.701 } 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:07.701 11:49:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:09.075 "name": "raid_bdev1", 00:38:09.075 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:38:09.075 "strip_size_kb": 0, 00:38:09.075 "state": "online", 00:38:09.075 "raid_level": "raid1", 00:38:09.075 "superblock": true, 00:38:09.075 "num_base_bdevs": 2, 00:38:09.075 "num_base_bdevs_discovered": 1, 00:38:09.075 "num_base_bdevs_operational": 1, 00:38:09.075 "base_bdevs_list": [ 00:38:09.075 { 00:38:09.075 "name": null, 00:38:09.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:09.075 "is_configured": false, 00:38:09.075 "data_offset": 256, 00:38:09.075 "data_size": 7936 00:38:09.075 }, 00:38:09.075 { 00:38:09.075 "name": "BaseBdev2", 00:38:09.075 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:38:09.075 "is_configured": true, 00:38:09.075 "data_offset": 256, 00:38:09.075 "data_size": 7936 00:38:09.075 } 00:38:09.075 ] 00:38:09.075 }' 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:09.075 11:49:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:09.641 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:09.641 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:09.641 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:09.641 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:09.641 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:09.641 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:09.641 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:09.899 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:09.899 "name": "raid_bdev1", 00:38:09.899 "uuid": "0d167396-cfce-4b1a-81f9-fde78d7979eb", 00:38:09.900 "strip_size_kb": 0, 00:38:09.900 "state": "online", 00:38:09.900 "raid_level": "raid1", 00:38:09.900 "superblock": true, 00:38:09.900 "num_base_bdevs": 2, 00:38:09.900 "num_base_bdevs_discovered": 1, 00:38:09.900 "num_base_bdevs_operational": 1, 00:38:09.900 "base_bdevs_list": [ 00:38:09.900 { 00:38:09.900 "name": null, 00:38:09.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:09.900 "is_configured": false, 00:38:09.900 "data_offset": 256, 00:38:09.900 "data_size": 7936 
00:38:09.900 }, 00:38:09.900 { 00:38:09.900 "name": "BaseBdev2", 00:38:09.900 "uuid": "1e7757f2-b667-5d94-81e4-f0782071ba30", 00:38:09.900 "is_configured": true, 00:38:09.900 "data_offset": 256, 00:38:09.900 "data_size": 7936 00:38:09.900 } 00:38:09.900 ] 00:38:09.900 }' 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 163103 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 163103 ']' 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 163103 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 163103 00:38:09.900 killing process with pid 163103 00:38:09.900 Received shutdown signal, test time was about 60.000000 seconds 00:38:09.900 00:38:09.900 Latency(us) 00:38:09.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:09.900 =================================================================================================================== 00:38:09.900 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 163103' 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 163103 00:38:09.900 11:49:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 163103 00:38:09.900 [2024-07-13 11:49:44.634552] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:09.900 [2024-07-13 11:49:44.634648] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:09.900 [2024-07-13 11:49:44.634693] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:09.900 [2024-07-13 11:49:44.634740] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:38:10.158 [2024-07-13 11:49:44.852200] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:11.534 ************************************ 00:38:11.534 END TEST raid_rebuild_test_sb_md_separate 00:38:11.534 ************************************ 00:38:11.534 11:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:38:11.534 00:38:11.534 real 0m32.940s 00:38:11.534 user 0m52.965s 00:38:11.534 sys 0m3.291s 
00:38:11.534 11:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:11.534 11:49:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:38:11.534 11:49:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:38:11.534 11:49:45 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:38:11.534 11:49:45 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:38:11.534 11:49:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:38:11.534 11:49:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:11.534 11:49:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:11.534 ************************************ 00:38:11.534 START TEST raid_state_function_test_sb_md_interleaved 00:38:11.534 ************************************ 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 
'!=' raid1 ']' 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=164043 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 164043' 00:38:11.534 Process raid pid: 164043 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 164043 /var/tmp/spdk-raid.sock 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 164043 ']' 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:11.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:11.534 11:49:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:11.534 [2024-07-13 11:49:46.007077] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:38:11.534 [2024-07-13 11:49:46.007501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:11.534 [2024-07-13 11:49:46.184204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.792 [2024-07-13 11:49:46.369401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.050 [2024-07-13 11:49:46.557561] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:12.308 11:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:12.308 11:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:38:12.308 11:49:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:12.566 [2024-07-13 11:49:47.151428] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:12.567 [2024-07-13 11:49:47.151746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:12.567 [2024-07-13 11:49:47.151848] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:12.567 [2024-07-13 11:49:47.151976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:12.567 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:12.824 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:12.824 "name": "Existed_Raid", 00:38:12.824 "uuid": "ad569c28-095b-42c9-b8bf-c7a5c0370b08", 
00:38:12.824 "strip_size_kb": 0, 00:38:12.824 "state": "configuring", 00:38:12.824 "raid_level": "raid1", 00:38:12.824 "superblock": true, 00:38:12.824 "num_base_bdevs": 2, 00:38:12.824 "num_base_bdevs_discovered": 0, 00:38:12.824 "num_base_bdevs_operational": 2, 00:38:12.824 "base_bdevs_list": [ 00:38:12.824 { 00:38:12.824 "name": "BaseBdev1", 00:38:12.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:12.824 "is_configured": false, 00:38:12.824 "data_offset": 0, 00:38:12.824 "data_size": 0 00:38:12.824 }, 00:38:12.824 { 00:38:12.824 "name": "BaseBdev2", 00:38:12.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:12.824 "is_configured": false, 00:38:12.825 "data_offset": 0, 00:38:12.825 "data_size": 0 00:38:12.825 } 00:38:12.825 ] 00:38:12.825 }' 00:38:12.825 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:12.825 11:49:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:13.391 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:13.649 [2024-07-13 11:49:48.191489] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:13.649 [2024-07-13 11:49:48.191638] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:38:13.649 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:13.907 [2024-07-13 11:49:48.475567] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:38:13.908 [2024-07-13 11:49:48.475742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:38:13.908 [2024-07-13 11:49:48.475834] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:13.908 [2024-07-13 11:49:48.475955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:13.908 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:38:14.166 [2024-07-13 11:49:48.752096] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:14.166 BaseBdev1 00:38:14.166 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:38:14.166 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:38:14.166 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:14.166 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:38:14.166 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:14.166 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:14.166 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:14.425 11:49:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:38:14.425 [ 00:38:14.425 { 00:38:14.425 "name": "BaseBdev1", 00:38:14.425 "aliases": [ 00:38:14.425 "79254ae6-16a6-49aa-99b7-7134f92a90c8" 00:38:14.425 ], 00:38:14.425 "product_name": "Malloc disk", 00:38:14.425 "block_size": 4128, 00:38:14.425 "num_blocks": 8192, 00:38:14.425 "uuid": "79254ae6-16a6-49aa-99b7-7134f92a90c8", 00:38:14.425 "md_size": 32, 00:38:14.425 "md_interleave": true, 00:38:14.425 "dif_type": 0, 00:38:14.425 "assigned_rate_limits": { 00:38:14.425 "rw_ios_per_sec": 0, 00:38:14.425 "rw_mbytes_per_sec": 0, 00:38:14.425 "r_mbytes_per_sec": 0, 00:38:14.425 "w_mbytes_per_sec": 0 00:38:14.425 }, 00:38:14.425 "claimed": true, 00:38:14.425 "claim_type": "exclusive_write", 00:38:14.425 "zoned": false, 00:38:14.425 "supported_io_types": { 00:38:14.425 "read": true, 00:38:14.425 "write": true, 00:38:14.425 "unmap": true, 00:38:14.425 "flush": true, 00:38:14.425 "reset": true, 00:38:14.425 "nvme_admin": false, 00:38:14.425 "nvme_io": false, 00:38:14.425 "nvme_io_md": false, 00:38:14.425 "write_zeroes": true, 00:38:14.425 "zcopy": true, 00:38:14.425 "get_zone_info": false, 00:38:14.425 "zone_management": false, 00:38:14.425 "zone_append": false, 00:38:14.425 "compare": false, 00:38:14.425 "compare_and_write": false, 00:38:14.425 "abort": true, 00:38:14.425 "seek_hole": false, 00:38:14.425 "seek_data": false, 00:38:14.425 "copy": true, 00:38:14.425 "nvme_iov_md": false 00:38:14.425 }, 00:38:14.425 "memory_domains": [ 00:38:14.425 { 00:38:14.425 "dma_device_id": "system", 00:38:14.425 "dma_device_type": 1 00:38:14.425 }, 00:38:14.425 { 00:38:14.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:14.425 "dma_device_type": 2 00:38:14.425 } 00:38:14.425 ], 00:38:14.425 "driver_specific": {} 00:38:14.425 } 00:38:14.425 ] 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:14.425 11:49:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:14.425 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:14.684 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:14.684 "name": "Existed_Raid", 00:38:14.684 "uuid": "1399705d-0efc-4851-a0c7-78223d5c504c", 00:38:14.684 "strip_size_kb": 0, 00:38:14.684 "state": "configuring", 00:38:14.684 "raid_level": "raid1", 00:38:14.684 "superblock": true, 00:38:14.684 "num_base_bdevs": 2, 00:38:14.684 "num_base_bdevs_discovered": 1, 00:38:14.684 "num_base_bdevs_operational": 2, 00:38:14.684 "base_bdevs_list": [ 00:38:14.684 { 00:38:14.684 "name": "BaseBdev1", 00:38:14.684 "uuid": "79254ae6-16a6-49aa-99b7-7134f92a90c8", 00:38:14.684 "is_configured": true, 00:38:14.684 "data_offset": 256, 00:38:14.684 "data_size": 7936 00:38:14.684 }, 00:38:14.684 { 00:38:14.684 "name": "BaseBdev2", 00:38:14.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:14.684 "is_configured": false, 00:38:14.684 "data_offset": 0, 00:38:14.684 "data_size": 0 00:38:14.684 } 00:38:14.684 ] 00:38:14.684 }' 00:38:14.684 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:14.684 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:15.252 11:49:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:38:15.511 [2024-07-13 11:49:50.144355] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:38:15.511 [2024-07-13 11:49:50.144509] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006c80 name Existed_Raid, state configuring 00:38:15.511 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:38:15.770 [2024-07-13 11:49:50.332420] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:15.770 [2024-07-13 11:49:50.334405] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:38:15.770 [2024-07-13 11:49:50.334566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:15.770 11:49:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:15.770 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:16.028 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:16.028 "name": "Existed_Raid", 00:38:16.028 "uuid": "6e401383-1878-49f4-99d2-4e58536f12fd", 00:38:16.028 "strip_size_kb": 0, 00:38:16.028 "state": "configuring", 00:38:16.028 "raid_level": "raid1", 00:38:16.028 "superblock": true, 00:38:16.028 "num_base_bdevs": 2, 00:38:16.028 "num_base_bdevs_discovered": 1, 00:38:16.028 "num_base_bdevs_operational": 2, 00:38:16.028 "base_bdevs_list": [ 00:38:16.028 { 00:38:16.028 "name": "BaseBdev1", 00:38:16.028 "uuid": "79254ae6-16a6-49aa-99b7-7134f92a90c8", 00:38:16.028 "is_configured": true, 00:38:16.028 "data_offset": 256, 00:38:16.028 "data_size": 7936 00:38:16.028 }, 00:38:16.028 { 00:38:16.028 "name": "BaseBdev2", 00:38:16.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:16.028 "is_configured": false, 00:38:16.028 "data_offset": 0, 00:38:16.028 "data_size": 0 00:38:16.028 } 00:38:16.028 ] 00:38:16.028 }' 00:38:16.028 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:16.028 11:49:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:16.595 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:38:16.854 [2024-07-13 11:49:51.574080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:16.854 [2024-07-13 11:49:51.574503] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007580 00:38:16.854 [2024-07-13 11:49:51.574622] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:16.854 BaseBdev2 00:38:16.854 [2024-07-13 11:49:51.574765] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:38:16.854 [2024-07-13 11:49:51.574916] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007580 00:38:16.854 [2024-07-13 11:49:51.574930] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007580 00:38:16.854 [2024-07-13 11:49:51.575018] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:16.854 11:49:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:38:16.854 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:38:16.854 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:38:16.854 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:38:16.854 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:38:16.854 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:38:16.854 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:38:17.113 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:38:17.372 [ 00:38:17.372 { 00:38:17.372 "name": "BaseBdev2", 00:38:17.372 "aliases": [ 00:38:17.372 "698990c4-eef3-4658-97f8-9606c5029284" 00:38:17.372 ], 00:38:17.372 "product_name": "Malloc disk", 00:38:17.372 "block_size": 4128, 00:38:17.372 "num_blocks": 8192, 00:38:17.372 "uuid": "698990c4-eef3-4658-97f8-9606c5029284", 00:38:17.372 "md_size": 32, 00:38:17.372 "md_interleave": true, 00:38:17.372 "dif_type": 0, 00:38:17.372 "assigned_rate_limits": { 00:38:17.372 "rw_ios_per_sec": 0, 00:38:17.372 "rw_mbytes_per_sec": 0, 00:38:17.372 "r_mbytes_per_sec": 0, 00:38:17.372 "w_mbytes_per_sec": 0 00:38:17.372 }, 00:38:17.372 "claimed": true, 00:38:17.372 "claim_type": "exclusive_write", 00:38:17.372 "zoned": false, 00:38:17.372 "supported_io_types": { 00:38:17.372 "read": true, 00:38:17.372 "write": true, 00:38:17.372 "unmap": true, 00:38:17.372 "flush": true, 00:38:17.372 "reset": true, 00:38:17.372 "nvme_admin": false, 00:38:17.372 "nvme_io": false, 00:38:17.372 "nvme_io_md": false, 00:38:17.372 "write_zeroes": true, 00:38:17.372 "zcopy": true, 00:38:17.372 "get_zone_info": false, 00:38:17.372 "zone_management": false, 00:38:17.372 "zone_append": false, 00:38:17.372 "compare": false, 00:38:17.372 "compare_and_write": false, 00:38:17.372 "abort": true, 00:38:17.372 "seek_hole": false, 00:38:17.372 "seek_data": false, 00:38:17.372 "copy": true, 00:38:17.372 "nvme_iov_md": false 00:38:17.372 }, 00:38:17.372 "memory_domains": [ 00:38:17.372 { 00:38:17.372 "dma_device_id": "system", 00:38:17.372 "dma_device_type": 1 00:38:17.372 }, 00:38:17.372 { 00:38:17.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:17.372 "dma_device_type": 2 00:38:17.372 } 00:38:17.372 ], 00:38:17.372 "driver_specific": {} 00:38:17.372 } 00:38:17.372 ] 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:17.372 11:49:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:17.631 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:17.631 "name": "Existed_Raid", 00:38:17.631 "uuid": "6e401383-1878-49f4-99d2-4e58536f12fd", 00:38:17.631 "strip_size_kb": 0, 00:38:17.631 "state": "online", 00:38:17.631 "raid_level": "raid1", 00:38:17.631 "superblock": true, 00:38:17.631 "num_base_bdevs": 2, 00:38:17.631 "num_base_bdevs_discovered": 2, 00:38:17.631 "num_base_bdevs_operational": 2, 00:38:17.631 "base_bdevs_list": [ 00:38:17.631 { 00:38:17.631 "name": "BaseBdev1", 00:38:17.631 "uuid": "79254ae6-16a6-49aa-99b7-7134f92a90c8", 00:38:17.631 "is_configured": true, 00:38:17.631 "data_offset": 256, 00:38:17.631 "data_size": 7936 00:38:17.631 }, 00:38:17.631 { 00:38:17.631 "name": "BaseBdev2", 00:38:17.631 "uuid": "698990c4-eef3-4658-97f8-9606c5029284", 00:38:17.631 "is_configured": true, 00:38:17.631 "data_offset": 256, 00:38:17.631 "data_size": 7936 00:38:17.631 } 00:38:17.631 ] 00:38:17.631 }' 00:38:17.631 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:17.631 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:18.198 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:38:18.198 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:38:18.198 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:18.198 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:18.198 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:18.198 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:38:18.198 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:38:18.198 11:49:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:18.455 [2024-07-13 11:49:53.022664] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:18.455 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:18.455 "name": "Existed_Raid", 00:38:18.455 "aliases": [ 00:38:18.455 "6e401383-1878-49f4-99d2-4e58536f12fd" 00:38:18.455 ], 00:38:18.455 "product_name": "Raid Volume", 00:38:18.455 "block_size": 4128, 00:38:18.455 "num_blocks": 7936, 00:38:18.455 "uuid": "6e401383-1878-49f4-99d2-4e58536f12fd", 00:38:18.455 "md_size": 32, 00:38:18.455 "md_interleave": true, 00:38:18.455 "dif_type": 0, 00:38:18.455 "assigned_rate_limits": { 00:38:18.455 "rw_ios_per_sec": 0, 00:38:18.455 "rw_mbytes_per_sec": 0, 00:38:18.455 "r_mbytes_per_sec": 0, 00:38:18.455 "w_mbytes_per_sec": 0 00:38:18.455 }, 00:38:18.455 "claimed": false, 00:38:18.455 "zoned": false, 00:38:18.455 "supported_io_types": { 00:38:18.455 "read": true, 00:38:18.455 "write": true, 00:38:18.455 "unmap": false, 00:38:18.455 "flush": false, 00:38:18.455 "reset": true, 00:38:18.455 "nvme_admin": false, 00:38:18.455 "nvme_io": false, 00:38:18.455 "nvme_io_md": false, 00:38:18.455 "write_zeroes": true, 00:38:18.455 "zcopy": false, 00:38:18.455 "get_zone_info": false, 00:38:18.455 "zone_management": false, 00:38:18.455 "zone_append": false, 00:38:18.455 "compare": false, 00:38:18.455 "compare_and_write": false, 00:38:18.455 "abort": false, 00:38:18.455 "seek_hole": false, 00:38:18.455 "seek_data": false, 00:38:18.455 "copy": false, 00:38:18.455 "nvme_iov_md": false 00:38:18.455 }, 00:38:18.455 "memory_domains": [ 00:38:18.455 { 00:38:18.455 "dma_device_id": "system", 00:38:18.455 "dma_device_type": 1 00:38:18.455 }, 00:38:18.455 { 00:38:18.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:18.455 "dma_device_type": 2 00:38:18.455 }, 00:38:18.455 { 00:38:18.455 "dma_device_id": "system", 00:38:18.455 "dma_device_type": 1 00:38:18.455 }, 00:38:18.455 { 00:38:18.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:18.455 "dma_device_type": 2 00:38:18.455 } 00:38:18.455 ], 00:38:18.455 "driver_specific": { 00:38:18.455 "raid": { 00:38:18.455 "uuid": "6e401383-1878-49f4-99d2-4e58536f12fd", 00:38:18.455 "strip_size_kb": 0, 00:38:18.455 "state": "online", 00:38:18.455 "raid_level": "raid1", 00:38:18.455 "superblock": true, 00:38:18.455 "num_base_bdevs": 2, 00:38:18.455 "num_base_bdevs_discovered": 2, 00:38:18.455 "num_base_bdevs_operational": 2, 00:38:18.455 "base_bdevs_list": [ 00:38:18.455 { 00:38:18.455 "name": "BaseBdev1", 00:38:18.455 "uuid": "79254ae6-16a6-49aa-99b7-7134f92a90c8", 00:38:18.455 "is_configured": true, 00:38:18.455 "data_offset": 256, 00:38:18.455 "data_size": 7936 00:38:18.455 }, 00:38:18.455 { 00:38:18.455 "name": "BaseBdev2", 00:38:18.455 "uuid": "698990c4-eef3-4658-97f8-9606c5029284", 00:38:18.455 "is_configured": true, 00:38:18.455 "data_offset": 256, 00:38:18.456 "data_size": 7936 00:38:18.456 } 00:38:18.456 ] 00:38:18.456 } 00:38:18.456 } 00:38:18.456 }' 00:38:18.456 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:18.456 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:38:18.456 BaseBdev2' 00:38:18.456 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:18.456 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:38:18.456 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:18.714 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:18.714 "name": "BaseBdev1", 00:38:18.714 "aliases": [ 00:38:18.714 "79254ae6-16a6-49aa-99b7-7134f92a90c8" 00:38:18.714 ], 00:38:18.714 "product_name": "Malloc disk", 00:38:18.714 "block_size": 4128, 00:38:18.714 "num_blocks": 8192, 00:38:18.714 "uuid": "79254ae6-16a6-49aa-99b7-7134f92a90c8", 00:38:18.714 "md_size": 32, 00:38:18.714 "md_interleave": true, 00:38:18.714 "dif_type": 0, 00:38:18.714 "assigned_rate_limits": { 00:38:18.714 "rw_ios_per_sec": 0, 00:38:18.714 "rw_mbytes_per_sec": 0, 00:38:18.714 "r_mbytes_per_sec": 0, 00:38:18.714 "w_mbytes_per_sec": 0 00:38:18.714 }, 00:38:18.714 "claimed": true, 00:38:18.714 "claim_type": "exclusive_write", 00:38:18.714 "zoned": false, 00:38:18.714 "supported_io_types": { 00:38:18.714 "read": true, 00:38:18.714 "write": true, 00:38:18.714 "unmap": true, 00:38:18.714 "flush": true, 00:38:18.714 "reset": true, 00:38:18.714 "nvme_admin": false, 00:38:18.714 "nvme_io": false, 00:38:18.714 "nvme_io_md": false, 00:38:18.714 "write_zeroes": true, 00:38:18.714 "zcopy": true, 00:38:18.714 "get_zone_info": false, 00:38:18.714 "zone_management": false, 00:38:18.714 "zone_append": false, 00:38:18.714 "compare": false, 00:38:18.714 "compare_and_write": false, 00:38:18.714 "abort": true, 00:38:18.714 "seek_hole": false, 00:38:18.714 "seek_data": false, 00:38:18.714 "copy": true, 00:38:18.714 "nvme_iov_md": false 00:38:18.714 }, 00:38:18.714 "memory_domains": [ 00:38:18.714 { 00:38:18.714 "dma_device_id": "system", 00:38:18.714 "dma_device_type": 1 00:38:18.714 }, 00:38:18.714 { 00:38:18.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:18.714 "dma_device_type": 2 00:38:18.714 } 00:38:18.714 ], 00:38:18.714 "driver_specific": {} 00:38:18.714 }' 00:38:18.714 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:18.714 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:18.714 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:18.714 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:18.972 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:18.972 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:18.972 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:18.972 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:18.972 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:18.972 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:18.972 
11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:19.230 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:19.230 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:19.230 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:38:19.230 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:19.230 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:19.230 "name": "BaseBdev2", 00:38:19.230 "aliases": [ 00:38:19.230 "698990c4-eef3-4658-97f8-9606c5029284" 00:38:19.230 ], 00:38:19.230 "product_name": "Malloc disk", 00:38:19.230 "block_size": 4128, 00:38:19.230 "num_blocks": 8192, 00:38:19.230 "uuid": "698990c4-eef3-4658-97f8-9606c5029284", 00:38:19.230 "md_size": 32, 00:38:19.230 "md_interleave": true, 00:38:19.230 "dif_type": 0, 00:38:19.230 "assigned_rate_limits": { 00:38:19.230 "rw_ios_per_sec": 0, 00:38:19.230 "rw_mbytes_per_sec": 0, 00:38:19.230 "r_mbytes_per_sec": 0, 00:38:19.230 "w_mbytes_per_sec": 0 00:38:19.230 }, 00:38:19.230 "claimed": true, 00:38:19.230 "claim_type": "exclusive_write", 00:38:19.230 "zoned": false, 00:38:19.230 "supported_io_types": { 00:38:19.230 "read": true, 00:38:19.230 "write": true, 00:38:19.230 "unmap": true, 00:38:19.230 "flush": true, 00:38:19.230 "reset": true, 00:38:19.230 "nvme_admin": false, 00:38:19.230 "nvme_io": false, 00:38:19.230 "nvme_io_md": false, 00:38:19.230 "write_zeroes": true, 00:38:19.230 "zcopy": true, 00:38:19.230 "get_zone_info": false, 00:38:19.230 "zone_management": false, 00:38:19.230 "zone_append": false, 00:38:19.230 "compare": false, 00:38:19.230 "compare_and_write": false, 00:38:19.230 "abort": true, 00:38:19.230 "seek_hole": false, 00:38:19.230 "seek_data": false, 00:38:19.230 "copy": true, 00:38:19.230 "nvme_iov_md": false 00:38:19.230 }, 00:38:19.230 "memory_domains": [ 00:38:19.230 { 00:38:19.231 "dma_device_id": "system", 00:38:19.231 "dma_device_type": 1 00:38:19.231 }, 00:38:19.231 { 00:38:19.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:19.231 "dma_device_type": 2 00:38:19.231 } 00:38:19.231 ], 00:38:19.231 "driver_specific": {} 00:38:19.231 }' 00:38:19.231 11:49:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:19.489 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:19.489 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:19.489 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:19.489 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:19.489 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:19.489 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:19.747 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:19.747 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:19.747 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:19.747 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:19.747 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:19.747 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:38:20.006 [2024-07-13 11:49:54.658755] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:20.006 11:49:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:38:20.264 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:20.264 "name": "Existed_Raid", 00:38:20.264 "uuid": "6e401383-1878-49f4-99d2-4e58536f12fd", 00:38:20.264 "strip_size_kb": 0, 00:38:20.264 "state": "online", 00:38:20.264 "raid_level": "raid1", 00:38:20.264 "superblock": true, 00:38:20.264 "num_base_bdevs": 2, 00:38:20.264 "num_base_bdevs_discovered": 1, 00:38:20.264 "num_base_bdevs_operational": 1, 00:38:20.264 "base_bdevs_list": [ 00:38:20.264 { 00:38:20.264 "name": null, 
00:38:20.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:20.264 "is_configured": false, 00:38:20.264 "data_offset": 256, 00:38:20.264 "data_size": 7936 00:38:20.264 }, 00:38:20.264 { 00:38:20.264 "name": "BaseBdev2", 00:38:20.264 "uuid": "698990c4-eef3-4658-97f8-9606c5029284", 00:38:20.264 "is_configured": true, 00:38:20.264 "data_offset": 256, 00:38:20.264 "data_size": 7936 00:38:20.264 } 00:38:20.264 ] 00:38:20.264 }' 00:38:20.264 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:20.264 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:21.201 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:38:21.201 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:38:21.201 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:21.201 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:38:21.201 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:38:21.201 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:38:21.201 11:49:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:38:21.459 [2024-07-13 11:49:56.181887] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:38:21.459 [2024-07-13 11:49:56.182171] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:21.717 [2024-07-13 11:49:56.249440] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:21.717 [2024-07-13 11:49:56.249650] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:21.717 [2024-07-13 11:49:56.249786] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007580 name Existed_Raid, state offline 00:38:21.717 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:38:21.717 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:38:21.717 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:21.717 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 164043 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@948 -- # '[' -z 164043 ']' 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 164043 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164043 00:38:21.976 killing process with pid 164043 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164043' 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 164043 00:38:21.976 11:49:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 164043 00:38:21.976 [2024-07-13 11:49:56.547753] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:21.976 [2024-07-13 11:49:56.547838] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:22.957 ************************************ 00:38:22.957 END TEST raid_state_function_test_sb_md_interleaved 00:38:22.957 ************************************ 00:38:22.957 11:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:38:22.957 00:38:22.957 real 0m11.538s 00:38:22.957 user 0m20.644s 00:38:22.957 sys 0m1.315s 00:38:22.957 11:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:22.957 11:49:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:22.957 11:49:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:38:22.957 11:49:57 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:38:22.957 11:49:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:38:22.957 11:49:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:22.957 11:49:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:22.957 ************************************ 00:38:22.957 START TEST raid_superblock_test_md_interleaved 00:38:22.957 ************************************ 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:38:22.957 11:49:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=164439 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 164439 /var/tmp/spdk-raid.sock 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 164439 ']' 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:22.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:22.957 11:49:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:22.957 [2024-07-13 11:49:57.602316] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
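The base-bdev stack that this superblock test assembles (interleaved-metadata malloc bdevs wrapped in fixed-UUID passthru bdevs, combined into a raid1 with an on-disk superblock) can be condensed into the following sketch. Sizes, names and UUIDs are the ones exercised in the trace below; the RPC shell variable is illustrative shorthand:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# 32 MiB malloc bdevs, 4096-byte data blocks plus 32 bytes of interleaved metadata
# per block (the bdevs therefore report block_size 4128 and 8192 blocks).
$RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc1
$RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc2

# Wrap each malloc bdev in a passthru bdev with a well-known UUID.
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

# Assemble the passthru bdevs into a raid1 volume with a superblock (-s).
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

# The volume should come up online with both base bdevs discovered.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'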
00:38:22.957 [2024-07-13 11:49:57.602751] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164439 ] 00:38:23.223 [2024-07-13 11:49:57.773829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.481 [2024-07-13 11:49:58.013211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.481 [2024-07-13 11:49:58.197959] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:24.046 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:38:24.047 malloc1 00:38:24.047 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:24.305 [2024-07-13 11:49:58.889964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:24.305 [2024-07-13 11:49:58.890285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:24.305 [2024-07-13 11:49:58.890352] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:38:24.305 [2024-07-13 11:49:58.890634] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:24.305 [2024-07-13 11:49:58.892628] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:24.305 [2024-07-13 11:49:58.892791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:24.305 pt1 00:38:24.305 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:38:24.305 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:38:24.305 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:38:24.305 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:38:24.305 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:38:24.305 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:38:24.305 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:38:24.305 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:38:24.305 11:49:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:38:24.564 malloc2 00:38:24.564 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:24.564 [2024-07-13 11:49:59.309936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:24.564 [2024-07-13 11:49:59.310178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:24.564 [2024-07-13 11:49:59.310318] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:38:24.564 [2024-07-13 11:49:59.310426] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:24.564 [2024-07-13 11:49:59.312447] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:24.564 [2024-07-13 11:49:59.312608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:24.564 pt2 00:38:24.822 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:38:24.822 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:38:24.822 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:38:24.822 [2024-07-13 11:49:59.570040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:24.822 [2024-07-13 11:49:59.571865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:24.822 [2024-07-13 11:49:59.572207] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:38:24.822 [2024-07-13 11:49:59.572336] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:24.822 [2024-07-13 11:49:59.572500] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:38:24.822 [2024-07-13 11:49:59.572712] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:38:24.822 [2024-07-13 11:49:59.572907] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:38:24.822 [2024-07-13 11:49:59.573070] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:25.081 "name": "raid_bdev1", 00:38:25.081 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:25.081 "strip_size_kb": 0, 00:38:25.081 "state": "online", 00:38:25.081 "raid_level": "raid1", 00:38:25.081 "superblock": true, 00:38:25.081 "num_base_bdevs": 2, 00:38:25.081 "num_base_bdevs_discovered": 2, 00:38:25.081 "num_base_bdevs_operational": 2, 00:38:25.081 "base_bdevs_list": [ 00:38:25.081 { 00:38:25.081 "name": "pt1", 00:38:25.081 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:25.081 "is_configured": true, 00:38:25.081 "data_offset": 256, 00:38:25.081 "data_size": 7936 00:38:25.081 }, 00:38:25.081 { 00:38:25.081 "name": "pt2", 00:38:25.081 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:25.081 "is_configured": true, 00:38:25.081 "data_offset": 256, 00:38:25.081 "data_size": 7936 00:38:25.081 } 00:38:25.081 ] 00:38:25.081 }' 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:25.081 11:49:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:26.015 [2024-07-13 11:50:00.594355] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:26.015 "name": "raid_bdev1", 00:38:26.015 "aliases": [ 00:38:26.015 "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d" 00:38:26.015 ], 00:38:26.015 "product_name": "Raid Volume", 00:38:26.015 "block_size": 4128, 00:38:26.015 "num_blocks": 7936, 00:38:26.015 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:26.015 "md_size": 32, 00:38:26.015 "md_interleave": true, 00:38:26.015 "dif_type": 0, 00:38:26.015 "assigned_rate_limits": { 00:38:26.015 "rw_ios_per_sec": 0, 00:38:26.015 "rw_mbytes_per_sec": 0, 00:38:26.015 "r_mbytes_per_sec": 0, 00:38:26.015 "w_mbytes_per_sec": 0 00:38:26.015 }, 00:38:26.015 "claimed": false, 00:38:26.015 "zoned": false, 00:38:26.015 "supported_io_types": { 00:38:26.015 "read": true, 00:38:26.015 "write": true, 00:38:26.015 "unmap": false, 00:38:26.015 "flush": false, 00:38:26.015 "reset": true, 00:38:26.015 "nvme_admin": false, 00:38:26.015 "nvme_io": false, 00:38:26.015 "nvme_io_md": false, 00:38:26.015 "write_zeroes": true, 00:38:26.015 "zcopy": false, 00:38:26.015 "get_zone_info": false, 00:38:26.015 "zone_management": false, 00:38:26.015 "zone_append": false, 00:38:26.015 "compare": false, 00:38:26.015 "compare_and_write": false, 00:38:26.015 "abort": false, 00:38:26.015 "seek_hole": false, 00:38:26.015 "seek_data": false, 00:38:26.015 "copy": false, 00:38:26.015 "nvme_iov_md": false 00:38:26.015 }, 00:38:26.015 "memory_domains": [ 00:38:26.015 { 00:38:26.015 "dma_device_id": "system", 00:38:26.015 "dma_device_type": 1 00:38:26.015 }, 00:38:26.015 { 00:38:26.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:26.015 "dma_device_type": 2 00:38:26.015 }, 00:38:26.015 { 00:38:26.015 "dma_device_id": "system", 00:38:26.015 "dma_device_type": 1 00:38:26.015 }, 00:38:26.015 { 00:38:26.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:26.015 "dma_device_type": 2 00:38:26.015 } 00:38:26.015 ], 00:38:26.015 "driver_specific": { 00:38:26.015 "raid": { 00:38:26.015 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:26.015 "strip_size_kb": 0, 00:38:26.015 "state": "online", 00:38:26.015 "raid_level": "raid1", 00:38:26.015 "superblock": true, 00:38:26.015 "num_base_bdevs": 2, 00:38:26.015 "num_base_bdevs_discovered": 2, 00:38:26.015 "num_base_bdevs_operational": 2, 00:38:26.015 "base_bdevs_list": [ 00:38:26.015 { 00:38:26.015 "name": "pt1", 00:38:26.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:26.015 "is_configured": true, 00:38:26.015 "data_offset": 256, 00:38:26.015 "data_size": 7936 00:38:26.015 }, 00:38:26.015 { 00:38:26.015 "name": "pt2", 00:38:26.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:26.015 "is_configured": true, 00:38:26.015 "data_offset": 256, 00:38:26.015 "data_size": 7936 00:38:26.015 } 00:38:26.015 ] 00:38:26.015 } 00:38:26.015 } 00:38:26.015 }' 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:38:26.015 pt2' 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:38:26.015 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:26.272 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:26.272 "name": "pt1", 00:38:26.272 "aliases": [ 00:38:26.272 "00000000-0000-0000-0000-000000000001" 00:38:26.272 ], 00:38:26.272 "product_name": "passthru", 00:38:26.272 "block_size": 4128, 00:38:26.272 "num_blocks": 8192, 00:38:26.272 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:26.272 "md_size": 32, 00:38:26.272 "md_interleave": true, 00:38:26.272 "dif_type": 0, 00:38:26.272 "assigned_rate_limits": { 00:38:26.272 "rw_ios_per_sec": 0, 00:38:26.272 "rw_mbytes_per_sec": 0, 00:38:26.272 "r_mbytes_per_sec": 0, 00:38:26.272 "w_mbytes_per_sec": 0 00:38:26.272 }, 00:38:26.272 "claimed": true, 00:38:26.272 "claim_type": "exclusive_write", 00:38:26.272 "zoned": false, 00:38:26.272 "supported_io_types": { 00:38:26.272 "read": true, 00:38:26.272 "write": true, 00:38:26.272 "unmap": true, 00:38:26.272 "flush": true, 00:38:26.272 "reset": true, 00:38:26.272 "nvme_admin": false, 00:38:26.272 "nvme_io": false, 00:38:26.272 "nvme_io_md": false, 00:38:26.272 "write_zeroes": true, 00:38:26.272 "zcopy": true, 00:38:26.272 "get_zone_info": false, 00:38:26.272 "zone_management": false, 00:38:26.272 "zone_append": false, 00:38:26.272 "compare": false, 00:38:26.272 "compare_and_write": false, 00:38:26.272 "abort": true, 00:38:26.272 "seek_hole": false, 00:38:26.272 "seek_data": false, 00:38:26.272 "copy": true, 00:38:26.272 "nvme_iov_md": false 00:38:26.272 }, 00:38:26.272 "memory_domains": [ 00:38:26.272 { 00:38:26.272 "dma_device_id": "system", 00:38:26.273 "dma_device_type": 1 00:38:26.273 }, 00:38:26.273 { 00:38:26.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:26.273 "dma_device_type": 2 00:38:26.273 } 00:38:26.273 ], 00:38:26.273 "driver_specific": { 00:38:26.273 "passthru": { 00:38:26.273 "name": "pt1", 00:38:26.273 "base_bdev_name": "malloc1" 00:38:26.273 } 00:38:26.273 } 00:38:26.273 }' 00:38:26.273 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:26.273 11:50:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:26.529 11:50:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:38:26.529 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:26.787 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:26.787 "name": "pt2", 00:38:26.787 "aliases": [ 00:38:26.787 "00000000-0000-0000-0000-000000000002" 00:38:26.787 ], 00:38:26.787 "product_name": "passthru", 00:38:26.787 "block_size": 4128, 00:38:26.787 "num_blocks": 8192, 00:38:26.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:26.787 "md_size": 32, 00:38:26.787 "md_interleave": true, 00:38:26.787 "dif_type": 0, 00:38:26.787 "assigned_rate_limits": { 00:38:26.787 "rw_ios_per_sec": 0, 00:38:26.787 "rw_mbytes_per_sec": 0, 00:38:26.787 "r_mbytes_per_sec": 0, 00:38:26.787 "w_mbytes_per_sec": 0 00:38:26.787 }, 00:38:26.787 "claimed": true, 00:38:26.787 "claim_type": "exclusive_write", 00:38:26.787 "zoned": false, 00:38:26.787 "supported_io_types": { 00:38:26.787 "read": true, 00:38:26.787 "write": true, 00:38:26.787 "unmap": true, 00:38:26.787 "flush": true, 00:38:26.787 "reset": true, 00:38:26.787 "nvme_admin": false, 00:38:26.787 "nvme_io": false, 00:38:26.787 "nvme_io_md": false, 00:38:26.787 "write_zeroes": true, 00:38:26.787 "zcopy": true, 00:38:26.787 "get_zone_info": false, 00:38:26.787 "zone_management": false, 00:38:26.787 "zone_append": false, 00:38:26.787 "compare": false, 00:38:26.787 "compare_and_write": false, 00:38:26.787 "abort": true, 00:38:26.787 "seek_hole": false, 00:38:26.787 "seek_data": false, 00:38:26.787 "copy": true, 00:38:26.787 "nvme_iov_md": false 00:38:26.787 }, 00:38:26.787 "memory_domains": [ 00:38:26.787 { 00:38:26.787 "dma_device_id": "system", 00:38:26.787 "dma_device_type": 1 00:38:26.787 }, 00:38:26.787 { 00:38:26.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:26.787 "dma_device_type": 2 00:38:26.787 } 00:38:26.787 ], 00:38:26.787 "driver_specific": { 00:38:26.787 "passthru": { 00:38:26.787 "name": "pt2", 00:38:26.787 "base_bdev_name": "malloc2" 00:38:26.787 } 00:38:26.787 } 00:38:26.787 }' 00:38:26.787 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:26.787 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:27.044 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:27.044 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:27.044 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:27.044 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:27.044 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:27.044 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:27.044 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:27.044 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:27.044 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:27.302 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:27.302 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:27.302 11:50:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:38:27.559 [2024-07-13 11:50:02.070570] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:27.559 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d 00:38:27.559 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d ']' 00:38:27.559 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:27.817 [2024-07-13 11:50:02.330405] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:27.817 [2024-07-13 11:50:02.330541] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:27.817 [2024-07-13 11:50:02.330692] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:27.817 [2024-07-13 11:50:02.330842] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:27.817 [2024-07-13 11:50:02.330990] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:38:27.817 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:27.817 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:38:27.817 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:38:27.817 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:38:27.817 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:38:27.817 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:38:28.074 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:38:28.074 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:28.331 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:38:28.331 11:50:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:38:28.589 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:38:28.589 [2024-07-13 11:50:03.323206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:38:28.589 [2024-07-13 11:50:03.325148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:38:28.589 [2024-07-13 11:50:03.325363] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:38:28.589 [2024-07-13 11:50:03.325571] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:38:28.590 [2024-07-13 11:50:03.325719] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:28.590 [2024-07-13 11:50:03.325806] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state configuring 00:38:28.590 request: 00:38:28.590 { 00:38:28.590 "name": "raid_bdev1", 00:38:28.590 "raid_level": "raid1", 00:38:28.590 "base_bdevs": [ 00:38:28.590 "malloc1", 00:38:28.590 "malloc2" 00:38:28.590 ], 00:38:28.590 "superblock": false, 00:38:28.590 "method": "bdev_raid_create", 00:38:28.590 "req_id": 1 00:38:28.590 } 00:38:28.590 Got JSON-RPC error response 00:38:28.590 response: 00:38:28.590 { 00:38:28.590 "code": -17, 00:38:28.590 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:38:28.590 } 00:38:28.590 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:38:28.590 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:28.590 11:50:03 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:38:28.590 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:28.590 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:28.590 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:38:28.848 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:38:28.848 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:38:28.848 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:29.106 [2024-07-13 11:50:03.827309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:29.106 [2024-07-13 11:50:03.827501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:29.106 [2024-07-13 11:50:03.827566] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:38:29.106 [2024-07-13 11:50:03.827838] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:29.106 [2024-07-13 11:50:03.829560] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:29.106 [2024-07-13 11:50:03.829743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:29.106 [2024-07-13 11:50:03.829899] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:29.106 [2024-07-13 11:50:03.830057] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:29.106 pt1 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:29.106 11:50:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:29.364 11:50:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:29.364 "name": "raid_bdev1", 00:38:29.364 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:29.364 "strip_size_kb": 0, 00:38:29.364 "state": "configuring", 00:38:29.364 "raid_level": "raid1", 00:38:29.364 "superblock": true, 00:38:29.364 "num_base_bdevs": 2, 00:38:29.364 "num_base_bdevs_discovered": 1, 00:38:29.364 "num_base_bdevs_operational": 2, 00:38:29.364 "base_bdevs_list": [ 00:38:29.364 { 00:38:29.364 "name": "pt1", 00:38:29.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:29.364 "is_configured": true, 00:38:29.364 "data_offset": 256, 00:38:29.364 "data_size": 7936 00:38:29.364 }, 00:38:29.364 { 00:38:29.364 "name": null, 00:38:29.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:29.364 "is_configured": false, 00:38:29.364 "data_offset": 256, 00:38:29.364 "data_size": 7936 00:38:29.364 } 00:38:29.364 ] 00:38:29.364 }' 00:38:29.364 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:29.364 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:30.311 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:38:30.311 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:38:30.311 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:30.311 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:30.311 [2024-07-13 11:50:04.939158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:30.311 [2024-07-13 11:50:04.939349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:30.311 [2024-07-13 11:50:04.939410] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:38:30.311 [2024-07-13 11:50:04.939658] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:30.311 [2024-07-13 11:50:04.939838] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:30.312 [2024-07-13 11:50:04.939997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:30.312 [2024-07-13 11:50:04.940088] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:30.312 [2024-07-13 11:50:04.940146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:30.312 [2024-07-13 11:50:04.940488] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:38:30.312 [2024-07-13 11:50:04.940588] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:30.312 [2024-07-13 11:50:04.940699] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:30.312 [2024-07-13 11:50:04.940888] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:38:30.312 [2024-07-13 11:50:04.941006] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:38:30.312 [2024-07-13 11:50:04.941151] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:30.312 pt2 00:38:30.312 
11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:38:30.312 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:38:30.312 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:30.312 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:30.312 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:30.312 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:30.312 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:30.312 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:30.313 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:30.313 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:30.313 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:30.313 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:30.313 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:30.313 11:50:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:30.570 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:30.570 "name": "raid_bdev1", 00:38:30.570 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:30.570 "strip_size_kb": 0, 00:38:30.570 "state": "online", 00:38:30.570 "raid_level": "raid1", 00:38:30.570 "superblock": true, 00:38:30.570 "num_base_bdevs": 2, 00:38:30.570 "num_base_bdevs_discovered": 2, 00:38:30.570 "num_base_bdevs_operational": 2, 00:38:30.570 "base_bdevs_list": [ 00:38:30.570 { 00:38:30.570 "name": "pt1", 00:38:30.570 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:30.570 "is_configured": true, 00:38:30.570 "data_offset": 256, 00:38:30.570 "data_size": 7936 00:38:30.570 }, 00:38:30.570 { 00:38:30.570 "name": "pt2", 00:38:30.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:30.570 "is_configured": true, 00:38:30.570 "data_offset": 256, 00:38:30.570 "data_size": 7936 00:38:30.570 } 00:38:30.570 ] 00:38:30.570 }' 00:38:30.570 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:30.570 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:31.135 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:38:31.135 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:38:31.135 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:38:31.135 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:38:31.135 11:50:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:38:31.135 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:38:31.135 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:31.135 11:50:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:38:31.394 [2024-07-13 11:50:06.019581] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:31.394 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:38:31.394 "name": "raid_bdev1", 00:38:31.394 "aliases": [ 00:38:31.394 "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d" 00:38:31.394 ], 00:38:31.394 "product_name": "Raid Volume", 00:38:31.394 "block_size": 4128, 00:38:31.394 "num_blocks": 7936, 00:38:31.394 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:31.394 "md_size": 32, 00:38:31.394 "md_interleave": true, 00:38:31.394 "dif_type": 0, 00:38:31.394 "assigned_rate_limits": { 00:38:31.394 "rw_ios_per_sec": 0, 00:38:31.394 "rw_mbytes_per_sec": 0, 00:38:31.394 "r_mbytes_per_sec": 0, 00:38:31.394 "w_mbytes_per_sec": 0 00:38:31.394 }, 00:38:31.394 "claimed": false, 00:38:31.394 "zoned": false, 00:38:31.394 "supported_io_types": { 00:38:31.394 "read": true, 00:38:31.394 "write": true, 00:38:31.394 "unmap": false, 00:38:31.394 "flush": false, 00:38:31.394 "reset": true, 00:38:31.394 "nvme_admin": false, 00:38:31.394 "nvme_io": false, 00:38:31.394 "nvme_io_md": false, 00:38:31.394 "write_zeroes": true, 00:38:31.394 "zcopy": false, 00:38:31.394 "get_zone_info": false, 00:38:31.394 "zone_management": false, 00:38:31.394 "zone_append": false, 00:38:31.394 "compare": false, 00:38:31.394 "compare_and_write": false, 00:38:31.394 "abort": false, 00:38:31.394 "seek_hole": false, 00:38:31.394 "seek_data": false, 00:38:31.394 "copy": false, 00:38:31.394 "nvme_iov_md": false 00:38:31.394 }, 00:38:31.394 "memory_domains": [ 00:38:31.394 { 00:38:31.394 "dma_device_id": "system", 00:38:31.394 "dma_device_type": 1 00:38:31.394 }, 00:38:31.394 { 00:38:31.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:31.394 "dma_device_type": 2 00:38:31.394 }, 00:38:31.394 { 00:38:31.394 "dma_device_id": "system", 00:38:31.394 "dma_device_type": 1 00:38:31.394 }, 00:38:31.394 { 00:38:31.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:31.394 "dma_device_type": 2 00:38:31.394 } 00:38:31.394 ], 00:38:31.394 "driver_specific": { 00:38:31.394 "raid": { 00:38:31.394 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:31.394 "strip_size_kb": 0, 00:38:31.394 "state": "online", 00:38:31.394 "raid_level": "raid1", 00:38:31.394 "superblock": true, 00:38:31.394 "num_base_bdevs": 2, 00:38:31.394 "num_base_bdevs_discovered": 2, 00:38:31.394 "num_base_bdevs_operational": 2, 00:38:31.394 "base_bdevs_list": [ 00:38:31.394 { 00:38:31.394 "name": "pt1", 00:38:31.394 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:31.394 "is_configured": true, 00:38:31.394 "data_offset": 256, 00:38:31.394 "data_size": 7936 00:38:31.394 }, 00:38:31.394 { 00:38:31.394 "name": "pt2", 00:38:31.394 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:31.394 "is_configured": true, 00:38:31.394 "data_offset": 256, 00:38:31.394 "data_size": 7936 00:38:31.394 } 00:38:31.394 ] 00:38:31.394 } 00:38:31.394 } 00:38:31.394 }' 00:38:31.394 11:50:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:38:31.394 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:38:31.394 pt2' 00:38:31.394 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:31.394 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:38:31.394 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:31.653 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:31.653 "name": "pt1", 00:38:31.653 "aliases": [ 00:38:31.653 "00000000-0000-0000-0000-000000000001" 00:38:31.653 ], 00:38:31.653 "product_name": "passthru", 00:38:31.653 "block_size": 4128, 00:38:31.653 "num_blocks": 8192, 00:38:31.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:38:31.653 "md_size": 32, 00:38:31.653 "md_interleave": true, 00:38:31.653 "dif_type": 0, 00:38:31.653 "assigned_rate_limits": { 00:38:31.653 "rw_ios_per_sec": 0, 00:38:31.653 "rw_mbytes_per_sec": 0, 00:38:31.653 "r_mbytes_per_sec": 0, 00:38:31.653 "w_mbytes_per_sec": 0 00:38:31.653 }, 00:38:31.653 "claimed": true, 00:38:31.653 "claim_type": "exclusive_write", 00:38:31.653 "zoned": false, 00:38:31.653 "supported_io_types": { 00:38:31.653 "read": true, 00:38:31.653 "write": true, 00:38:31.653 "unmap": true, 00:38:31.653 "flush": true, 00:38:31.653 "reset": true, 00:38:31.653 "nvme_admin": false, 00:38:31.653 "nvme_io": false, 00:38:31.653 "nvme_io_md": false, 00:38:31.653 "write_zeroes": true, 00:38:31.653 "zcopy": true, 00:38:31.653 "get_zone_info": false, 00:38:31.653 "zone_management": false, 00:38:31.653 "zone_append": false, 00:38:31.653 "compare": false, 00:38:31.653 "compare_and_write": false, 00:38:31.653 "abort": true, 00:38:31.653 "seek_hole": false, 00:38:31.653 "seek_data": false, 00:38:31.653 "copy": true, 00:38:31.653 "nvme_iov_md": false 00:38:31.653 }, 00:38:31.653 "memory_domains": [ 00:38:31.653 { 00:38:31.653 "dma_device_id": "system", 00:38:31.653 "dma_device_type": 1 00:38:31.653 }, 00:38:31.653 { 00:38:31.653 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:31.653 "dma_device_type": 2 00:38:31.653 } 00:38:31.653 ], 00:38:31.653 "driver_specific": { 00:38:31.653 "passthru": { 00:38:31.653 "name": "pt1", 00:38:31.653 "base_bdev_name": "malloc1" 00:38:31.653 } 00:38:31.653 } 00:38:31.653 }' 00:38:31.653 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:31.653 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:31.653 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:31.653 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:31.912 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:31.912 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:31.912 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:31.912 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:38:31.912 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:31.912 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:32.171 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:32.171 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:32.171 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:38:32.171 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:38:32.171 11:50:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:38:32.429 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:38:32.429 "name": "pt2", 00:38:32.429 "aliases": [ 00:38:32.429 "00000000-0000-0000-0000-000000000002" 00:38:32.429 ], 00:38:32.429 "product_name": "passthru", 00:38:32.429 "block_size": 4128, 00:38:32.429 "num_blocks": 8192, 00:38:32.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:32.429 "md_size": 32, 00:38:32.429 "md_interleave": true, 00:38:32.429 "dif_type": 0, 00:38:32.429 "assigned_rate_limits": { 00:38:32.429 "rw_ios_per_sec": 0, 00:38:32.429 "rw_mbytes_per_sec": 0, 00:38:32.429 "r_mbytes_per_sec": 0, 00:38:32.429 "w_mbytes_per_sec": 0 00:38:32.429 }, 00:38:32.429 "claimed": true, 00:38:32.429 "claim_type": "exclusive_write", 00:38:32.429 "zoned": false, 00:38:32.429 "supported_io_types": { 00:38:32.429 "read": true, 00:38:32.429 "write": true, 00:38:32.429 "unmap": true, 00:38:32.429 "flush": true, 00:38:32.429 "reset": true, 00:38:32.429 "nvme_admin": false, 00:38:32.429 "nvme_io": false, 00:38:32.429 "nvme_io_md": false, 00:38:32.429 "write_zeroes": true, 00:38:32.429 "zcopy": true, 00:38:32.429 "get_zone_info": false, 00:38:32.429 "zone_management": false, 00:38:32.429 "zone_append": false, 00:38:32.429 "compare": false, 00:38:32.429 "compare_and_write": false, 00:38:32.429 "abort": true, 00:38:32.429 "seek_hole": false, 00:38:32.429 "seek_data": false, 00:38:32.429 "copy": true, 00:38:32.429 "nvme_iov_md": false 00:38:32.429 }, 00:38:32.429 "memory_domains": [ 00:38:32.429 { 00:38:32.429 "dma_device_id": "system", 00:38:32.429 "dma_device_type": 1 00:38:32.429 }, 00:38:32.429 { 00:38:32.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:38:32.429 "dma_device_type": 2 00:38:32.429 } 00:38:32.429 ], 00:38:32.429 "driver_specific": { 00:38:32.429 "passthru": { 00:38:32.429 "name": "pt2", 00:38:32.429 "base_bdev_name": "malloc2" 00:38:32.429 } 00:38:32.429 } 00:38:32.429 }' 00:38:32.429 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:32.429 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:38:32.429 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:38:32.429 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:32.429 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:38:32.688 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:38:32.688 11:50:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:32.688 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:38:32.688 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:38:32.688 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:32.688 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:38:32.946 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:38:32.946 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:38:32.946 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:33.205 [2024-07-13 11:50:07.706547] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d '!=' 9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d ']' 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:38:33.205 [2024-07-13 11:50:07.894410] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:33.205 11:50:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:33.464 11:50:08 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:33.464 "name": "raid_bdev1", 00:38:33.464 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:33.464 "strip_size_kb": 0, 00:38:33.464 "state": "online", 00:38:33.464 "raid_level": "raid1", 00:38:33.464 "superblock": true, 00:38:33.464 "num_base_bdevs": 2, 00:38:33.464 "num_base_bdevs_discovered": 1, 00:38:33.464 "num_base_bdevs_operational": 1, 00:38:33.464 "base_bdevs_list": [ 00:38:33.464 { 00:38:33.464 "name": null, 00:38:33.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:33.464 "is_configured": false, 00:38:33.464 "data_offset": 256, 00:38:33.464 "data_size": 7936 00:38:33.464 }, 00:38:33.464 { 00:38:33.464 "name": "pt2", 00:38:33.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:33.464 "is_configured": true, 00:38:33.464 "data_offset": 256, 00:38:33.464 "data_size": 7936 00:38:33.464 } 00:38:33.464 ] 00:38:33.464 }' 00:38:33.464 11:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:33.464 11:50:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:34.400 11:50:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:34.400 [2024-07-13 11:50:08.990680] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:34.400 [2024-07-13 11:50:08.990845] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:34.400 [2024-07-13 11:50:08.991048] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:34.400 [2024-07-13 11:50:08.991196] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:34.400 [2024-07-13 11:50:08.991294] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:38:34.400 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:34.400 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:38:34.668 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:38:34.668 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:38:34.668 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:38:34.668 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:34.668 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:38:34.927 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:38:34.927 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:38:34.927 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:38:34.927 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:38:34.927 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@518 -- # i=1 00:38:34.927 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:38:35.186 [2024-07-13 11:50:09.718711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:38:35.186 [2024-07-13 11:50:09.719346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:35.186 [2024-07-13 11:50:09.719620] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:38:35.186 [2024-07-13 11:50:09.719858] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:35.186 [2024-07-13 11:50:09.721893] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:35.186 [2024-07-13 11:50:09.722176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:38:35.186 [2024-07-13 11:50:09.722450] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:38:35.186 [2024-07-13 11:50:09.722615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:35.186 [2024-07-13 11:50:09.722802] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:38:35.186 [2024-07-13 11:50:09.722925] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:35.186 [2024-07-13 11:50:09.723041] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:35.186 [2024-07-13 11:50:09.723249] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:38:35.186 [2024-07-13 11:50:09.723374] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:38:35.186 [2024-07-13 11:50:09.723546] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:35.186 pt2 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:35.186 11:50:09 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:35.445 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:35.445 "name": "raid_bdev1", 00:38:35.445 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:35.445 "strip_size_kb": 0, 00:38:35.445 "state": "online", 00:38:35.445 "raid_level": "raid1", 00:38:35.445 "superblock": true, 00:38:35.445 "num_base_bdevs": 2, 00:38:35.445 "num_base_bdevs_discovered": 1, 00:38:35.445 "num_base_bdevs_operational": 1, 00:38:35.445 "base_bdevs_list": [ 00:38:35.445 { 00:38:35.445 "name": null, 00:38:35.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:35.445 "is_configured": false, 00:38:35.445 "data_offset": 256, 00:38:35.445 "data_size": 7936 00:38:35.445 }, 00:38:35.445 { 00:38:35.445 "name": "pt2", 00:38:35.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:35.445 "is_configured": true, 00:38:35.445 "data_offset": 256, 00:38:35.445 "data_size": 7936 00:38:35.445 } 00:38:35.445 ] 00:38:35.445 }' 00:38:35.445 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:35.445 11:50:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:36.012 11:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:36.271 [2024-07-13 11:50:10.891539] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:36.271 [2024-07-13 11:50:10.891683] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:36.271 [2024-07-13 11:50:10.891845] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:36.271 [2024-07-13 11:50:10.892029] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:36.271 [2024-07-13 11:50:10.892131] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:38:36.271 11:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:36.271 11:50:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:38:36.530 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:38:36.530 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:38:36.530 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:38:36.530 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:38:36.789 [2024-07-13 11:50:11.371609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:38:36.789 [2024-07-13 11:50:11.372252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:36.789 [2024-07-13 11:50:11.372513] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:38:36.789 [2024-07-13 11:50:11.372739] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:36.789 [2024-07-13 11:50:11.374582] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:36.789 [2024-07-13 11:50:11.374862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:38:36.789 [2024-07-13 11:50:11.375178] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:38:36.789 [2024-07-13 11:50:11.375383] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:38:36.789 [2024-07-13 11:50:11.375620] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:38:36.789 [2024-07-13 11:50:11.375729] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:36.789 [2024-07-13 11:50:11.375773] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:38:36.789 [2024-07-13 11:50:11.376066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:38:36.789 [2024-07-13 11:50:11.376316] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:38:36.789 pt1 00:38:36.789 [2024-07-13 11:50:11.376426] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:36.789 [2024-07-13 11:50:11.376591] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:38:36.789 [2024-07-13 11:50:11.376700] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:38:36.789 [2024-07-13 11:50:11.376739] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:38:36.789 [2024-07-13 11:50:11.376897] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:36.789 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:37.049 11:50:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:37.049 "name": "raid_bdev1", 00:38:37.049 "uuid": "9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d", 00:38:37.049 "strip_size_kb": 0, 00:38:37.049 "state": "online", 00:38:37.049 "raid_level": "raid1", 00:38:37.049 "superblock": true, 00:38:37.049 "num_base_bdevs": 2, 00:38:37.049 "num_base_bdevs_discovered": 1, 00:38:37.049 "num_base_bdevs_operational": 1, 00:38:37.049 "base_bdevs_list": [ 00:38:37.049 { 00:38:37.049 "name": null, 00:38:37.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:37.049 "is_configured": false, 00:38:37.049 "data_offset": 256, 00:38:37.049 "data_size": 7936 00:38:37.049 }, 00:38:37.049 { 00:38:37.049 "name": "pt2", 00:38:37.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:38:37.049 "is_configured": true, 00:38:37.049 "data_offset": 256, 00:38:37.049 "data_size": 7936 00:38:37.049 } 00:38:37.049 ] 00:38:37.049 }' 00:38:37.049 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:37.049 11:50:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:37.616 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:38:37.616 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:38:37.875 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:38:37.875 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:37.875 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:38:38.134 [2024-07-13 11:50:12.724094] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d '!=' 9e952ecf-7ddd-4992-b6b2-e1ef6f9e879d ']' 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 164439 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 164439 ']' 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 164439 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164439 00:38:38.134 killing process with pid 164439 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164439' 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 
-- # kill 164439 00:38:38.134 11:50:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 164439 00:38:38.134 [2024-07-13 11:50:12.755605] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:38:38.134 [2024-07-13 11:50:12.755662] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:38.134 [2024-07-13 11:50:12.755701] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:38.134 [2024-07-13 11:50:12.755711] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:38:38.392 [2024-07-13 11:50:12.891399] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:38:39.327 ************************************ 00:38:39.327 END TEST raid_superblock_test_md_interleaved 00:38:39.327 ************************************ 00:38:39.327 11:50:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:38:39.327 00:38:39.327 real 0m16.389s 00:38:39.327 user 0m30.212s 00:38:39.327 sys 0m1.845s 00:38:39.327 11:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:38:39.327 11:50:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:39.327 11:50:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:38:39.327 11:50:13 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:38:39.327 11:50:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:38:39.327 11:50:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:38:39.327 11:50:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:38:39.327 ************************************ 00:38:39.327 START TEST raid_rebuild_test_sb_md_interleaved 00:38:39.327 ************************************ 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:39.327 11:50:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:38:39.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=164992 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 164992 /var/tmp/spdk-raid.sock 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 164992 ']' 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:38:39.327 11:50:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:39.327 [2024-07-13 11:50:14.056528] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:38:39.327 [2024-07-13 11:50:14.056927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164992 ] 00:38:39.327 I/O size of 3145728 is greater than zero copy threshold (65536). 00:38:39.327 Zero copy mechanism will not be used. 
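The trace above starts bdevperf with an RPC socket (-r /var/tmp/spdk-raid.sock) and then waits for that socket before driving the test over rpc.py. A minimal sketch of that start-and-wait pattern, reusing the exact bdevperf command line from the trace; the polling loop is only an illustrative stand-in for the autotest waitforlisten helper:

    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # stand-in for waitforlisten: poll until the UNIX domain socket shows up
    until [ -S "$sock" ]; do sleep 0.1; done
    # every configuration step that follows is then issued as:
    #   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" <method> <args...>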
00:38:39.585 [2024-07-13 11:50:14.228604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.843 [2024-07-13 11:50:14.434052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.101 [2024-07-13 11:50:14.624492] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:38:40.359 11:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:38:40.359 11:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:38:40.359 11:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:40.359 11:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:38:40.618 BaseBdev1_malloc 00:38:40.618 11:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:38:40.876 [2024-07-13 11:50:15.485857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:38:40.876 [2024-07-13 11:50:15.486106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:40.876 [2024-07-13 11:50:15.486255] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:38:40.876 [2024-07-13 11:50:15.486369] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:40.876 [2024-07-13 11:50:15.488141] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:40.876 [2024-07-13 11:50:15.488292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:38:40.876 BaseBdev1 00:38:40.876 11:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:38:40.876 11:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:38:41.134 BaseBdev2_malloc 00:38:41.134 11:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:38:41.392 [2024-07-13 11:50:15.914542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:38:41.392 [2024-07-13 11:50:15.914788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:41.392 [2024-07-13 11:50:15.914979] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:38:41.392 [2024-07-13 11:50:15.915143] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:41.392 [2024-07-13 11:50:15.916915] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:41.392 [2024-07-13 11:50:15.917064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:38:41.392 BaseBdev2 00:38:41.392 11:50:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:38:41.392 spare_malloc 
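The bdev_malloc_create / bdev_passthru_create pairs traced above build the two base devices for the md-interleaved test. A condensed sketch of that sequence, with the RPC arguments copied from the trace; -m 32 -i asks for 32 bytes of interleaved metadata per 4096-byte block, which matches the blocklen of 4128 reported once the RAID is assembled below:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2; do
        # 32 MB malloc bdev, 4096-byte blocks with 32 bytes of interleaved metadata
        $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -i -b "BaseBdev${i}_malloc"
        # expose it through a passthru bdev named BaseBdev1 / BaseBdev2
        $rpc -s $sock bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev$i"
    done
    # the rebuild source is built the same way (spare_malloc -> spare_delay -> spare),
    # with a delay bdev inserted in between, as the next trace lines show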
00:38:41.650 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:38:41.650 spare_delay 00:38:41.650 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:41.909 [2024-07-13 11:50:16.526865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:41.909 [2024-07-13 11:50:16.527093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:41.909 [2024-07-13 11:50:16.527156] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:38:41.909 [2024-07-13 11:50:16.527434] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:41.909 [2024-07-13 11:50:16.529361] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:41.909 [2024-07-13 11:50:16.529523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:41.909 spare 00:38:41.909 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:38:42.168 [2024-07-13 11:50:16.722977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:38:42.168 [2024-07-13 11:50:16.724641] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:42.168 [2024-07-13 11:50:16.724973] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:38:42.168 [2024-07-13 11:50:16.725085] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:42.168 [2024-07-13 11:50:16.725221] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:38:42.168 [2024-07-13 11:50:16.725388] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:38:42.168 [2024-07-13 11:50:16.725478] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:38:42.168 [2024-07-13 11:50:16.725641] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:42.168 11:50:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:42.168 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:42.427 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:42.427 "name": "raid_bdev1", 00:38:42.427 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:42.427 "strip_size_kb": 0, 00:38:42.427 "state": "online", 00:38:42.427 "raid_level": "raid1", 00:38:42.427 "superblock": true, 00:38:42.427 "num_base_bdevs": 2, 00:38:42.427 "num_base_bdevs_discovered": 2, 00:38:42.427 "num_base_bdevs_operational": 2, 00:38:42.427 "base_bdevs_list": [ 00:38:42.427 { 00:38:42.427 "name": "BaseBdev1", 00:38:42.427 "uuid": "24d54143-03d2-5043-93a6-14737194d9e4", 00:38:42.427 "is_configured": true, 00:38:42.427 "data_offset": 256, 00:38:42.427 "data_size": 7936 00:38:42.427 }, 00:38:42.427 { 00:38:42.427 "name": "BaseBdev2", 00:38:42.427 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:42.427 "is_configured": true, 00:38:42.427 "data_offset": 256, 00:38:42.427 "data_size": 7936 00:38:42.427 } 00:38:42.427 ] 00:38:42.427 }' 00:38:42.427 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:42.427 11:50:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:42.994 11:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:38:42.994 11:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:38:43.253 [2024-07-13 11:50:17.763438] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:38:43.253 11:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:38:43.253 11:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:43.253 11:50:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:38:43.511 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:38:43.511 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:38:43.511 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:38:43.511 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:38:43.770 [2024-07-13 11:50:18.283281] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:43.770 "name": "raid_bdev1", 00:38:43.770 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:43.770 "strip_size_kb": 0, 00:38:43.770 "state": "online", 00:38:43.770 "raid_level": "raid1", 00:38:43.770 "superblock": true, 00:38:43.770 "num_base_bdevs": 2, 00:38:43.770 "num_base_bdevs_discovered": 1, 00:38:43.770 "num_base_bdevs_operational": 1, 00:38:43.770 "base_bdevs_list": [ 00:38:43.770 { 00:38:43.770 "name": null, 00:38:43.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:43.770 "is_configured": false, 00:38:43.770 "data_offset": 256, 00:38:43.770 "data_size": 7936 00:38:43.770 }, 00:38:43.770 { 00:38:43.770 "name": "BaseBdev2", 00:38:43.770 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:43.770 "is_configured": true, 00:38:43.770 "data_offset": 256, 00:38:43.770 "data_size": 7936 00:38:43.770 } 00:38:43.770 ] 00:38:43.770 }' 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:43.770 11:50:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:44.706 11:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:44.706 [2024-07-13 11:50:19.299520] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:44.706 [2024-07-13 11:50:19.312089] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:38:44.706 [2024-07-13 11:50:19.313836] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:44.706 11:50:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:38:45.641 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:45.641 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
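At this point the trace has detached BaseBdev1 with bdev_raid_remove_base_bdev and re-checks the array state, which is what verify_raid_bdev_state boils down to: fetch the JSON with bdev_raid_get_bdevs and compare fields with jq. A small sketch of that check, reusing the filters that appear in the trace; the expected values (state online, 1 of 2 base bdevs discovered, slot 0 unconfigured) are taken from the raid_bdev_info JSON shown above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r '.state' <<< "$info")" = online ]
    [ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 1 ]
    # the removed slot stays in base_bdevs_list with a null name and an all-zero uuid
    jq -r '.base_bdevs_list[0].is_configured' <<< "$info"    # prints: false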
00:38:45.641 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:45.641 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:45.641 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:45.641 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:45.641 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:45.900 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:45.900 "name": "raid_bdev1", 00:38:45.900 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:45.900 "strip_size_kb": 0, 00:38:45.900 "state": "online", 00:38:45.900 "raid_level": "raid1", 00:38:45.900 "superblock": true, 00:38:45.900 "num_base_bdevs": 2, 00:38:45.900 "num_base_bdevs_discovered": 2, 00:38:45.900 "num_base_bdevs_operational": 2, 00:38:45.900 "process": { 00:38:45.900 "type": "rebuild", 00:38:45.900 "target": "spare", 00:38:45.900 "progress": { 00:38:45.900 "blocks": 3072, 00:38:45.900 "percent": 38 00:38:45.900 } 00:38:45.900 }, 00:38:45.900 "base_bdevs_list": [ 00:38:45.900 { 00:38:45.900 "name": "spare", 00:38:45.900 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:45.900 "is_configured": true, 00:38:45.900 "data_offset": 256, 00:38:45.900 "data_size": 7936 00:38:45.900 }, 00:38:45.900 { 00:38:45.900 "name": "BaseBdev2", 00:38:45.900 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:45.900 "is_configured": true, 00:38:45.900 "data_offset": 256, 00:38:45.900 "data_size": 7936 00:38:45.900 } 00:38:45.900 ] 00:38:45.900 }' 00:38:45.900 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:45.900 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:45.900 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:46.159 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:46.159 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:46.417 [2024-07-13 11:50:20.923743] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:46.417 [2024-07-13 11:50:20.924300] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:46.417 [2024-07-13 11:50:20.924493] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:46.417 [2024-07-13 11:50:20.924542] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:46.417 [2024-07-13 11:50:20.924674] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:46.417 11:50:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:46.417 11:50:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:46.676 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:46.676 "name": "raid_bdev1", 00:38:46.676 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:46.676 "strip_size_kb": 0, 00:38:46.676 "state": "online", 00:38:46.676 "raid_level": "raid1", 00:38:46.676 "superblock": true, 00:38:46.676 "num_base_bdevs": 2, 00:38:46.676 "num_base_bdevs_discovered": 1, 00:38:46.676 "num_base_bdevs_operational": 1, 00:38:46.676 "base_bdevs_list": [ 00:38:46.676 { 00:38:46.676 "name": null, 00:38:46.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:46.676 "is_configured": false, 00:38:46.676 "data_offset": 256, 00:38:46.676 "data_size": 7936 00:38:46.676 }, 00:38:46.676 { 00:38:46.676 "name": "BaseBdev2", 00:38:46.676 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:46.676 "is_configured": true, 00:38:46.676 "data_offset": 256, 00:38:46.676 "data_size": 7936 00:38:46.676 } 00:38:46.676 ] 00:38:46.676 }' 00:38:46.676 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:46.676 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:47.243 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:47.243 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:47.243 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:47.243 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:47.243 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:47.243 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:47.243 11:50:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:47.502 11:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- 
# raid_bdev_info='{ 00:38:47.502 "name": "raid_bdev1", 00:38:47.502 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:47.502 "strip_size_kb": 0, 00:38:47.502 "state": "online", 00:38:47.502 "raid_level": "raid1", 00:38:47.502 "superblock": true, 00:38:47.502 "num_base_bdevs": 2, 00:38:47.502 "num_base_bdevs_discovered": 1, 00:38:47.502 "num_base_bdevs_operational": 1, 00:38:47.502 "base_bdevs_list": [ 00:38:47.502 { 00:38:47.502 "name": null, 00:38:47.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:47.502 "is_configured": false, 00:38:47.502 "data_offset": 256, 00:38:47.502 "data_size": 7936 00:38:47.502 }, 00:38:47.502 { 00:38:47.502 "name": "BaseBdev2", 00:38:47.502 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:47.502 "is_configured": true, 00:38:47.502 "data_offset": 256, 00:38:47.502 "data_size": 7936 00:38:47.502 } 00:38:47.502 ] 00:38:47.502 }' 00:38:47.502 11:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:47.502 11:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:47.502 11:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:47.502 11:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:47.502 11:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:47.760 [2024-07-13 11:50:22.407220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:47.760 [2024-07-13 11:50:22.417299] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:38:47.760 [2024-07-13 11:50:22.419069] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:47.760 11:50:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:38:48.693 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:48.693 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:48.693 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:48.693 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:48.693 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:48.693 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:48.693 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:48.954 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:48.954 "name": "raid_bdev1", 00:38:48.954 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:48.954 "strip_size_kb": 0, 00:38:48.954 "state": "online", 00:38:48.954 "raid_level": "raid1", 00:38:48.954 "superblock": true, 00:38:48.954 "num_base_bdevs": 2, 00:38:48.954 "num_base_bdevs_discovered": 2, 00:38:48.954 "num_base_bdevs_operational": 2, 00:38:48.954 
"process": { 00:38:48.954 "type": "rebuild", 00:38:48.954 "target": "spare", 00:38:48.954 "progress": { 00:38:48.954 "blocks": 2816, 00:38:48.954 "percent": 35 00:38:48.954 } 00:38:48.954 }, 00:38:48.954 "base_bdevs_list": [ 00:38:48.954 { 00:38:48.954 "name": "spare", 00:38:48.954 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:48.954 "is_configured": true, 00:38:48.954 "data_offset": 256, 00:38:48.954 "data_size": 7936 00:38:48.954 }, 00:38:48.954 { 00:38:48.954 "name": "BaseBdev2", 00:38:48.954 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:48.954 "is_configured": true, 00:38:48.954 "data_offset": 256, 00:38:48.954 "data_size": 7936 00:38:48.954 } 00:38:48.954 ] 00:38:48.954 }' 00:38:48.954 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:48.954 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:48.954 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:49.226 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:49.226 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:38:49.226 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:38:49.226 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:38:49.226 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:38:49.226 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:38:49.226 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:38:49.226 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1474 00:38:49.226 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:49.227 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:49.227 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:49.227 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:49.227 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:49.227 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:49.227 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:49.227 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:49.227 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:49.227 "name": "raid_bdev1", 00:38:49.227 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:49.227 "strip_size_kb": 0, 00:38:49.227 "state": "online", 00:38:49.227 "raid_level": "raid1", 00:38:49.227 "superblock": true, 00:38:49.227 "num_base_bdevs": 2, 00:38:49.227 
"num_base_bdevs_discovered": 2, 00:38:49.227 "num_base_bdevs_operational": 2, 00:38:49.227 "process": { 00:38:49.227 "type": "rebuild", 00:38:49.227 "target": "spare", 00:38:49.227 "progress": { 00:38:49.227 "blocks": 3584, 00:38:49.227 "percent": 45 00:38:49.227 } 00:38:49.227 }, 00:38:49.227 "base_bdevs_list": [ 00:38:49.227 { 00:38:49.227 "name": "spare", 00:38:49.227 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:49.227 "is_configured": true, 00:38:49.227 "data_offset": 256, 00:38:49.227 "data_size": 7936 00:38:49.227 }, 00:38:49.227 { 00:38:49.227 "name": "BaseBdev2", 00:38:49.227 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:49.227 "is_configured": true, 00:38:49.227 "data_offset": 256, 00:38:49.227 "data_size": 7936 00:38:49.227 } 00:38:49.227 ] 00:38:49.227 }' 00:38:49.227 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:49.500 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:49.500 11:50:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:49.500 11:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:49.500 11:50:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:50.435 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:50.435 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:50.435 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:50.435 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:50.435 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:50.435 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:50.435 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:50.435 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:50.694 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:50.694 "name": "raid_bdev1", 00:38:50.694 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:50.694 "strip_size_kb": 0, 00:38:50.694 "state": "online", 00:38:50.694 "raid_level": "raid1", 00:38:50.694 "superblock": true, 00:38:50.694 "num_base_bdevs": 2, 00:38:50.694 "num_base_bdevs_discovered": 2, 00:38:50.694 "num_base_bdevs_operational": 2, 00:38:50.694 "process": { 00:38:50.694 "type": "rebuild", 00:38:50.694 "target": "spare", 00:38:50.694 "progress": { 00:38:50.694 "blocks": 6912, 00:38:50.694 "percent": 87 00:38:50.694 } 00:38:50.694 }, 00:38:50.694 "base_bdevs_list": [ 00:38:50.694 { 00:38:50.694 "name": "spare", 00:38:50.694 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:50.694 "is_configured": true, 00:38:50.694 "data_offset": 256, 00:38:50.694 "data_size": 7936 00:38:50.694 }, 00:38:50.694 { 00:38:50.694 "name": "BaseBdev2", 00:38:50.694 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 
00:38:50.694 "is_configured": true, 00:38:50.694 "data_offset": 256, 00:38:50.694 "data_size": 7936 00:38:50.694 } 00:38:50.694 ] 00:38:50.694 }' 00:38:50.694 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:50.694 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:50.694 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:50.694 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:50.694 11:50:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:38:50.952 [2024-07-13 11:50:25.537796] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:38:50.952 [2024-07-13 11:50:25.538014] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:38:50.952 [2024-07-13 11:50:25.538267] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:51.887 "name": "raid_bdev1", 00:38:51.887 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:51.887 "strip_size_kb": 0, 00:38:51.887 "state": "online", 00:38:51.887 "raid_level": "raid1", 00:38:51.887 "superblock": true, 00:38:51.887 "num_base_bdevs": 2, 00:38:51.887 "num_base_bdevs_discovered": 2, 00:38:51.887 "num_base_bdevs_operational": 2, 00:38:51.887 "base_bdevs_list": [ 00:38:51.887 { 00:38:51.887 "name": "spare", 00:38:51.887 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:51.887 "is_configured": true, 00:38:51.887 "data_offset": 256, 00:38:51.887 "data_size": 7936 00:38:51.887 }, 00:38:51.887 { 00:38:51.887 "name": "BaseBdev2", 00:38:51.887 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:51.887 "is_configured": true, 00:38:51.887 "data_offset": 256, 00:38:51.887 "data_size": 7936 00:38:51.887 } 00:38:51.887 ] 00:38:51.887 }' 00:38:51.887 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:52.145 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:52.403 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:52.403 "name": "raid_bdev1", 00:38:52.403 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:52.403 "strip_size_kb": 0, 00:38:52.403 "state": "online", 00:38:52.403 "raid_level": "raid1", 00:38:52.403 "superblock": true, 00:38:52.403 "num_base_bdevs": 2, 00:38:52.403 "num_base_bdevs_discovered": 2, 00:38:52.403 "num_base_bdevs_operational": 2, 00:38:52.403 "base_bdevs_list": [ 00:38:52.403 { 00:38:52.403 "name": "spare", 00:38:52.403 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:52.403 "is_configured": true, 00:38:52.403 "data_offset": 256, 00:38:52.403 "data_size": 7936 00:38:52.403 }, 00:38:52.403 { 00:38:52.403 "name": "BaseBdev2", 00:38:52.403 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:52.403 "is_configured": true, 00:38:52.403 "data_offset": 256, 00:38:52.403 "data_size": 7936 00:38:52.403 } 00:38:52.403 ] 00:38:52.403 }' 00:38:52.403 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:52.403 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:52.403 11:50:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:52.403 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:52.403 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:52.403 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:52.403 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:52.403 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:52.403 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:52.403 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:52.403 11:50:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:52.403 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:52.403 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:52.404 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:52.404 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:52.404 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:52.662 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:52.662 "name": "raid_bdev1", 00:38:52.662 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:52.662 "strip_size_kb": 0, 00:38:52.662 "state": "online", 00:38:52.662 "raid_level": "raid1", 00:38:52.662 "superblock": true, 00:38:52.662 "num_base_bdevs": 2, 00:38:52.662 "num_base_bdevs_discovered": 2, 00:38:52.662 "num_base_bdevs_operational": 2, 00:38:52.662 "base_bdevs_list": [ 00:38:52.662 { 00:38:52.662 "name": "spare", 00:38:52.662 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:52.662 "is_configured": true, 00:38:52.662 "data_offset": 256, 00:38:52.662 "data_size": 7936 00:38:52.662 }, 00:38:52.662 { 00:38:52.662 "name": "BaseBdev2", 00:38:52.662 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:52.662 "is_configured": true, 00:38:52.662 "data_offset": 256, 00:38:52.662 "data_size": 7936 00:38:52.662 } 00:38:52.662 ] 00:38:52.662 }' 00:38:52.662 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:52.662 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:53.227 11:50:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:38:53.485 [2024-07-13 11:50:28.144387] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:38:53.485 [2024-07-13 11:50:28.144532] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:38:53.485 [2024-07-13 11:50:28.144713] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:38:53.485 [2024-07-13 11:50:28.144891] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:38:53.485 [2024-07-13 11:50:28.144986] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:38:53.485 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:53.485 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:38:53.742 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:38:53.742 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:38:53.742 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:38:53.742 11:50:28 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:54.001 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:54.259 [2024-07-13 11:50:28.806390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:54.259 [2024-07-13 11:50:28.806580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:54.259 [2024-07-13 11:50:28.806663] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:38:54.259 [2024-07-13 11:50:28.806825] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:54.259 [2024-07-13 11:50:28.808722] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:54.259 [2024-07-13 11:50:28.808881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:54.259 [2024-07-13 11:50:28.809051] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:54.259 [2024-07-13 11:50:28.809222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:54.259 [2024-07-13 11:50:28.809479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:38:54.259 spare 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:54.259 11:50:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:54.259 [2024-07-13 11:50:28.909709] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:38:54.259 [2024-07-13 11:50:28.909838] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:38:54.259 [2024-07-13 11:50:28.909996] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:38:54.259 [2024-07-13 11:50:28.910313] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:38:54.259 [2024-07-13 11:50:28.910411] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:38:54.259 [2024-07-13 11:50:28.910561] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:54.517 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:54.517 "name": "raid_bdev1", 00:38:54.517 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:54.517 "strip_size_kb": 0, 00:38:54.517 "state": "online", 00:38:54.517 "raid_level": "raid1", 00:38:54.517 "superblock": true, 00:38:54.517 "num_base_bdevs": 2, 00:38:54.517 "num_base_bdevs_discovered": 2, 00:38:54.517 "num_base_bdevs_operational": 2, 00:38:54.517 "base_bdevs_list": [ 00:38:54.517 { 00:38:54.517 "name": "spare", 00:38:54.517 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:54.517 "is_configured": true, 00:38:54.517 "data_offset": 256, 00:38:54.517 "data_size": 7936 00:38:54.517 }, 00:38:54.517 { 00:38:54.517 "name": "BaseBdev2", 00:38:54.517 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:54.517 "is_configured": true, 00:38:54.517 "data_offset": 256, 00:38:54.517 "data_size": 7936 00:38:54.517 } 00:38:54.517 ] 00:38:54.517 }' 00:38:54.517 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:54.517 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:55.083 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:38:55.083 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:55.083 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:38:55.083 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:38:55.083 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:55.083 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:55.083 11:50:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:55.340 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:55.340 "name": "raid_bdev1", 00:38:55.340 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:55.340 "strip_size_kb": 0, 00:38:55.340 "state": "online", 00:38:55.340 "raid_level": "raid1", 00:38:55.340 "superblock": true, 00:38:55.340 "num_base_bdevs": 2, 00:38:55.340 "num_base_bdevs_discovered": 2, 00:38:55.340 "num_base_bdevs_operational": 2, 00:38:55.340 "base_bdevs_list": [ 00:38:55.340 { 00:38:55.340 "name": "spare", 00:38:55.340 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:55.340 "is_configured": true, 00:38:55.340 "data_offset": 256, 00:38:55.340 "data_size": 7936 00:38:55.340 }, 00:38:55.340 { 00:38:55.340 "name": "BaseBdev2", 00:38:55.340 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:55.340 "is_configured": true, 00:38:55.340 "data_offset": 256, 00:38:55.340 "data_size": 7936 00:38:55.340 } 00:38:55.340 ] 00:38:55.340 }' 00:38:55.340 11:50:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:55.340 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:38:55.340 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:55.598 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:38:55.598 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:55.598 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:38:55.598 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:38:55.598 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:38:55.856 [2024-07-13 11:50:30.572385] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:55.856 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:56.114 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:56.114 "name": "raid_bdev1", 00:38:56.114 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:56.114 "strip_size_kb": 0, 00:38:56.114 "state": "online", 00:38:56.114 "raid_level": "raid1", 00:38:56.114 "superblock": true, 00:38:56.114 "num_base_bdevs": 2, 00:38:56.114 "num_base_bdevs_discovered": 1, 00:38:56.114 "num_base_bdevs_operational": 1, 00:38:56.114 "base_bdevs_list": [ 00:38:56.114 { 00:38:56.114 "name": null, 00:38:56.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:56.114 "is_configured": false, 00:38:56.114 "data_offset": 256, 00:38:56.114 "data_size": 7936 00:38:56.114 }, 
00:38:56.114 { 00:38:56.114 "name": "BaseBdev2", 00:38:56.114 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:56.114 "is_configured": true, 00:38:56.114 "data_offset": 256, 00:38:56.114 "data_size": 7936 00:38:56.114 } 00:38:56.114 ] 00:38:56.114 }' 00:38:56.114 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:56.114 11:50:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:57.050 11:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:38:57.050 [2024-07-13 11:50:31.684558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:57.050 [2024-07-13 11:50:31.684824] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:57.050 [2024-07-13 11:50:31.684936] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:38:57.050 [2024-07-13 11:50:31.685036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:57.050 [2024-07-13 11:50:31.697267] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:38:57.050 [2024-07-13 11:50:31.699265] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:57.050 11:50:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:38:57.986 11:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:38:57.986 11:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:38:57.986 11:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:38:57.986 11:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:38:57.986 11:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:38:57.986 11:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:57.986 11:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:58.244 11:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:38:58.244 "name": "raid_bdev1", 00:38:58.244 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:58.244 "strip_size_kb": 0, 00:38:58.244 "state": "online", 00:38:58.244 "raid_level": "raid1", 00:38:58.244 "superblock": true, 00:38:58.244 "num_base_bdevs": 2, 00:38:58.244 "num_base_bdevs_discovered": 2, 00:38:58.244 "num_base_bdevs_operational": 2, 00:38:58.244 "process": { 00:38:58.244 "type": "rebuild", 00:38:58.244 "target": "spare", 00:38:58.244 "progress": { 00:38:58.244 "blocks": 3072, 00:38:58.244 "percent": 38 00:38:58.244 } 00:38:58.244 }, 00:38:58.244 "base_bdevs_list": [ 00:38:58.244 { 00:38:58.244 "name": "spare", 00:38:58.244 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:38:58.244 "is_configured": true, 00:38:58.244 "data_offset": 256, 00:38:58.244 "data_size": 7936 00:38:58.244 }, 00:38:58.244 { 
00:38:58.244 "name": "BaseBdev2", 00:38:58.244 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:58.244 "is_configured": true, 00:38:58.244 "data_offset": 256, 00:38:58.244 "data_size": 7936 00:38:58.244 } 00:38:58.244 ] 00:38:58.244 }' 00:38:58.244 11:50:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:38:58.502 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:38:58.502 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:38:58.502 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:38:58.502 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:38:58.759 [2024-07-13 11:50:33.321042] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:58.759 [2024-07-13 11:50:33.410088] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:38:58.759 [2024-07-13 11:50:33.410279] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:38:58.759 [2024-07-13 11:50:33.410328] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:38:58.759 [2024-07-13 11:50:33.410466] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:38:58.759 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:38:58.759 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:38:58.759 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:38:58.759 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:38:58.760 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:38:58.760 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:38:58.760 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:38:58.760 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:38:58.760 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:38:58.760 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:38:58.760 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:38:58.760 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:38:59.018 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:38:59.018 "name": "raid_bdev1", 00:38:59.018 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:38:59.018 "strip_size_kb": 0, 00:38:59.018 "state": "online", 00:38:59.018 "raid_level": "raid1", 00:38:59.018 "superblock": true, 00:38:59.018 "num_base_bdevs": 2, 
00:38:59.018 "num_base_bdevs_discovered": 1, 00:38:59.018 "num_base_bdevs_operational": 1, 00:38:59.018 "base_bdevs_list": [ 00:38:59.018 { 00:38:59.018 "name": null, 00:38:59.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:38:59.018 "is_configured": false, 00:38:59.018 "data_offset": 256, 00:38:59.018 "data_size": 7936 00:38:59.018 }, 00:38:59.018 { 00:38:59.018 "name": "BaseBdev2", 00:38:59.018 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:38:59.018 "is_configured": true, 00:38:59.018 "data_offset": 256, 00:38:59.018 "data_size": 7936 00:38:59.018 } 00:38:59.018 ] 00:38:59.018 }' 00:38:59.018 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:38:59.018 11:50:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:38:59.954 11:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:38:59.954 [2024-07-13 11:50:34.584610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:38:59.954 [2024-07-13 11:50:34.584804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:38:59.954 [2024-07-13 11:50:34.584874] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:38:59.954 [2024-07-13 11:50:34.585182] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:38:59.954 [2024-07-13 11:50:34.585476] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:38:59.954 [2024-07-13 11:50:34.585648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:38:59.954 [2024-07-13 11:50:34.585845] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:38:59.954 [2024-07-13 11:50:34.585942] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:38:59.954 [2024-07-13 11:50:34.586028] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
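The trace above exercises the rebuild path: the delayed passthru bdev "spare" is re-created, raid examine re-adds it to raid_bdev1, and after a short sleep the test checks that bdev_raid_get_bdevs reports a rebuild process targeting "spare". A minimal sketch of that check, reconstructed only from the RPC and jq calls visible in this trace (variable names here are illustrative, not the script's own):

# Sketch: re-add the removed base bdev and confirm the rebuild process it triggers.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Re-create the delay passthru; examine then re-adds "spare" to raid_bdev1
$rpc bdev_passthru_create -b spare_delay -p spare
sleep 1

# Query raid_bdev1 and inspect the process type/target it reports
tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r '.process.type // "none"' <<< "$tmp") == rebuild ]]
[[ $(jq -r '.process.target // "none"' <<< "$tmp") == spare ]]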
00:38:59.954 [2024-07-13 11:50:34.586118] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:38:59.954 [2024-07-13 11:50:34.597372] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:38:59.954 spare 00:38:59.954 [2024-07-13 11:50:34.599256] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:38:59.954 11:50:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:39:00.891 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:39:00.891 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:00.891 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:39:00.891 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:39:00.891 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:00.891 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:00.891 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.150 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:01.150 "name": "raid_bdev1", 00:39:01.150 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:39:01.150 "strip_size_kb": 0, 00:39:01.150 "state": "online", 00:39:01.150 "raid_level": "raid1", 00:39:01.150 "superblock": true, 00:39:01.150 "num_base_bdevs": 2, 00:39:01.150 "num_base_bdevs_discovered": 2, 00:39:01.150 "num_base_bdevs_operational": 2, 00:39:01.150 "process": { 00:39:01.150 "type": "rebuild", 00:39:01.150 "target": "spare", 00:39:01.150 "progress": { 00:39:01.150 "blocks": 3072, 00:39:01.150 "percent": 38 00:39:01.150 } 00:39:01.150 }, 00:39:01.150 "base_bdevs_list": [ 00:39:01.150 { 00:39:01.150 "name": "spare", 00:39:01.150 "uuid": "a996dcb7-1d82-51c4-abcc-5675eb591a03", 00:39:01.150 "is_configured": true, 00:39:01.150 "data_offset": 256, 00:39:01.150 "data_size": 7936 00:39:01.150 }, 00:39:01.150 { 00:39:01.150 "name": "BaseBdev2", 00:39:01.150 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:39:01.150 "is_configured": true, 00:39:01.150 "data_offset": 256, 00:39:01.150 "data_size": 7936 00:39:01.150 } 00:39:01.150 ] 00:39:01.150 }' 00:39:01.150 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:01.150 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:39:01.150 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:01.410 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:39:01.410 11:50:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:39:01.410 [2024-07-13 11:50:36.121355] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:01.669 [2024-07-13 11:50:36.209344] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:39:01.669 [2024-07-13 11:50:36.209534] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:39:01.669 [2024-07-13 11:50:36.209582] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:39:01.669 [2024-07-13 11:50:36.209715] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:01.669 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:01.928 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:01.928 "name": "raid_bdev1", 00:39:01.928 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:39:01.928 "strip_size_kb": 0, 00:39:01.928 "state": "online", 00:39:01.928 "raid_level": "raid1", 00:39:01.928 "superblock": true, 00:39:01.928 "num_base_bdevs": 2, 00:39:01.928 "num_base_bdevs_discovered": 1, 00:39:01.928 "num_base_bdevs_operational": 1, 00:39:01.928 "base_bdevs_list": [ 00:39:01.928 { 00:39:01.928 "name": null, 00:39:01.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:01.928 "is_configured": false, 00:39:01.928 "data_offset": 256, 00:39:01.928 "data_size": 7936 00:39:01.928 }, 00:39:01.928 { 00:39:01.928 "name": "BaseBdev2", 00:39:01.928 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:39:01.928 "is_configured": true, 00:39:01.928 "data_offset": 256, 00:39:01.928 "data_size": 7936 00:39:01.928 } 00:39:01.928 ] 00:39:01.928 }' 00:39:01.928 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:01.928 11:50:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:02.495 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:02.495 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:39:02.495 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:02.495 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:02.495 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:02.495 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:02.495 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:02.754 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:02.754 "name": "raid_bdev1", 00:39:02.754 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:39:02.754 "strip_size_kb": 0, 00:39:02.754 "state": "online", 00:39:02.754 "raid_level": "raid1", 00:39:02.754 "superblock": true, 00:39:02.754 "num_base_bdevs": 2, 00:39:02.754 "num_base_bdevs_discovered": 1, 00:39:02.754 "num_base_bdevs_operational": 1, 00:39:02.754 "base_bdevs_list": [ 00:39:02.754 { 00:39:02.754 "name": null, 00:39:02.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:02.754 "is_configured": false, 00:39:02.754 "data_offset": 256, 00:39:02.754 "data_size": 7936 00:39:02.754 }, 00:39:02.754 { 00:39:02.754 "name": "BaseBdev2", 00:39:02.754 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:39:02.754 "is_configured": true, 00:39:02.754 "data_offset": 256, 00:39:02.754 "data_size": 7936 00:39:02.754 } 00:39:02.754 ] 00:39:02.754 }' 00:39:02.754 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:02.754 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:02.754 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:02.754 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:02.754 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:39:03.322 11:50:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:39:03.322 [2024-07-13 11:50:38.008386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:39:03.322 [2024-07-13 11:50:38.008572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:39:03.322 [2024-07-13 11:50:38.008738] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:39:03.322 [2024-07-13 11:50:38.008860] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:39:03.322 [2024-07-13 11:50:38.009087] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:39:03.322 [2024-07-13 11:50:38.009231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:39:03.322 [2024-07-13 11:50:38.009407] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:39:03.322 [2024-07-13 11:50:38.009501] bdev_raid.c:3562:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:03.322 [2024-07-13 11:50:38.009594] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:03.322 BaseBdev1 00:39:03.322 11:50:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:04.698 "name": "raid_bdev1", 00:39:04.698 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:39:04.698 "strip_size_kb": 0, 00:39:04.698 "state": "online", 00:39:04.698 "raid_level": "raid1", 00:39:04.698 "superblock": true, 00:39:04.698 "num_base_bdevs": 2, 00:39:04.698 "num_base_bdevs_discovered": 1, 00:39:04.698 "num_base_bdevs_operational": 1, 00:39:04.698 "base_bdevs_list": [ 00:39:04.698 { 00:39:04.698 "name": null, 00:39:04.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:04.698 "is_configured": false, 00:39:04.698 "data_offset": 256, 00:39:04.698 "data_size": 7936 00:39:04.698 }, 00:39:04.698 { 00:39:04.698 "name": "BaseBdev2", 00:39:04.698 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:39:04.698 "is_configured": true, 00:39:04.698 "data_offset": 256, 00:39:04.698 "data_size": 7936 00:39:04.698 } 00:39:04.698 ] 00:39:04.698 }' 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:04.698 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:05.266 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:05.266 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:05.266 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:39:05.266 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:05.266 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:05.266 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:05.266 11:50:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:05.524 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:05.524 "name": "raid_bdev1", 00:39:05.524 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:39:05.524 "strip_size_kb": 0, 00:39:05.524 "state": "online", 00:39:05.524 "raid_level": "raid1", 00:39:05.524 "superblock": true, 00:39:05.524 "num_base_bdevs": 2, 00:39:05.524 "num_base_bdevs_discovered": 1, 00:39:05.524 "num_base_bdevs_operational": 1, 00:39:05.524 "base_bdevs_list": [ 00:39:05.524 { 00:39:05.524 "name": null, 00:39:05.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:05.524 "is_configured": false, 00:39:05.524 "data_offset": 256, 00:39:05.524 "data_size": 7936 00:39:05.524 }, 00:39:05.524 { 00:39:05.524 "name": "BaseBdev2", 00:39:05.524 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:39:05.524 "is_configured": true, 00:39:05.524 "data_offset": 256, 00:39:05.524 "data_size": 7936 00:39:05.524 } 00:39:05.524 ] 00:39:05.524 }' 00:39:05.524 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:05.524 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:05.524 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:05.782 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:05.783 11:50:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:39:05.783 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:39:06.041 [2024-07-13 11:50:40.540584] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:39:06.041 [2024-07-13 11:50:40.540877] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:39:06.041 [2024-07-13 11:50:40.540992] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:39:06.041 request: 00:39:06.041 { 00:39:06.041 "base_bdev": "BaseBdev1", 00:39:06.041 "raid_bdev": "raid_bdev1", 00:39:06.041 "method": "bdev_raid_add_base_bdev", 00:39:06.041 "req_id": 1 00:39:06.041 } 00:39:06.041 Got JSON-RPC error response 00:39:06.041 response: 00:39:06.041 { 00:39:06.041 "code": -22, 00:39:06.041 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:39:06.041 } 00:39:06.041 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:39:06.041 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:06.041 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:06.041 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:06.041 11:50:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:06.977 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:07.234 
11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:39:07.234 "name": "raid_bdev1", 00:39:07.234 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:39:07.234 "strip_size_kb": 0, 00:39:07.234 "state": "online", 00:39:07.234 "raid_level": "raid1", 00:39:07.234 "superblock": true, 00:39:07.234 "num_base_bdevs": 2, 00:39:07.234 "num_base_bdevs_discovered": 1, 00:39:07.234 "num_base_bdevs_operational": 1, 00:39:07.234 "base_bdevs_list": [ 00:39:07.234 { 00:39:07.234 "name": null, 00:39:07.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:07.234 "is_configured": false, 00:39:07.234 "data_offset": 256, 00:39:07.234 "data_size": 7936 00:39:07.234 }, 00:39:07.234 { 00:39:07.234 "name": "BaseBdev2", 00:39:07.234 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:39:07.234 "is_configured": true, 00:39:07.234 "data_offset": 256, 00:39:07.234 "data_size": 7936 00:39:07.234 } 00:39:07.234 ] 00:39:07.234 }' 00:39:07.234 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:39:07.234 11:50:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:07.799 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:39:07.799 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:39:07.799 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:39:07.799 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:39:07.799 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:39:07.799 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:39:07.799 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:39:08.058 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:39:08.058 "name": "raid_bdev1", 00:39:08.058 "uuid": "3f47949c-cd99-49ff-a20e-45dcdcdf2c88", 00:39:08.058 "strip_size_kb": 0, 00:39:08.058 "state": "online", 00:39:08.058 "raid_level": "raid1", 00:39:08.058 "superblock": true, 00:39:08.058 "num_base_bdevs": 2, 00:39:08.058 "num_base_bdevs_discovered": 1, 00:39:08.058 "num_base_bdevs_operational": 1, 00:39:08.058 "base_bdevs_list": [ 00:39:08.058 { 00:39:08.058 "name": null, 00:39:08.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:39:08.058 "is_configured": false, 00:39:08.058 "data_offset": 256, 00:39:08.058 "data_size": 7936 00:39:08.058 }, 00:39:08.058 { 00:39:08.058 "name": "BaseBdev2", 00:39:08.058 "uuid": "555f97b0-a40a-5a96-9e0e-d46e6746e0f4", 00:39:08.058 "is_configured": true, 00:39:08.058 "data_offset": 256, 00:39:08.058 "data_size": 7936 00:39:08.058 } 00:39:08.058 ] 00:39:08.058 }' 00:39:08.058 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:39:08.058 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:39:08.058 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:39:08.317 11:50:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 164992 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 164992 ']' 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 164992 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164992 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164992' 00:39:08.317 killing process with pid 164992 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 164992 00:39:08.317 Received shutdown signal, test time was about 60.000000 seconds 00:39:08.317 00:39:08.317 Latency(us) 00:39:08.317 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.317 =================================================================================================================== 00:39:08.317 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:08.317 11:50:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 164992 00:39:08.317 [2024-07-13 11:50:42.891794] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:39:08.317 [2024-07-13 11:50:42.892106] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:39:08.317 [2024-07-13 11:50:42.892253] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:39:08.317 [2024-07-13 11:50:42.892340] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:39:08.575 [2024-07-13 11:50:43.099113] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:39:09.510 11:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:39:09.510 00:39:09.510 real 0m30.130s 00:39:09.510 user 0m49.215s 00:39:09.510 sys 0m2.434s 00:39:09.510 11:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:09.510 11:50:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:39:09.510 ************************************ 00:39:09.510 END TEST raid_rebuild_test_sb_md_interleaved 00:39:09.510 ************************************ 00:39:09.510 11:50:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:39:09.510 11:50:44 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:39:09.510 11:50:44 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:39:09.510 11:50:44 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 164992 ']' 00:39:09.510 11:50:44 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 164992 00:39:09.510 11:50:44 
bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:39:09.510 ************************************ 00:39:09.510 END TEST bdev_raid 00:39:09.510 ************************************ 00:39:09.510 00:39:09.510 real 24m25.052s 00:39:09.510 user 42m10.265s 00:39:09.510 sys 2m41.766s 00:39:09.510 11:50:44 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:09.510 11:50:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:39:09.510 11:50:44 -- common/autotest_common.sh@1142 -- # return 0 00:39:09.510 11:50:44 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:39:09.510 11:50:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:09.510 11:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:09.510 11:50:44 -- common/autotest_common.sh@10 -- # set +x 00:39:09.510 ************************************ 00:39:09.510 START TEST bdevperf_config 00:39:09.510 ************************************ 00:39:09.510 11:50:44 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:39:09.768 * Looking for test storage... 00:39:09.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:09.768 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:39:09.768 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@8 -- # 
local job_section=job1 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:09.768 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:09.768 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:09.768 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:09.768 11:50:44 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:13.954 11:50:48 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-13 11:50:44.416280] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:13.954 [2024-07-13 11:50:44.416490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165859 ] 00:39:13.954 Using job config with 4 jobs 00:39:13.954 [2024-07-13 11:50:44.597099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.954 [2024-07-13 11:50:44.809028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.954 cpumask for '\''job0'\'' is too big 00:39:13.954 cpumask for '\''job1'\'' is too big 00:39:13.954 cpumask for '\''job2'\'' is too big 00:39:13.954 cpumask for '\''job3'\'' is too big 00:39:13.954 Running I/O for 2 seconds... 
00:39:13.954 00:39:13.954 Latency(us) 00:39:13.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.01 31966.78 31.22 0.00 0.00 8006.71 1504.35 12451.84 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.02 31976.19 31.23 0.00 0.00 7989.80 1422.43 10962.39 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.02 31954.34 31.21 0.00 0.00 7982.06 1474.56 9472.93 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.02 31932.84 31.18 0.00 0.00 7973.74 1444.77 9055.88 00:39:13.954 =================================================================================================================== 00:39:13.954 Total : 127830.14 124.83 0.00 0.00 7988.06 1422.43 12451.84' 00:39:13.954 11:50:48 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-13 11:50:44.416280] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:13.954 [2024-07-13 11:50:44.416490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165859 ] 00:39:13.954 Using job config with 4 jobs 00:39:13.954 [2024-07-13 11:50:44.597099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.954 [2024-07-13 11:50:44.809028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.954 cpumask for '\''job0'\'' is too big 00:39:13.954 cpumask for '\''job1'\'' is too big 00:39:13.954 cpumask for '\''job2'\'' is too big 00:39:13.954 cpumask for '\''job3'\'' is too big 00:39:13.954 Running I/O for 2 seconds... 00:39:13.954 00:39:13.954 Latency(us) 00:39:13.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.01 31966.78 31.22 0.00 0.00 8006.71 1504.35 12451.84 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.02 31976.19 31.23 0.00 0.00 7989.80 1422.43 10962.39 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.02 31954.34 31.21 0.00 0.00 7982.06 1474.56 9472.93 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.02 31932.84 31.18 0.00 0.00 7973.74 1444.77 9055.88 00:39:13.954 =================================================================================================================== 00:39:13.954 Total : 127830.14 124.83 0.00 0.00 7988.06 1422.43 12451.84' 00:39:13.954 11:50:48 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-13 11:50:44.416280] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:39:13.954 [2024-07-13 11:50:44.416490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165859 ] 00:39:13.954 Using job config with 4 jobs 00:39:13.954 [2024-07-13 11:50:44.597099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.954 [2024-07-13 11:50:44.809028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.954 cpumask for '\''job0'\'' is too big 00:39:13.954 cpumask for '\''job1'\'' is too big 00:39:13.954 cpumask for '\''job2'\'' is too big 00:39:13.954 cpumask for '\''job3'\'' is too big 00:39:13.954 Running I/O for 2 seconds... 00:39:13.954 00:39:13.954 Latency(us) 00:39:13.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.01 31966.78 31.22 0.00 0.00 8006.71 1504.35 12451.84 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.02 31976.19 31.23 0.00 0.00 7989.80 1422.43 10962.39 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.02 31954.34 31.21 0.00 0.00 7982.06 1474.56 9472.93 00:39:13.954 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:13.954 Malloc0 : 2.02 31932.84 31.18 0.00 0.00 7973.74 1444.77 9055.88 00:39:13.954 =================================================================================================================== 00:39:13.954 Total : 127830.14 124.83 0.00 0.00 7988.06 1422.43 12451.84' 00:39:13.954 11:50:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:39:13.954 11:50:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:39:13.954 11:50:48 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:39:13.954 11:50:48 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:13.954 [2024-07-13 11:50:48.584009] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:13.954 [2024-07-13 11:50:48.584958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165933 ] 00:39:14.212 [2024-07-13 11:50:48.753857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:14.470 [2024-07-13 11:50:48.983436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.728 cpumask for 'job0' is too big 00:39:14.728 cpumask for 'job1' is too big 00:39:14.728 cpumask for 'job2' is too big 00:39:14.728 cpumask for 'job3' is too big 00:39:18.013 11:50:52 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:39:18.013 Running I/O for 2 seconds... 
00:39:18.013 00:39:18.013 Latency(us) 00:39:18.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:18.013 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:18.013 Malloc0 : 2.01 31858.83 31.11 0.00 0.00 8022.65 1563.93 12511.42 00:39:18.013 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:18.013 Malloc0 : 2.02 31869.44 31.12 0.00 0.00 8004.53 1474.56 11021.96 00:39:18.013 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:18.013 Malloc0 : 2.02 31848.45 31.10 0.00 0.00 7996.82 1504.35 9532.51 00:39:18.013 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:39:18.013 Malloc0 : 2.02 31827.75 31.08 0.00 0.00 7989.28 1459.67 9055.88 00:39:18.013 =================================================================================================================== 00:39:18.013 Total : 127404.48 124.42 0.00 0.00 8003.30 1459.67 12511.42' 00:39:18.013 11:50:52 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:39:18.013 11:50:52 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:18.013 00:39:18.013 11:50:52 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:39:18.013 11:50:52 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:39:18.013 11:50:52 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:39:18.013 11:50:52 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:18.013 11:50:52 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:39:18.013 11:50:52 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:18.014 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:18.014 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:18.014 11:50:52 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
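Each bdevperf_config case follows the same pattern: create_job appends job sections to test.conf, bdevperf runs against that file, and get_num_jobs counts the jobs reported in the captured output. A compact sketch of that capture-and-verify step, put together from the bdevperf command line and grep expressions shown in this trace (the helper body is a reconstruction, not the actual common.sh):

# Reconstructed for illustration; mirrors the grep pipeline visible in the trace.
get_num_jobs() {
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
testdir=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf

# Run with the generated job file and check that all three write jobs started
bdevperf_output=$("$bdevperf" -t 2 --json "$testdir/conf.json" -j "$testdir/test.conf")
[[ $(get_num_jobs "$bdevperf_output") == 3 ]]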
00:39:22.206 11:50:56 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-13 11:50:52.744976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:22.206 [2024-07-13 11:50:52.745119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165984 ] 00:39:22.206 Using job config with 3 jobs 00:39:22.206 [2024-07-13 11:50:52.895395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.206 [2024-07-13 11:50:53.092678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.206 cpumask for '\''job0'\'' is too big 00:39:22.206 cpumask for '\''job1'\'' is too big 00:39:22.206 cpumask for '\''job2'\'' is too big 00:39:22.206 Running I/O for 2 seconds... 00:39:22.206 00:39:22.206 Latency(us) 00:39:22.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.206 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:22.206 Malloc0 : 2.01 43143.15 42.13 0.00 0.00 5928.11 1467.11 8817.57 00:39:22.206 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:22.206 Malloc0 : 2.01 43114.28 42.10 0.00 0.00 5921.53 1437.32 8340.95 00:39:22.206 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:22.206 Malloc0 : 2.01 43086.17 42.08 0.00 0.00 5915.54 1474.56 7983.48 00:39:22.206 =================================================================================================================== 00:39:22.206 Total : 129343.60 126.31 0.00 0.00 5921.73 1437.32 8817.57' 00:39:22.206 11:50:56 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-13 11:50:52.744976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:22.206 [2024-07-13 11:50:52.745119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165984 ] 00:39:22.206 Using job config with 3 jobs 00:39:22.206 [2024-07-13 11:50:52.895395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.206 [2024-07-13 11:50:53.092678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.206 cpumask for '\''job0'\'' is too big 00:39:22.206 cpumask for '\''job1'\'' is too big 00:39:22.206 cpumask for '\''job2'\'' is too big 00:39:22.206 Running I/O for 2 seconds... 
00:39:22.206 00:39:22.206 Latency(us) 00:39:22.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.206 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:22.206 Malloc0 : 2.01 43143.15 42.13 0.00 0.00 5928.11 1467.11 8817.57 00:39:22.206 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:22.206 Malloc0 : 2.01 43114.28 42.10 0.00 0.00 5921.53 1437.32 8340.95 00:39:22.206 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:22.206 Malloc0 : 2.01 43086.17 42.08 0.00 0.00 5915.54 1474.56 7983.48 00:39:22.206 =================================================================================================================== 00:39:22.206 Total : 129343.60 126.31 0.00 0.00 5921.73 1437.32 8817.57' 00:39:22.206 11:50:56 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-13 11:50:52.744976] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:22.206 [2024-07-13 11:50:52.745119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165984 ] 00:39:22.206 Using job config with 3 jobs 00:39:22.206 [2024-07-13 11:50:52.895395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:22.206 [2024-07-13 11:50:53.092678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.206 cpumask for '\''job0'\'' is too big 00:39:22.206 cpumask for '\''job1'\'' is too big 00:39:22.206 cpumask for '\''job2'\'' is too big 00:39:22.206 Running I/O for 2 seconds... 00:39:22.206 00:39:22.207 Latency(us) 00:39:22.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:22.207 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:22.207 Malloc0 : 2.01 43143.15 42.13 0.00 0.00 5928.11 1467.11 8817.57 00:39:22.207 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:22.207 Malloc0 : 2.01 43114.28 42.10 0.00 0.00 5921.53 1437.32 8340.95 00:39:22.207 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:39:22.207 Malloc0 : 2.01 43086.17 42.08 0.00 0.00 5915.54 1474.56 7983.48 00:39:22.207 =================================================================================================================== 00:39:22.207 Total : 129343.60 126.31 0.00 0.00 5921.73 1437.32 8817.57' 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:39:22.207 
11:50:56 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:39:22.207 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:22.207 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:22.207 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:22.207 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:39:22.207 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:39:22.207 11:50:56 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:26.512 11:51:00 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-13 11:50:56.876096] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:39:26.513 [2024-07-13 11:50:56.876319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166063 ] 00:39:26.513 Using job config with 4 jobs 00:39:26.513 [2024-07-13 11:50:57.048394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.513 [2024-07-13 11:50:57.276407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.513 cpumask for '\''job0'\'' is too big 00:39:26.513 cpumask for '\''job1'\'' is too big 00:39:26.513 cpumask for '\''job2'\'' is too big 00:39:26.513 cpumask for '\''job3'\'' is too big 00:39:26.513 Running I/O for 2 seconds... 00:39:26.513 00:39:26.513 Latency(us) 00:39:26.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.02 15859.88 15.49 0.00 0.00 16128.65 3157.64 25618.62 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.03 15865.04 15.49 0.00 0.00 16111.24 3693.85 25618.62 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.03 15854.81 15.48 0.00 0.00 16075.83 3038.49 22520.55 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.04 15843.54 15.47 0.00 0.00 16078.27 3559.80 22520.55 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.04 15833.26 15.46 0.00 0.00 16042.82 3142.75 19303.33 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.04 15822.77 15.45 0.00 0.00 16042.19 3604.48 19303.33 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.04 15812.44 15.44 0.00 0.00 16007.49 3127.85 17277.67 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.04 15801.87 15.43 0.00 0.00 16006.28 3619.37 17396.83 00:39:26.513 =================================================================================================================== 00:39:26.513 Total : 126693.62 123.72 0.00 0.00 16061.53 3038.49 25618.62' 00:39:26.513 11:51:00 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-13 11:50:56.876096] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:39:26.513 [2024-07-13 11:50:56.876319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166063 ] 00:39:26.513 Using job config with 4 jobs 00:39:26.513 [2024-07-13 11:50:57.048394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.513 [2024-07-13 11:50:57.276407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.513 cpumask for '\''job0'\'' is too big 00:39:26.513 cpumask for '\''job1'\'' is too big 00:39:26.513 cpumask for '\''job2'\'' is too big 00:39:26.513 cpumask for '\''job3'\'' is too big 00:39:26.513 Running I/O for 2 seconds... 00:39:26.513 00:39:26.513 Latency(us) 00:39:26.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.02 15859.88 15.49 0.00 0.00 16128.65 3157.64 25618.62 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.03 15865.04 15.49 0.00 0.00 16111.24 3693.85 25618.62 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.03 15854.81 15.48 0.00 0.00 16075.83 3038.49 22520.55 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.04 15843.54 15.47 0.00 0.00 16078.27 3559.80 22520.55 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.04 15833.26 15.46 0.00 0.00 16042.82 3142.75 19303.33 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.04 15822.77 15.45 0.00 0.00 16042.19 3604.48 19303.33 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.04 15812.44 15.44 0.00 0.00 16007.49 3127.85 17277.67 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.04 15801.87 15.43 0.00 0.00 16006.28 3619.37 17396.83 00:39:26.513 =================================================================================================================== 00:39:26.513 Total : 126693.62 123.72 0.00 0.00 16061.53 3038.49 25618.62' 00:39:26.513 11:51:00 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-13 11:50:56.876096] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:26.513 [2024-07-13 11:50:56.876319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166063 ] 00:39:26.513 Using job config with 4 jobs 00:39:26.513 [2024-07-13 11:50:57.048394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.513 [2024-07-13 11:50:57.276407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.513 cpumask for '\''job0'\'' is too big 00:39:26.513 cpumask for '\''job1'\'' is too big 00:39:26.513 cpumask for '\''job2'\'' is too big 00:39:26.513 cpumask for '\''job3'\'' is too big 00:39:26.513 Running I/O for 2 seconds... 
00:39:26.513 00:39:26.513 Latency(us) 00:39:26.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.02 15859.88 15.49 0.00 0.00 16128.65 3157.64 25618.62 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.03 15865.04 15.49 0.00 0.00 16111.24 3693.85 25618.62 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.03 15854.81 15.48 0.00 0.00 16075.83 3038.49 22520.55 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.04 15843.54 15.47 0.00 0.00 16078.27 3559.80 22520.55 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.04 15833.26 15.46 0.00 0.00 16042.82 3142.75 19303.33 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.04 15822.77 15.45 0.00 0.00 16042.19 3604.48 19303.33 00:39:26.513 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc0 : 2.04 15812.44 15.44 0.00 0.00 16007.49 3127.85 17277.67 00:39:26.513 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:39:26.513 Malloc1 : 2.04 15801.87 15.43 0.00 0.00 16006.28 3619.37 17396.83 00:39:26.514 =================================================================================================================== 00:39:26.514 Total : 126693.62 123.72 0.00 0.00 16061.53 3038.49 25618.62' 00:39:26.514 11:51:00 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:39:26.514 11:51:00 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:39:26.514 11:51:00 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:39:26.514 11:51:00 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:39:26.514 11:51:00 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:39:26.514 11:51:01 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:39:26.514 ************************************ 00:39:26.514 END TEST bdevperf_config 00:39:26.514 ************************************ 00:39:26.514 00:39:26.514 real 0m16.772s 00:39:26.514 user 0m14.782s 00:39:26.514 sys 0m1.407s 00:39:26.514 11:51:01 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:26.514 11:51:01 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:39:26.514 11:51:01 -- common/autotest_common.sh@1142 -- # return 0 00:39:26.514 11:51:01 -- spdk/autotest.sh@192 -- # uname -s 00:39:26.514 11:51:01 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:39:26.514 11:51:01 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:39:26.514 11:51:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:26.514 11:51:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:26.514 11:51:01 -- common/autotest_common.sh@10 -- # set +x 00:39:26.514 ************************************ 00:39:26.514 START TEST reactor_set_interrupt 00:39:26.514 ************************************ 00:39:26.514 11:51:01 reactor_set_interrupt -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:39:26.514 * Looking for test storage... 00:39:26.514 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:26.514 11:51:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:39:26.514 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:39:26.514 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:26.514 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:26.514 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:39:26.514 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:26.514 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:39:26.514 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:39:26.514 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:39:26.514 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:39:26.514 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:39:26.514 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:39:26.514 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:39:26.514 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:39:26.514 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:39:26.514 11:51:01 reactor_set_interrupt -- 
common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_CET=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES=128 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_HAVE_EVP_MAC=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_DPDK_UADK=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_ASAN=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_SHARED=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_VTUNE_DIR= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_RDMA_SET_TOS=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_VBDEV_COMPRESS=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VFIO_USER_DIR= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_PGO_DIR= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_FUZZER_LIB= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_HAVE_EXECINFO_H=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_USDT=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@51 -- # 
CONFIG_URING_ZNS=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_FC_PATH= 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_COVERAGE=y 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_CUSTOMOCF=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_DPDK_PKG_CONFIG=n 00:39:26.514 11:51:01 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_DEBUG=y 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_RDMA=y 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_HAVE_ARC4RANDOM=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_FUZZER=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_FC=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBARCHIVE=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_DPDK_COMPRESSDEV=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_CROSS_PREFIX= 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_PREFIX=/usr/local 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_LIBBSD=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_UBSAN=y 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_PGO_CAPTURE=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_UBLK=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_ISAL_CRYPTO=y 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_CRYPTO=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_RBD=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_LIBDIR= 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_IPSEC_MB_DIR= 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_PGO_USE=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_GOLANG=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_VHOST=y 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_IDXD=y 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_AVAHI=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:39:26.515 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:39:26.515 #define SPDK_CONFIG_H 00:39:26.515 #define SPDK_CONFIG_APPS 1 00:39:26.515 #define SPDK_CONFIG_ARCH native 00:39:26.515 #define SPDK_CONFIG_ASAN 1 00:39:26.515 #undef SPDK_CONFIG_AVAHI 00:39:26.515 #undef SPDK_CONFIG_CET 00:39:26.515 #define SPDK_CONFIG_COVERAGE 1 00:39:26.515 #define SPDK_CONFIG_CROSS_PREFIX 00:39:26.515 #undef SPDK_CONFIG_CRYPTO 00:39:26.515 #undef SPDK_CONFIG_CRYPTO_MLX5 00:39:26.515 #undef SPDK_CONFIG_CUSTOMOCF 00:39:26.515 #undef SPDK_CONFIG_DAOS 00:39:26.515 #define SPDK_CONFIG_DAOS_DIR 00:39:26.515 #define SPDK_CONFIG_DEBUG 1 00:39:26.515 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:39:26.515 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:39:26.515 #define SPDK_CONFIG_DPDK_INC_DIR 00:39:26.515 #define SPDK_CONFIG_DPDK_LIB_DIR 00:39:26.515 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:39:26.515 #undef SPDK_CONFIG_DPDK_UADK 00:39:26.515 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:39:26.515 #define SPDK_CONFIG_EXAMPLES 1 00:39:26.515 #undef SPDK_CONFIG_FC 00:39:26.515 #define SPDK_CONFIG_FC_PATH 00:39:26.515 #define SPDK_CONFIG_FIO_PLUGIN 1 00:39:26.515 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:39:26.515 #undef SPDK_CONFIG_FUSE 00:39:26.515 #undef SPDK_CONFIG_FUZZER 00:39:26.515 #define SPDK_CONFIG_FUZZER_LIB 00:39:26.515 #undef SPDK_CONFIG_GOLANG 00:39:26.515 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:39:26.515 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:39:26.515 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:39:26.515 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:39:26.515 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:39:26.515 #undef SPDK_CONFIG_HAVE_LIBBSD 00:39:26.515 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:39:26.515 #define SPDK_CONFIG_IDXD 1 00:39:26.515 #undef SPDK_CONFIG_IDXD_KERNEL 00:39:26.515 #undef SPDK_CONFIG_IPSEC_MB 00:39:26.515 #define SPDK_CONFIG_IPSEC_MB_DIR 00:39:26.515 #define SPDK_CONFIG_ISAL 1 00:39:26.515 #define SPDK_CONFIG_ISAL_CRYPTO 1 
00:39:26.515 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:39:26.515 #define SPDK_CONFIG_LIBDIR 00:39:26.515 #undef SPDK_CONFIG_LTO 00:39:26.515 #define SPDK_CONFIG_MAX_LCORES 128 00:39:26.515 #define SPDK_CONFIG_NVME_CUSE 1 00:39:26.515 #undef SPDK_CONFIG_OCF 00:39:26.515 #define SPDK_CONFIG_OCF_PATH 00:39:26.515 #define SPDK_CONFIG_OPENSSL_PATH 00:39:26.515 #undef SPDK_CONFIG_PGO_CAPTURE 00:39:26.515 #define SPDK_CONFIG_PGO_DIR 00:39:26.515 #undef SPDK_CONFIG_PGO_USE 00:39:26.515 #define SPDK_CONFIG_PREFIX /usr/local 00:39:26.515 #define SPDK_CONFIG_RAID5F 1 00:39:26.515 #undef SPDK_CONFIG_RBD 00:39:26.515 #define SPDK_CONFIG_RDMA 1 00:39:26.515 #define SPDK_CONFIG_RDMA_PROV verbs 00:39:26.515 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:39:26.515 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:39:26.515 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:39:26.515 #undef SPDK_CONFIG_SHARED 00:39:26.515 #undef SPDK_CONFIG_SMA 00:39:26.515 #define SPDK_CONFIG_TESTS 1 00:39:26.515 #undef SPDK_CONFIG_TSAN 00:39:26.515 #undef SPDK_CONFIG_UBLK 00:39:26.515 #define SPDK_CONFIG_UBSAN 1 00:39:26.515 #define SPDK_CONFIG_UNIT_TESTS 1 00:39:26.515 #undef SPDK_CONFIG_URING 00:39:26.515 #define SPDK_CONFIG_URING_PATH 00:39:26.515 #undef SPDK_CONFIG_URING_ZNS 00:39:26.515 #undef SPDK_CONFIG_USDT 00:39:26.515 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:39:26.515 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:39:26.515 #undef SPDK_CONFIG_VFIO_USER 00:39:26.515 #define SPDK_CONFIG_VFIO_USER_DIR 00:39:26.515 #define SPDK_CONFIG_VHOST 1 00:39:26.515 #define SPDK_CONFIG_VIRTIO 1 00:39:26.515 #undef SPDK_CONFIG_VTUNE 00:39:26.515 #define SPDK_CONFIG_VTUNE_DIR 00:39:26.515 #define SPDK_CONFIG_WERROR 1 00:39:26.515 #define SPDK_CONFIG_WPDK_DIR 00:39:26.515 #undef SPDK_CONFIG_XNVME 00:39:26.515 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:39:26.515 11:51:01 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:39:26.515 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:26.515 11:51:01 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:26.515 11:51:01 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:26.515 11:51:01 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:26.516 11:51:01 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:26.516 11:51:01 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:26.516 11:51:01 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:26.516 11:51:01 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:39:26.516 11:51:01 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:39:26.516 11:51:01 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:39:26.516 11:51:01 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:39:26.516 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:39:26.517 11:51:01 
reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 1 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 
00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@193 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:39:26.517 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:39:26.518 11:51:01 
reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 166151 ]] 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 166151 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.ez25gX 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:39:26.518 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.ez25gX/tests/interrupt /tmp/spdk.ez25gX 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=udev 00:39:26.779 11:51:01 
reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6224461824 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6224461824 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1249763328 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254514688 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4751360 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=10311454720 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=10288562176 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6267850752 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6272561152 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6272561152 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # 
sizes["$mount"]=6272561152 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop1 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=96337920 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=96337920 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop0 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=103089152 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=109422592 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop2 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=41025536 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=41025536 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1254510592 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254510592 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.779 11:51:01 reactor_set_interrupt 
-- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=98658201600 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:39:26.779 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=1044578304 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop3 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=40763392 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=40763392 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop4 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:39:26.780 * Looking for test storage... 
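The @360-@363 loop above is autotest_common.sh's set_test_storage walking `df -T` output into per-mount arrays before it picks a directory with at least the requested free space. A minimal standalone sketch of that parsing step follows; the array names and the read column order are taken from the trace, while the 1K-block-to-byte conversion is an assumption, since the trace only shows the resulting byte values.

    #!/usr/bin/env bash
    # Sketch of the df parsing seen at common/autotest_common.sh@360-363 above.
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))     # df -T reports 1K blocks; byte conversion assumed
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)
    # e.g. check that / has room for the ~2 GiB the test requests:
    requested_size=$((2 * 1024 * 1024 * 1024))
    (( avails[/] >= requested_size )) && echo "enough space under /"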
00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=10311454720 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=12503154688 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:26.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=166194 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:26.780 11:51:01 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 166194 /var/tmp/spdk.sock 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 166194 ']' 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:26.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:26.780 11:51:01 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:26.780 [2024-07-13 11:51:01.358253] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
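start_intr_tgt (interrupt/interrupt_common.sh@20-@26 above) launches the interrupt_tgt example on a three-core mask and then blocks in waitforlisten (max_retries=100) until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, with a plain socket-existence poll standing in for SPDK's full waitforlisten helper; the binary path, flags and socket path are the ones from the trace.

    #!/usr/bin/env bash
    set -euo pipefail
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    # Same invocation as interrupt_common.sh@23: three reactors (-m 0x07) plus the
    # app-specific -E -g flags from the trace.
    "$SPDK/build/examples/interrupt_tgt" -m 0x07 -r "$RPC_SOCK" -E -g &
    intr_tgt_pid=$!
    trap 'kill "$intr_tgt_pid" 2>/dev/null || true' EXIT

    # Simplified stand-in for waitforlisten: poll for the RPC socket (100 tries,
    # like max_retries=100 above) instead of probing the RPC server itself.
    for ((i = 0; i < 100; i++)); do
        [[ -S $RPC_SOCK ]] && break
        sleep 0.1
    done
    [[ -S $RPC_SOCK ]] || { echo "interrupt_tgt did not come up" >&2; exit 1; }
    echo "interrupt_tgt is up, pid $intr_tgt_pid"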
00:39:26.780 [2024-07-13 11:51:01.358628] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166194 ] 00:39:26.780 [2024-07-13 11:51:01.523579] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:27.040 [2024-07-13 11:51:01.717561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:27.040 [2024-07-13 11:51:01.717702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:27.040 [2024-07-13 11:51:01.717699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:27.299 [2024-07-13 11:51:01.996465] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:27.866 11:51:02 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:27.866 11:51:02 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:39:27.866 11:51:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:39:27.866 11:51:02 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:28.130 Malloc0 00:39:28.130 Malloc1 00:39:28.130 Malloc2 00:39:28.130 11:51:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:39:28.130 11:51:02 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:39:28.130 11:51:02 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:28.130 11:51:02 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:39:28.130 5000+0 records in 00:39:28.130 5000+0 records out 00:39:28.130 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0126721 s, 808 MB/s 00:39:28.130 11:51:02 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:39:28.387 AIO0 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 166194 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 166194 without_thd 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=166194 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:28.387 11:51:02 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:39:28.387 11:51:02 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:28.645 11:51:03 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:39:28.645 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:39:28.646 spdk_thread ids are 1 on reactor0. 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166194 0 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166194 0 idle 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166194 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166194 -w 256 00:39:28.646 11:51:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166194 root 20 0 20.1t 151292 31516 S 0.0 1.2 0:00.71 reactor_0' 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166194 root 20 0 20.1t 151292 31516 S 0.0 1.2 0:00.71 reactor_0 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:28.904 
11:51:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166194 1 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166194 1 idle 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166194 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166194 -w 256 00:39:28.904 11:51:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166206 root 20 0 20.1t 151292 31516 S 0.0 1.2 0:00.00 reactor_1' 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166206 root 20 0 20.1t 151292 31516 S 0.0 1.2 0:00.00 reactor_1 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166194 2 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166194 2 idle 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166194 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:29.163 11:51:03 reactor_set_interrupt 
-- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166194 -w 256 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166207 root 20 0 20.1t 151292 31516 S 0.0 1.2 0:00.00 reactor_2' 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166207 root 20 0 20.1t 151292 31516 S 0.0 1.2 0:00.00 reactor_2 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:39:29.163 11:51:03 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:39:29.422 [2024-07-13 11:51:04.151359] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:29.422 11:51:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:39:29.680 [2024-07-13 11:51:04.399076] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:39:29.680 [2024-07-13 11:51:04.399725] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:29.680 11:51:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:39:29.939 [2024-07-13 11:51:04.654954] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
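In this without-threads pass the test first moves app_thread (thread id 1) onto core 1 via thread_set_cpumask (@36 above), leaving reactor 0 with no spdk_thread, and only then flips reactors 0 and 2 out of interrupt mode through the interrupt_plugin RPC (@43/@44, reactor_set_interrupt_mode <n> -d). A minimal sketch of that sequence with rpc.py, assuming the PYTHONPATH export shown earlier so the plugin from examples/interrupt_tgt is importable:

    #!/usr/bin/env bash
    set -euo pipefail
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default

    # Pin app_thread (id 1) to core 1 so reactor 0 carries no spdk_thread.
    "$rpc" thread_set_cpumask -i 1 -m 0x2

    # Switch reactors 0 and 2 from interrupt mode to poll mode (-d = disable interrupt).
    for reactor in 0 2; do
        "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode "$reactor" -d
    done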
00:39:29.939 [2024-07-13 11:51:04.655456] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166194 0 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166194 0 busy 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166194 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166194 -w 256 00:39:29.939 11:51:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166194 root 20 0 20.1t 151404 31516 R 99.9 1.2 0:01.15 reactor_0' 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166194 root 20 0 20.1t 151404 31516 R 99.9 1.2 0:01.15 reactor_0 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166194 2 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166194 2 busy 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166194 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166194 -w 256 00:39:30.198 11:51:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:39:30.457 11:51:04 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor=' 166207 root 20 0 20.1t 151404 31516 R 99.9 1.2 0:00.32 reactor_2' 00:39:30.457 11:51:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166207 root 20 0 20.1t 151404 31516 R 99.9 1.2 0:00.32 reactor_2 00:39:30.457 11:51:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:30.457 11:51:04 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:30.457 11:51:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:39:30.457 11:51:05 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:39:30.457 11:51:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:39:30.457 11:51:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:39:30.457 11:51:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:39:30.457 11:51:05 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:30.457 11:51:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:39:30.720 [2024-07-13 11:51:05.234963] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:39:30.720 [2024-07-13 11:51:05.235427] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:30.720 11:51:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:39:30.720 11:51:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 166194 2 00:39:30.720 11:51:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166194 2 idle 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166194 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166194 -w 256 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166207 root 20 0 20.1t 151468 31516 S 0.0 1.2 0:00.56 reactor_2' 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166207 root 20 0 20.1t 151468 31516 S 0.0 1.2 0:00.56 reactor_2 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:30.721 11:51:05 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:30.722 11:51:05 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:30.722 11:51:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 
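The repeated reactor_is_busy / reactor_is_idle steps above all come down to one check in interrupt/common.sh: sample the target's threads with top -bHn 1, grab the reactor_<idx> line, and read the %CPU column; per the comparisons in the trace a reactor counts as busy at roughly 70% or more and idle at 30% or less. A standalone approximation of that check follows; the retry count of 10 and the sed/awk parsing mirror the trace, while the sleep between retries is an assumption.

    #!/usr/bin/env bash
    # Approximate reactor_is_busy_or_idle: succeed once thread reactor_<idx> of <pid>
    # matches the requested state (busy >= 70% CPU, idle <= 30%), retrying up to 10 times.
    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3 top_reactor cpu_rate
        for ((j = 10; j != 0; j--)); do
            top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" || true)
            cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
            cpu_rate=${cpu_rate%.*}                    # 99.9 -> 99, 0.0 -> 0, as in the trace
            if [[ $state == busy ]] && (( ${cpu_rate:-0} >= 70 )); then return 0; fi
            if [[ $state == idle && -n $cpu_rate ]] && (( cpu_rate <= 30 )); then return 0; fi
            sleep 0.5                                  # assumed back-off between samples
        done
        return 1
    }
    # e.g. reactor_is_busy_or_idle 166194 0 idle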
00:39:30.722 11:51:05 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:30.722 11:51:05 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:30.722 11:51:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:39:30.980 [2024-07-13 11:51:05.678966] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:39:30.980 [2024-07-13 11:51:05.679590] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:30.980 11:51:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:39:30.980 11:51:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:39:30.980 11:51:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:39:31.238 [2024-07-13 11:51:05.871305] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 166194 0 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166194 0 idle 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166194 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166194 -w 256 00:39:31.238 11:51:05 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166194 root 20 0 20.1t 151556 31516 S 0.0 1.2 0:02.00 reactor_0' 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166194 root 20 0 20.1t 151556 31516 S 0.0 1.2 0:02.00 reactor_0 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:39:31.497 11:51:06 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:39:31.497 11:51:06 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 166194 00:39:31.497 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 166194 ']' 00:39:31.497 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 166194 00:39:31.497 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:39:31.497 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:31.497 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 166194 00:39:31.497 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:31.498 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:31.498 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 166194' 00:39:31.498 killing process with pid 166194 00:39:31.498 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 166194 00:39:31.498 11:51:06 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 166194 00:39:32.874 11:51:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:39:32.874 11:51:07 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:39:32.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:32.874 11:51:07 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:39:32.874 11:51:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.874 11:51:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:39:32.874 11:51:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=166360 00:39:32.874 11:51:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:32.874 11:51:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:39:32.874 11:51:07 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 166360 /var/tmp/spdk.sock 00:39:32.874 11:51:07 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 166360 ']' 00:39:32.874 11:51:07 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:32.874 11:51:07 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:32.874 11:51:07 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:32.874 11:51:07 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:32.874 11:51:07 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:32.874 [2024-07-13 11:51:07.364877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
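Between the two passes the test tears the first target down with killprocess 166194, removes the AIO backing file in cleanup, and then starts a second interrupt_tgt (pid 166360) for the with-threads variant. A minimal sketch of that teardown, assuming a plain kill-and-wait is an acceptable stand-in for killprocess, which additionally inspects the process name via ps and refuses to kill anything running as sudo:

    #!/usr/bin/env bash
    # Simplified teardown between the two reactor_set_interrupt passes.
    teardown_intr_tgt() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0            # already gone
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # works because the target is our child process
        rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile   # cleanup step from interrupt/common.sh
    }
    # e.g. teardown_intr_tgt "$intr_tgt_pid"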
00:39:32.874 [2024-07-13 11:51:07.365311] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166360 ] 00:39:32.874 [2024-07-13 11:51:07.529006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:33.132 [2024-07-13 11:51:07.712283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:33.132 [2024-07-13 11:51:07.712413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:33.132 [2024-07-13 11:51:07.712412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:33.390 [2024-07-13 11:51:07.995005] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:33.647 11:51:08 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:33.647 11:51:08 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:39:33.647 11:51:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:39:33.647 11:51:08 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:33.905 Malloc0 00:39:33.906 Malloc1 00:39:33.906 Malloc2 00:39:33.906 11:51:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:39:33.906 11:51:08 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:39:33.906 11:51:08 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:33.906 11:51:08 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:39:33.906 5000+0 records in 00:39:33.906 5000+0 records out 00:39:33.906 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0188496 s, 543 MB/s 00:39:33.906 11:51:08 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:39:34.165 AIO0 00:39:34.165 11:51:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 166360 00:39:34.165 11:51:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 166360 00:39:34.165 11:51:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=166360 00:39:34.165 11:51:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:39:34.165 11:51:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:39:34.165 11:51:08 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:39:34.165 11:51:08 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:39:34.424 11:51:08 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:39:34.424 11:51:08 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:39:34.424 11:51:08 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:34.424 11:51:08 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:39:34.424 11:51:08 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg 
reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:34.424 11:51:09 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:39:34.424 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:39:34.424 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:39:34.424 11:51:09 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:39:34.424 11:51:09 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:39:34.424 11:51:09 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:39:34.424 11:51:09 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:34.424 11:51:09 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:39:34.424 11:51:09 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:39:34.683 spdk_thread ids are 1 on reactor0. 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166360 0 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166360 0 idle 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166360 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166360 -w 256 00:39:34.683 11:51:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166360 root 20 0 20.1t 151552 31768 S 0.0 1.2 0:00.69 reactor_0' 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166360 root 20 0 20.1t 151552 31768 S 0.0 1.2 0:00.69 reactor_0 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:34.942 11:51:09 reactor_set_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166360 1 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166360 1 idle 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166360 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166360 -w 256 00:39:34.942 11:51:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166370 root 20 0 20.1t 151552 31768 S 0.0 1.2 0:00.00 reactor_1' 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166370 root 20 0 20.1t 151552 31768 S 0.0 1.2 0:00.00 reactor_1 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 166360 2 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166360 2 idle 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166360 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( 
j != 0 )) 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166360 -w 256 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166371 root 20 0 20.1t 151552 31768 S 0.0 1.2 0:00.00 reactor_2' 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166371 root 20 0 20.1t 151552 31768 S 0.0 1.2 0:00.00 reactor_2 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:39:35.201 11:51:09 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:39:35.768 [2024-07-13 11:51:10.217831] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:39:35.768 [2024-07-13 11:51:10.218037] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:39:35.768 [2024-07-13 11:51:10.218368] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:39:35.768 [2024-07-13 11:51:10.417603] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
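As in the first pass, the @17/@18 reactor_get_thread_ids steps earlier in this block resolve reactor cpumasks to spdk_thread ids by querying thread_get_stats and filtering with jq; the with-threads variant then leaves app_thread on reactor 0, which is why the @43 disable above also flips that thread from intr to poll mode. A minimal sketch of the thread-id lookup, reusing the exact jq filter from the trace; the hex-to-decimal mask normalization is an assumption about how common.sh arrives at the value seen at @58.

    #!/usr/bin/env bash
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Print ids of spdk_threads whose cpumask matches the given reactor mask,
    # e.g. reactor_get_thread_ids 0x1 -> the app_thread id on reactor 0.
    reactor_get_thread_ids() {
        local reactor_cpumask=$(( $1 ))        # 0x1 -> 1, 0x4 -> 4 (assumed normalization)
        "$rpc" thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }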
00:39:35.768 [2024-07-13 11:51:10.418104] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166360 0 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166360 0 busy 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166360 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166360 -w 256 00:39:35.768 11:51:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166360 root 20 0 20.1t 151676 31768 R 99.9 1.2 0:01.07 reactor_0' 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166360 root 20 0 20.1t 151676 31768 R 99.9 1.2 0:01.07 reactor_0 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 166360 2 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 166360 2 busy 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166360 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166360 -w 256 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor=' 166371 root 20 0 20.1t 151676 31768 R 99.9 1.2 0:00.34 reactor_2' 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166371 root 20 0 20.1t 151676 31768 R 99.9 1.2 0:00.34 reactor_2 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:36.026 11:51:10 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:39:36.284 [2024-07-13 11:51:11.022011] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:39:36.284 [2024-07-13 11:51:11.022395] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 166360 2 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166360 2 idle 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166360 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166360 -w 256 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166371 root 20 0 20.1t 151720 31768 S 0.0 1.2 0:00.60 reactor_2' 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166371 root 20 0 20.1t 151720 31768 S 0.0 1.2 0:00.60 reactor_2 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:36.542 
11:51:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:36.542 11:51:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:39:36.800 [2024-07-13 11:51:11.430043] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:39:36.800 [2024-07-13 11:51:11.430477] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:39:36.800 [2024-07-13 11:51:11.430655] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:39:36.800 11:51:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:39:36.800 11:51:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 166360 0 00:39:36.800 11:51:11 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 166360 0 idle 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=166360 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 166360 -w 256 00:39:36.801 11:51:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 166360 root 20 0 20.1t 151748 31768 S 0.0 1.2 0:01.91 reactor_0' 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 166360 root 20 0 20.1t 151748 31768 S 0.0 1.2 0:01.91 reactor_0 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:39:37.059 11:51:11 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 166360 00:39:37.059 11:51:11 reactor_set_interrupt -- 
common/autotest_common.sh@948 -- # '[' -z 166360 ']' 00:39:37.059 11:51:11 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 166360 00:39:37.059 11:51:11 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:39:37.059 11:51:11 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:37.059 11:51:11 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 166360 00:39:37.059 killing process with pid 166360 00:39:37.060 11:51:11 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:37.060 11:51:11 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:37.060 11:51:11 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 166360' 00:39:37.060 11:51:11 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 166360 00:39:37.060 11:51:11 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 166360 00:39:38.439 11:51:12 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:39:38.439 11:51:12 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:39:38.439 ************************************ 00:39:38.439 END TEST reactor_set_interrupt 00:39:38.439 ************************************ 00:39:38.439 00:39:38.439 real 0m11.799s 00:39:38.439 user 0m12.116s 00:39:38.439 sys 0m1.600s 00:39:38.439 11:51:12 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:38.439 11:51:12 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:39:38.439 11:51:12 -- common/autotest_common.sh@1142 -- # return 0 00:39:38.439 11:51:12 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:39:38.439 11:51:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:38.439 11:51:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:38.439 11:51:12 -- common/autotest_common.sh@10 -- # set +x 00:39:38.439 ************************************ 00:39:38.439 START TEST reap_unregistered_poller 00:39:38.439 ************************************ 00:39:38.439 11:51:12 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:39:38.439 * Looking for test storage... 00:39:38.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:38.439 11:51:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:39:38.439 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:39:38.439 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:38.439 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:38.439 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
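The kill sequence near the end of the reactor_set_interrupt run above (a kill -0 liveness probe, a sudo guard via ps -o comm=, then kill and wait) can be summarised by the following sketch. It only illustrates the pattern shown in the log; it is not the real common/autotest_common.sh killprocess.

killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0   # process already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        # never kill a sudo wrapper process by mistake
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # wait succeeds here because the app is a child of this shell
}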
00:39:38.439 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:38.439 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_CET=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:39:38.439 11:51:13 reap_unregistered_poller -- 
common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES=128 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_HAVE_EVP_MAC=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_IPSEC_MB=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_DPDK_UADK=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_ASAN=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_SHARED=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_VTUNE_DIR= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_RDMA_SET_TOS=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_VBDEV_COMPRESS=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VFIO_USER_DIR= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_PGO_DIR= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_FUZZER_LIB= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_HAVE_EXECINFO_H=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_USDT=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_HAVE_KEYUTILS=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_URING_ZNS=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_FC_PATH= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_COVERAGE=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_CUSTOMOCF=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_DPDK_PKG_CONFIG=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_DEBUG=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_RDMA=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@59 -- # 
CONFIG_HAVE_ARC4RANDOM=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_FUZZER=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_FC=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBARCHIVE=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_DPDK_COMPRESSDEV=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_CROSS_PREFIX= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_PREFIX=/usr/local 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_LIBBSD=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_UBSAN=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_PGO_CAPTURE=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_UBLK=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_ISAL_CRYPTO=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_CRYPTO=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_RBD=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_LIBDIR= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_IPSEC_MB_DIR= 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_PGO_USE=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_GOLANG=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_VHOST=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_IDXD=y 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_AVAHI=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
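Right after resolving the app directories, applications.sh (in the lines that follow) pattern-matches the generated SPDK config header for debug support. A one-line sketch of that kind of check, with the header path assumed from the paths elsewhere in this log:

config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build: extra debug-app hooks may be enabled"
fi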
00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:39:38.439 #define SPDK_CONFIG_H 00:39:38.439 #define SPDK_CONFIG_APPS 1 00:39:38.439 #define SPDK_CONFIG_ARCH native 00:39:38.439 #define SPDK_CONFIG_ASAN 1 00:39:38.439 #undef SPDK_CONFIG_AVAHI 00:39:38.439 #undef SPDK_CONFIG_CET 00:39:38.439 #define SPDK_CONFIG_COVERAGE 1 00:39:38.439 #define SPDK_CONFIG_CROSS_PREFIX 00:39:38.439 #undef SPDK_CONFIG_CRYPTO 00:39:38.439 #undef SPDK_CONFIG_CRYPTO_MLX5 00:39:38.439 #undef SPDK_CONFIG_CUSTOMOCF 00:39:38.439 #undef SPDK_CONFIG_DAOS 00:39:38.439 #define SPDK_CONFIG_DAOS_DIR 00:39:38.439 #define SPDK_CONFIG_DEBUG 1 00:39:38.439 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:39:38.439 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:39:38.439 #define SPDK_CONFIG_DPDK_INC_DIR 00:39:38.439 #define SPDK_CONFIG_DPDK_LIB_DIR 00:39:38.439 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:39:38.439 #undef SPDK_CONFIG_DPDK_UADK 00:39:38.439 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:39:38.439 #define SPDK_CONFIG_EXAMPLES 1 00:39:38.439 #undef SPDK_CONFIG_FC 00:39:38.439 #define SPDK_CONFIG_FC_PATH 00:39:38.439 #define SPDK_CONFIG_FIO_PLUGIN 1 00:39:38.439 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:39:38.439 #undef SPDK_CONFIG_FUSE 00:39:38.439 #undef SPDK_CONFIG_FUZZER 00:39:38.439 #define SPDK_CONFIG_FUZZER_LIB 00:39:38.439 #undef SPDK_CONFIG_GOLANG 00:39:38.439 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:39:38.439 #undef SPDK_CONFIG_HAVE_EVP_MAC 00:39:38.439 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:39:38.439 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:39:38.439 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:39:38.439 #undef SPDK_CONFIG_HAVE_LIBBSD 00:39:38.439 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:39:38.439 #define SPDK_CONFIG_IDXD 1 00:39:38.439 #undef SPDK_CONFIG_IDXD_KERNEL 00:39:38.439 #undef SPDK_CONFIG_IPSEC_MB 00:39:38.439 #define SPDK_CONFIG_IPSEC_MB_DIR 00:39:38.439 #define SPDK_CONFIG_ISAL 1 00:39:38.439 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:39:38.439 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:39:38.439 #define SPDK_CONFIG_LIBDIR 00:39:38.439 #undef SPDK_CONFIG_LTO 00:39:38.439 #define SPDK_CONFIG_MAX_LCORES 128 00:39:38.439 #define SPDK_CONFIG_NVME_CUSE 1 00:39:38.439 #undef SPDK_CONFIG_OCF 00:39:38.439 #define SPDK_CONFIG_OCF_PATH 00:39:38.439 #define SPDK_CONFIG_OPENSSL_PATH 00:39:38.439 #undef SPDK_CONFIG_PGO_CAPTURE 00:39:38.439 #define SPDK_CONFIG_PGO_DIR 00:39:38.439 #undef SPDK_CONFIG_PGO_USE 00:39:38.439 #define SPDK_CONFIG_PREFIX /usr/local 00:39:38.439 #define SPDK_CONFIG_RAID5F 1 00:39:38.439 #undef SPDK_CONFIG_RBD 00:39:38.439 #define SPDK_CONFIG_RDMA 1 00:39:38.439 
#define SPDK_CONFIG_RDMA_PROV verbs 00:39:38.439 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:39:38.439 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:39:38.439 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:39:38.439 #undef SPDK_CONFIG_SHARED 00:39:38.439 #undef SPDK_CONFIG_SMA 00:39:38.439 #define SPDK_CONFIG_TESTS 1 00:39:38.439 #undef SPDK_CONFIG_TSAN 00:39:38.439 #undef SPDK_CONFIG_UBLK 00:39:38.439 #define SPDK_CONFIG_UBSAN 1 00:39:38.439 #define SPDK_CONFIG_UNIT_TESTS 1 00:39:38.439 #undef SPDK_CONFIG_URING 00:39:38.439 #define SPDK_CONFIG_URING_PATH 00:39:38.439 #undef SPDK_CONFIG_URING_ZNS 00:39:38.439 #undef SPDK_CONFIG_USDT 00:39:38.439 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:39:38.439 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:39:38.439 #undef SPDK_CONFIG_VFIO_USER 00:39:38.439 #define SPDK_CONFIG_VFIO_USER_DIR 00:39:38.439 #define SPDK_CONFIG_VHOST 1 00:39:38.439 #define SPDK_CONFIG_VIRTIO 1 00:39:38.439 #undef SPDK_CONFIG_VTUNE 00:39:38.439 #define SPDK_CONFIG_VTUNE_DIR 00:39:38.439 #define SPDK_CONFIG_WERROR 1 00:39:38.439 #define SPDK_CONFIG_WPDK_DIR 00:39:38.439 #undef SPDK_CONFIG_XNVME 00:39:38.439 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:39:38.439 11:51:13 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:38.439 11:51:13 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:38.439 11:51:13 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:38.439 11:51:13 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:38.439 11:51:13 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:38.439 11:51:13 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:38.439 11:51:13 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:38.439 11:51:13 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:39:38.439 11:51:13 reap_unregistered_poller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:38.439 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:39:38.439 11:51:13 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:39:38.440 11:51:13 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:39:38.440 
11:51:13 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 
00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@189 -- # 
PYTHONDONTWRITEBYTECODE=1 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:39:38.440 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:39:38.441 11:51:13 reap_unregistered_poller -- 
common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 166537 ]] 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 166537 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Zca5gq 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.Zca5gq/tests/interrupt /tmp/spdk.Zca5gq 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@358 -- # 
requested_size=2214592512 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=udev 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6224461824 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6224461824 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1249763328 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254514688 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4751360 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=10311417856 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=20616794112 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=10288599040 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6267850752 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6272561152 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=5242880 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=5242880 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- 
# uses["$mount"]=0 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6272561152 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6272561152 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop1 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=96337920 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=96337920 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop0 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda15 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=103089152 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=109422592 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=6334464 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop2 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=41025536 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=41025536 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- 
common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1254510592 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254510592 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=98658062336 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=1044717568 00:39:38.441 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop3 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=40763392 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=40763392 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/loop4 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=squashfs 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=0 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=67108864 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:39:38.442 * Looking for test storage... 
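The long run of df/read entries above is the mount-table scan behind this storage lookup: df output is folded into associative arrays keyed by mount point, and a candidate directory is accepted once its mount has enough free bytes. Condensed into a standalone sketch (the -B1 flag, the hard-coded path, and the final check are illustrative, not the exact autotest_common.sh code):

  # Mount-table scan feeding the test-storage lookup (simplified sketch).
  declare -A avails sizes uses
  while read -r source fs size use avail _ mount; do
      avails["$mount"]=$avail   # free bytes on that mount
      sizes["$mount"]=$size     # total bytes
      uses["$mount"]=$use       # used bytes
  done < <(df -T -B1 | grep -v Filesystem)   # -B1 assumed so the numbers are bytes

  requested_size=2214592512                  # ~2 GiB, as requested above
  mount=$(df /home/vagrant/spdk_repo/spdk/test/interrupt | awk '$1 !~ /Filesystem/{print $6}')
  if (( ${avails[$mount]:-0} >= requested_size )); then
      echo "enough space on $mount"
  fi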
00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=10311417856 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == tmpfs ]] 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ ext4 == ramfs ]] 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=12503191552 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:38.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=166580 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:39:38.442 11:51:13 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 166580 /var/tmp/spdk.sock 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 166580 ']' 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:38.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
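The "Waiting for process..." message above comes from the framework's waitforlisten step: the freshly launched interrupt_tgt (pid 166580) is polled until its RPC socket at /var/tmp/spdk.sock answers. A rough sketch of that pattern, assuming a simplified helper (the real waitforlisten in autotest_common.sh differs in detail; rpc_get_methods is just a cheap RPC used as a probe here):

  # Poll until the target process is alive and its RPC socket responds.
  wait_for_rpc() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
              return 0                              # socket is up and answering
          fi
          sleep 0.5
      done
      return 1                                      # gave up after max_retries
  }

In this test the equivalent call is made with the interrupt target's pid and /var/tmp/spdk.sock before any pollers are queried.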
00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:39:38.442 11:51:13 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:39:38.701 [2024-07-13 11:51:13.224883] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:38.701 [2024-07-13 11:51:13.225904] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166580 ] 00:39:38.701 [2024-07-13 11:51:13.402574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:38.959 [2024-07-13 11:51:13.604558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.959 [2024-07-13 11:51:13.604659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:38.959 [2024-07-13 11:51:13.604882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.216 [2024-07-13 11:51:13.889290] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:39.474 11:51:14 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:39:39.474 11:51:14 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:39:39.474 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:39:39.474 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:39:39.474 11:51:14 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:39.474 11:51:14 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:39:39.474 11:51:14 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:39.733 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:39:39.734 "name": "app_thread", 00:39:39.734 "id": 1, 00:39:39.734 "active_pollers": [], 00:39:39.734 "timed_pollers": [ 00:39:39.734 { 00:39:39.734 "name": "rpc_subsystem_poll_servers", 00:39:39.734 "id": 1, 00:39:39.734 "state": "waiting", 00:39:39.734 "run_count": 0, 00:39:39.734 "busy_count": 0, 00:39:39.734 "period_ticks": 8800000 00:39:39.734 } 00:39:39.734 ], 00:39:39.734 "paused_pollers": [] 00:39:39.734 }' 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 
count=5000 00:39:39.734 5000+0 records in 00:39:39.734 5000+0 records out 00:39:39.734 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0149418 s, 685 MB/s 00:39:39.734 11:51:14 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:39:39.993 AIO0 00:39:39.993 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:40.251 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:39:40.251 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:39:40.251 11:51:14 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:39:40.251 11:51:14 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:40.251 11:51:14 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:39:40.251 11:51:14 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:40.511 11:51:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:39:40.511 "name": "app_thread", 00:39:40.511 "id": 1, 00:39:40.511 "active_pollers": [], 00:39:40.511 "timed_pollers": [ 00:39:40.511 { 00:39:40.511 "name": "rpc_subsystem_poll_servers", 00:39:40.511 "id": 1, 00:39:40.511 "state": "waiting", 00:39:40.511 "run_count": 0, 00:39:40.511 "busy_count": 0, 00:39:40.511 "period_ticks": 8800000 00:39:40.511 } 00:39:40.511 ], 00:39:40.511 "paused_pollers": [] 00:39:40.511 }' 00:39:40.511 11:51:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:39:40.511 11:51:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:39:40.511 11:51:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:39:40.511 11:51:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:39:40.511 11:51:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:39:40.511 11:51:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:39:40.511 11:51:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:39:40.511 11:51:15 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 166580 00:39:40.511 11:51:15 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 166580 ']' 00:39:40.511 11:51:15 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 166580 00:39:40.511 11:51:15 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:39:40.511 11:51:15 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:39:40.511 11:51:15 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 166580 00:39:40.511 11:51:15 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:39:40.511 11:51:15 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:39:40.511 11:51:15 reap_unregistered_poller -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 166580' 00:39:40.511 killing process with pid 166580 00:39:40.511 11:51:15 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 166580 00:39:40.511 11:51:15 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 166580 00:39:41.890 11:51:16 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:39:41.890 11:51:16 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:39:41.890 ************************************ 00:39:41.890 END TEST reap_unregistered_poller 00:39:41.890 ************************************ 00:39:41.890 00:39:41.890 real 0m3.330s 00:39:41.890 user 0m2.807s 00:39:41.890 sys 0m0.485s 00:39:41.890 11:51:16 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:41.890 11:51:16 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:39:41.890 11:51:16 -- common/autotest_common.sh@1142 -- # return 0 00:39:41.890 11:51:16 -- spdk/autotest.sh@198 -- # uname -s 00:39:41.890 11:51:16 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:39:41.890 11:51:16 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:39:41.890 11:51:16 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:39:41.890 11:51:16 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:39:41.890 11:51:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:39:41.890 11:51:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:41.890 11:51:16 -- common/autotest_common.sh@10 -- # set +x 00:39:41.890 ************************************ 00:39:41.890 START TEST spdk_dd 00:39:41.890 ************************************ 00:39:41.890 11:51:16 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:39:41.890 * Looking for test storage... 
00:39:41.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:41.890 11:51:16 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:41.890 11:51:16 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:41.890 11:51:16 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:41.890 11:51:16 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:41.890 11:51:16 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:41.890 11:51:16 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:41.890 11:51:16 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:41.890 11:51:16 spdk_dd -- paths/export.sh@5 -- # export PATH 00:39:41.890 11:51:16 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:41.890 11:51:16 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:42.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:39:42.149 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:39:43.528 11:51:18 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:39:43.528 11:51:18 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:39:43.528 11:51:18 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:39:43.528 11:51:18 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:39:43.528 11:51:18 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:39:43.528 11:51:18 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:39:43.528 11:51:18 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:39:43.528 11:51:18 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:39:43.528 11:51:18 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@230 -- # local class 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@232 -- # local progif 00:39:43.529 11:51:18 spdk_dd -- 
scripts/common.sh@233 -- # printf %02x 1 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@233 -- # class=01 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@15 -- # local i 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@24 -- # return 0 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:39:43.529 11:51:18 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:39:43.529 11:51:18 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@139 -- # local lib so 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ librt.so.1 == 
liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:39:43.529 11:51:18 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:39:43.529 11:51:18 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:39:43.529 11:51:18 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:39:43.529 11:51:18 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:43.529 11:51:18 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:43.529 11:51:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:43.529 ************************************ 00:39:43.529 START TEST spdk_dd_basic_rw 00:39:43.529 ************************************ 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:39:43.529 * 
Looking for test storage... 00:39:43.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:39:43.529 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:39:44.101 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 119 Data Units Written: 7 Host Read Commands: 2490 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:39:44.101 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 119 Data Units Written: 7 Host Read Commands: 2490 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: 
Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:39:44.102 ************************************ 00:39:44.102 START TEST dd_bs_lt_native_bs 00:39:44.102 ************************************ 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:44.102 11:51:18 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:39:44.102 { 00:39:44.102 "subsystems": [ 00:39:44.102 { 00:39:44.102 "subsystem": "bdev", 00:39:44.102 "config": [ 00:39:44.102 { 00:39:44.102 "params": { 00:39:44.102 "trtype": "pcie", 00:39:44.102 "traddr": "0000:00:10.0", 00:39:44.102 "name": "Nvme0" 00:39:44.102 }, 00:39:44.102 "method": "bdev_nvme_attach_controller" 00:39:44.102 }, 00:39:44.102 { 00:39:44.102 "method": "bdev_wait_for_examine" 00:39:44.102 } 00:39:44.102 ] 00:39:44.102 } 00:39:44.102 ] 00:39:44.102 } 00:39:44.102 [2024-07-13 11:51:18.637021] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
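The JSON blob above is the bdev configuration that gen_conf feeds to spdk_dd over a file descriptor; the test then expects spdk_dd to reject --bs=2048 because the namespace's native block size was detected as 4096. A condensed sketch of the same check, with the config written to a temporary file and the input simplified to /dev/zero (both are simplifications, not what the NOT/gen_conf helpers actually do):

  # Attach the PCIe controller as Nvme0 and expect spdk_dd to refuse bs < native bs.
  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=2048 --json /tmp/nvme0.json; then
      echo "spdk_dd unexpectedly accepted bs=2048 on a 4096-byte-block namespace" >&2
      exit 1
  fi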
00:39:44.102 [2024-07-13 11:51:18.637228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166912 ] 00:39:44.102 [2024-07-13 11:51:18.809110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:44.361 [2024-07-13 11:51:19.065022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:44.930 [2024-07-13 11:51:19.425703] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:39:44.930 [2024-07-13 11:51:19.425825] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:45.497 [2024-07-13 11:51:20.065928] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:45.756 ************************************ 00:39:45.756 END TEST dd_bs_lt_native_bs 00:39:45.756 ************************************ 00:39:45.756 00:39:45.756 real 0m1.875s 00:39:45.756 user 0m1.554s 00:39:45.756 sys 0m0.287s 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:39:45.756 ************************************ 00:39:45.756 START TEST dd_rw 00:39:45.756 ************************************ 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << 
bs))) 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:39:45.756 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:46.323 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:39:46.323 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:39:46.323 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:46.323 11:51:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:46.323 { 00:39:46.324 "subsystems": [ 00:39:46.324 { 00:39:46.324 "subsystem": "bdev", 00:39:46.324 "config": [ 00:39:46.324 { 00:39:46.324 "params": { 00:39:46.324 "trtype": "pcie", 00:39:46.324 "traddr": "0000:00:10.0", 00:39:46.324 "name": "Nvme0" 00:39:46.324 }, 00:39:46.324 "method": "bdev_nvme_attach_controller" 00:39:46.324 }, 00:39:46.324 { 00:39:46.324 "method": "bdev_wait_for_examine" 00:39:46.324 } 00:39:46.324 ] 00:39:46.324 } 00:39:46.324 ] 00:39:46.324 } 00:39:46.324 [2024-07-13 11:51:21.038045] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
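From this point the dd_rw body iterates over the block-size/queue-depth matrix built just above (bss = native_bs << 0..2, qds = 1 and 64) and, for each combination, writes a generated dump file to Nvme0n1, reads it back, and diffs the two copies before wiping the bdev. A rough sketch of that loop, reconstructed from the xtrace output; gen_bytes, gen_conf and clear_nvme are names taken from the trace, but how count is derived per block size and the dump-file plumbing are assumptions, not the verbatim dd/basic_rw.sh.

    # Sketch of the traced dd_rw loop (assumptions noted inline).
    native_bs=4096
    qds=(1 64)
    bss=()
    for bs in {0..2}; do bss+=($((native_bs << bs))); done
    for bs in "${bss[@]}"; do
        for qd in "${qds[@]}"; do
            count=$((61440 / bs))      # 15, 7, 3 in this run (assumed derivation)
            size=$((count * bs))       # 61440, 57344, 49152 bytes
            gen_bytes "$size" > dd.dump0
            spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)
            spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)
            diff -q dd.dump0 dd.dump1  # the round trip must be byte-identical
            clear_nvme Nvme0n1 '' "$size"
        done
    done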
00:39:46.324 [2024-07-13 11:51:21.038273] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166966 ] 00:39:46.583 [2024-07-13 11:51:21.212299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:46.842 [2024-07-13 11:51:21.423582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:48.036  Copying: 60/60 [kB] (average 29 MBps) 00:39:48.036 00:39:48.036 11:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:39:48.036 11:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:39:48.036 11:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:48.036 11:51:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:48.294 { 00:39:48.294 "subsystems": [ 00:39:48.294 { 00:39:48.294 "subsystem": "bdev", 00:39:48.294 "config": [ 00:39:48.294 { 00:39:48.294 "params": { 00:39:48.294 "trtype": "pcie", 00:39:48.294 "traddr": "0000:00:10.0", 00:39:48.294 "name": "Nvme0" 00:39:48.294 }, 00:39:48.294 "method": "bdev_nvme_attach_controller" 00:39:48.294 }, 00:39:48.294 { 00:39:48.294 "method": "bdev_wait_for_examine" 00:39:48.294 } 00:39:48.294 ] 00:39:48.294 } 00:39:48.294 ] 00:39:48.294 } 00:39:48.294 [2024-07-13 11:51:22.825238] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:48.294 [2024-07-13 11:51:22.825484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166992 ] 00:39:48.294 [2024-07-13 11:51:22.996180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.551 [2024-07-13 11:51:23.198068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:50.184  Copying: 60/60 [kB] (average 19 MBps) 00:39:50.184 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:50.184 11:51:24 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:50.184 { 00:39:50.184 "subsystems": [ 
00:39:50.184 { 00:39:50.184 "subsystem": "bdev", 00:39:50.184 "config": [ 00:39:50.184 { 00:39:50.184 "params": { 00:39:50.184 "trtype": "pcie", 00:39:50.184 "traddr": "0000:00:10.0", 00:39:50.184 "name": "Nvme0" 00:39:50.184 }, 00:39:50.184 "method": "bdev_nvme_attach_controller" 00:39:50.184 }, 00:39:50.184 { 00:39:50.184 "method": "bdev_wait_for_examine" 00:39:50.184 } 00:39:50.184 ] 00:39:50.184 } 00:39:50.184 ] 00:39:50.184 } 00:39:50.184 [2024-07-13 11:51:24.689210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:39:50.184 [2024-07-13 11:51:24.689425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167020 ] 00:39:50.184 [2024-07-13 11:51:24.860387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:50.442 [2024-07-13 11:51:25.054549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.702  Copying: 1024/1024 [kB] (average 500 MBps) 00:39:51.702 00:39:51.702 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:51.702 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:39:51.702 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:39:51.702 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:39:51.702 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:39:51.702 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:39:51.702 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:52.294 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:39:52.294 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:39:52.294 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:52.294 11:51:26 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:52.294 [2024-07-13 11:51:26.944463] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:39:52.294 [2024-07-13 11:51:26.945227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167071 ] 00:39:52.294 { 00:39:52.294 "subsystems": [ 00:39:52.294 { 00:39:52.294 "subsystem": "bdev", 00:39:52.294 "config": [ 00:39:52.294 { 00:39:52.294 "params": { 00:39:52.294 "trtype": "pcie", 00:39:52.294 "traddr": "0000:00:10.0", 00:39:52.294 "name": "Nvme0" 00:39:52.294 }, 00:39:52.294 "method": "bdev_nvme_attach_controller" 00:39:52.294 }, 00:39:52.294 { 00:39:52.294 "method": "bdev_wait_for_examine" 00:39:52.294 } 00:39:52.294 ] 00:39:52.294 } 00:39:52.294 ] 00:39:52.294 } 00:39:52.553 [2024-07-13 11:51:27.114947] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.812 [2024-07-13 11:51:27.334911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:54.005  Copying: 60/60 [kB] (average 58 MBps) 00:39:54.005 00:39:54.005 11:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:39:54.005 11:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:39:54.005 11:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:54.005 11:51:28 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:54.263 { 00:39:54.263 "subsystems": [ 00:39:54.263 { 00:39:54.263 "subsystem": "bdev", 00:39:54.263 "config": [ 00:39:54.263 { 00:39:54.263 "params": { 00:39:54.263 "trtype": "pcie", 00:39:54.263 "traddr": "0000:00:10.0", 00:39:54.263 "name": "Nvme0" 00:39:54.263 }, 00:39:54.263 "method": "bdev_nvme_attach_controller" 00:39:54.263 }, 00:39:54.263 { 00:39:54.263 "method": "bdev_wait_for_examine" 00:39:54.263 } 00:39:54.263 ] 00:39:54.263 } 00:39:54.263 ] 00:39:54.263 } 00:39:54.263 [2024-07-13 11:51:28.807487] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:39:54.263 [2024-07-13 11:51:28.807707] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167095 ] 00:39:54.263 [2024-07-13 11:51:28.979271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:54.521 [2024-07-13 11:51:29.162292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:55.716  Copying: 60/60 [kB] (average 58 MBps) 00:39:55.716 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:55.975 11:51:30 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:55.975 [2024-07-13 11:51:30.548762] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:39:55.975 [2024-07-13 11:51:30.549181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167123 ] 00:39:55.975 { 00:39:55.975 "subsystems": [ 00:39:55.975 { 00:39:55.975 "subsystem": "bdev", 00:39:55.975 "config": [ 00:39:55.975 { 00:39:55.975 "params": { 00:39:55.975 "trtype": "pcie", 00:39:55.975 "traddr": "0000:00:10.0", 00:39:55.975 "name": "Nvme0" 00:39:55.975 }, 00:39:55.975 "method": "bdev_nvme_attach_controller" 00:39:55.975 }, 00:39:55.975 { 00:39:55.975 "method": "bdev_wait_for_examine" 00:39:55.975 } 00:39:55.975 ] 00:39:55.975 } 00:39:55.975 ] 00:39:55.975 } 00:39:55.975 [2024-07-13 11:51:30.720945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:56.234 [2024-07-13 11:51:30.916769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.732  Copying: 1024/1024 [kB] (average 1000 MBps) 00:39:57.732 00:39:57.732 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:39:57.732 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:39:57.732 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:39:57.732 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:39:57.732 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:39:57.732 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:39:57.732 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:39:57.732 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:58.298 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:39:58.298 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:39:58.298 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:58.298 11:51:32 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:39:58.298 [2024-07-13 11:51:32.825285] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:39:58.298 [2024-07-13 11:51:32.825520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167151 ] 00:39:58.298 { 00:39:58.298 "subsystems": [ 00:39:58.298 { 00:39:58.298 "subsystem": "bdev", 00:39:58.298 "config": [ 00:39:58.298 { 00:39:58.298 "params": { 00:39:58.298 "trtype": "pcie", 00:39:58.298 "traddr": "0000:00:10.0", 00:39:58.298 "name": "Nvme0" 00:39:58.298 }, 00:39:58.298 "method": "bdev_nvme_attach_controller" 00:39:58.298 }, 00:39:58.298 { 00:39:58.298 "method": "bdev_wait_for_examine" 00:39:58.298 } 00:39:58.298 ] 00:39:58.298 } 00:39:58.298 ] 00:39:58.298 } 00:39:58.298 [2024-07-13 11:51:32.996638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:58.556 [2024-07-13 11:51:33.191186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.757  Copying: 56/56 [kB] (average 54 MBps) 00:39:59.757 00:39:59.757 11:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:39:59.757 11:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:39:59.757 11:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:39:59.757 11:51:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:00.015 { 00:40:00.015 "subsystems": [ 00:40:00.015 { 00:40:00.015 "subsystem": "bdev", 00:40:00.015 "config": [ 00:40:00.015 { 00:40:00.015 "params": { 00:40:00.015 "trtype": "pcie", 00:40:00.015 "traddr": "0000:00:10.0", 00:40:00.015 "name": "Nvme0" 00:40:00.015 }, 00:40:00.015 "method": "bdev_nvme_attach_controller" 00:40:00.015 }, 00:40:00.015 { 00:40:00.015 "method": "bdev_wait_for_examine" 00:40:00.015 } 00:40:00.015 ] 00:40:00.015 } 00:40:00.015 ] 00:40:00.015 } 00:40:00.015 [2024-07-13 11:51:34.570831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:00.015 [2024-07-13 11:51:34.571074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167182 ] 00:40:00.015 [2024-07-13 11:51:34.734514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:00.273 [2024-07-13 11:51:34.915736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:01.908  Copying: 56/56 [kB] (average 27 MBps) 00:40:01.908 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:01.908 11:51:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:01.908 { 00:40:01.908 "subsystems": [ 00:40:01.908 { 00:40:01.908 "subsystem": "bdev", 00:40:01.908 "config": [ 00:40:01.908 { 00:40:01.908 "params": { 00:40:01.908 "trtype": "pcie", 00:40:01.908 "traddr": "0000:00:10.0", 00:40:01.908 "name": "Nvme0" 00:40:01.908 }, 00:40:01.908 "method": "bdev_nvme_attach_controller" 00:40:01.908 }, 00:40:01.908 { 00:40:01.908 "method": "bdev_wait_for_examine" 00:40:01.908 } 00:40:01.908 ] 00:40:01.908 } 00:40:01.908 ] 00:40:01.908 } 00:40:01.908 [2024-07-13 11:51:36.423889] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:01.908 [2024-07-13 11:51:36.424101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167225 ] 00:40:01.908 [2024-07-13 11:51:36.592242] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:02.167 [2024-07-13 11:51:36.782388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:03.363  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:03.363 00:40:03.363 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:03.363 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:40:03.363 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:40:03.363 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:40:03.363 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:40:03.363 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:03.363 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:03.931 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:40:03.932 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:03.932 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:03.932 11:51:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:03.932 [2024-07-13 11:51:38.593204] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:03.932 [2024-07-13 11:51:38.593469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167253 ] 00:40:03.932 { 00:40:03.932 "subsystems": [ 00:40:03.932 { 00:40:03.932 "subsystem": "bdev", 00:40:03.932 "config": [ 00:40:03.932 { 00:40:03.932 "params": { 00:40:03.932 "trtype": "pcie", 00:40:03.932 "traddr": "0000:00:10.0", 00:40:03.932 "name": "Nvme0" 00:40:03.932 }, 00:40:03.932 "method": "bdev_nvme_attach_controller" 00:40:03.932 }, 00:40:03.932 { 00:40:03.932 "method": "bdev_wait_for_examine" 00:40:03.932 } 00:40:03.932 ] 00:40:03.932 } 00:40:03.932 ] 00:40:03.932 } 00:40:04.191 [2024-07-13 11:51:38.764316] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:04.450 [2024-07-13 11:51:38.960240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.642  Copying: 56/56 [kB] (average 54 MBps) 00:40:05.642 00:40:05.642 11:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:40:05.642 11:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:05.642 11:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:05.642 11:51:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:05.900 [2024-07-13 11:51:40.430077] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:05.900 [2024-07-13 11:51:40.430291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167284 ] 00:40:05.900 { 00:40:05.900 "subsystems": [ 00:40:05.900 { 00:40:05.900 "subsystem": "bdev", 00:40:05.900 "config": [ 00:40:05.900 { 00:40:05.900 "params": { 00:40:05.900 "trtype": "pcie", 00:40:05.900 "traddr": "0000:00:10.0", 00:40:05.900 "name": "Nvme0" 00:40:05.900 }, 00:40:05.900 "method": "bdev_nvme_attach_controller" 00:40:05.900 }, 00:40:05.900 { 00:40:05.900 "method": "bdev_wait_for_examine" 00:40:05.900 } 00:40:05.900 ] 00:40:05.900 } 00:40:05.900 ] 00:40:05.900 } 00:40:05.900 [2024-07-13 11:51:40.599861] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.158 [2024-07-13 11:51:40.780698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:07.352  Copying: 56/56 [kB] (average 54 MBps) 00:40:07.352 00:40:07.352 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:07.352 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:40:07.353 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:07.353 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:07.353 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:40:07.353 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:07.353 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:07.353 11:51:42 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:07.353 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:07.353 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:07.353 11:51:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:07.612 { 00:40:07.612 "subsystems": [ 00:40:07.612 { 00:40:07.612 "subsystem": "bdev", 00:40:07.612 "config": [ 00:40:07.612 { 00:40:07.612 "params": { 00:40:07.612 "trtype": "pcie", 00:40:07.612 "traddr": "0000:00:10.0", 00:40:07.612 "name": "Nvme0" 00:40:07.612 }, 00:40:07.612 "method": "bdev_nvme_attach_controller" 00:40:07.612 }, 00:40:07.612 { 00:40:07.612 "method": "bdev_wait_for_examine" 00:40:07.612 } 00:40:07.612 ] 00:40:07.612 } 00:40:07.612 ] 00:40:07.612 } 00:40:07.612 [2024-07-13 11:51:42.165105] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:07.612 [2024-07-13 11:51:42.165315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167305 ] 00:40:07.612 [2024-07-13 11:51:42.336091] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.870 [2024-07-13 11:51:42.527242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:09.369  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:09.369 00:40:09.369 11:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:40:09.369 11:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:09.369 11:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:40:09.369 11:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:40:09.369 11:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:40:09.369 11:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:40:09.369 11:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:09.369 11:51:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:09.627 11:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:40:09.627 11:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:09.627 11:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:09.627 11:51:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:09.627 [2024-07-13 11:51:44.359950] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:09.627 [2024-07-13 11:51:44.360173] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167340 ] 00:40:09.627 { 00:40:09.627 "subsystems": [ 00:40:09.627 { 00:40:09.627 "subsystem": "bdev", 00:40:09.627 "config": [ 00:40:09.627 { 00:40:09.627 "params": { 00:40:09.627 "trtype": "pcie", 00:40:09.627 "traddr": "0000:00:10.0", 00:40:09.627 "name": "Nvme0" 00:40:09.627 }, 00:40:09.627 "method": "bdev_nvme_attach_controller" 00:40:09.627 }, 00:40:09.627 { 00:40:09.627 "method": "bdev_wait_for_examine" 00:40:09.627 } 00:40:09.627 ] 00:40:09.627 } 00:40:09.627 ] 00:40:09.627 } 00:40:09.885 [2024-07-13 11:51:44.528913] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:10.143 [2024-07-13 11:51:44.736077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:11.334  Copying: 48/48 [kB] (average 46 MBps) 00:40:11.334 00:40:11.334 11:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:40:11.334 11:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:11.334 11:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:11.334 11:51:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:11.592 [2024-07-13 11:51:46.111113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:11.592 [2024-07-13 11:51:46.111379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167370 ] 00:40:11.592 { 00:40:11.592 "subsystems": [ 00:40:11.592 { 00:40:11.592 "subsystem": "bdev", 00:40:11.592 "config": [ 00:40:11.592 { 00:40:11.592 "params": { 00:40:11.592 "trtype": "pcie", 00:40:11.592 "traddr": "0000:00:10.0", 00:40:11.592 "name": "Nvme0" 00:40:11.592 }, 00:40:11.592 "method": "bdev_nvme_attach_controller" 00:40:11.592 }, 00:40:11.592 { 00:40:11.592 "method": "bdev_wait_for_examine" 00:40:11.592 } 00:40:11.592 ] 00:40:11.592 } 00:40:11.592 ] 00:40:11.592 } 00:40:11.592 [2024-07-13 11:51:46.282561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.850 [2024-07-13 11:51:46.466517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.485  Copying: 48/48 [kB] (average 46 MBps) 00:40:13.485 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:13.485 11:51:47 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:13.485 11:51:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:13.485 { 00:40:13.485 "subsystems": [ 00:40:13.485 { 00:40:13.485 "subsystem": "bdev", 00:40:13.485 "config": [ 00:40:13.485 { 00:40:13.485 "params": { 00:40:13.485 "trtype": "pcie", 00:40:13.485 "traddr": "0000:00:10.0", 00:40:13.485 "name": "Nvme0" 00:40:13.485 }, 00:40:13.485 "method": "bdev_nvme_attach_controller" 00:40:13.485 }, 00:40:13.485 { 00:40:13.485 "method": "bdev_wait_for_examine" 00:40:13.485 } 00:40:13.485 ] 00:40:13.485 } 00:40:13.485 ] 00:40:13.485 } 00:40:13.485 [2024-07-13 11:51:47.934817] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:13.485 [2024-07-13 11:51:47.935053] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167407 ] 00:40:13.485 [2024-07-13 11:51:48.104323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.744 [2024-07-13 11:51:48.290731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:14.939  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:14.939 00:40:14.939 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:40:14.939 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:40:14.939 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:40:14.939 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:40:14.939 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:40:14.939 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:40:14.939 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:15.506 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:40:15.506 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:40:15.506 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:15.506 11:51:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:15.507 [2024-07-13 11:51:50.045560] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:15.507 [2024-07-13 11:51:50.045759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167435 ] 00:40:15.507 { 00:40:15.507 "subsystems": [ 00:40:15.507 { 00:40:15.507 "subsystem": "bdev", 00:40:15.507 "config": [ 00:40:15.507 { 00:40:15.507 "params": { 00:40:15.507 "trtype": "pcie", 00:40:15.507 "traddr": "0000:00:10.0", 00:40:15.507 "name": "Nvme0" 00:40:15.507 }, 00:40:15.507 "method": "bdev_nvme_attach_controller" 00:40:15.507 }, 00:40:15.507 { 00:40:15.507 "method": "bdev_wait_for_examine" 00:40:15.507 } 00:40:15.507 ] 00:40:15.507 } 00:40:15.507 ] 00:40:15.507 } 00:40:15.507 [2024-07-13 11:51:50.216378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:15.765 [2024-07-13 11:51:50.414784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:17.401  Copying: 48/48 [kB] (average 46 MBps) 00:40:17.401 00:40:17.401 11:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:40:17.401 11:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:40:17.401 11:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:17.401 11:51:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:17.401 [2024-07-13 11:51:51.885986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:17.401 [2024-07-13 11:51:51.886189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167462 ] 00:40:17.401 { 00:40:17.401 "subsystems": [ 00:40:17.401 { 00:40:17.401 "subsystem": "bdev", 00:40:17.401 "config": [ 00:40:17.401 { 00:40:17.401 "params": { 00:40:17.401 "trtype": "pcie", 00:40:17.401 "traddr": "0000:00:10.0", 00:40:17.401 "name": "Nvme0" 00:40:17.401 }, 00:40:17.401 "method": "bdev_nvme_attach_controller" 00:40:17.401 }, 00:40:17.401 { 00:40:17.401 "method": "bdev_wait_for_examine" 00:40:17.401 } 00:40:17.401 ] 00:40:17.401 } 00:40:17.401 ] 00:40:17.401 } 00:40:17.401 [2024-07-13 11:51:52.056458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.660 [2024-07-13 11:51:52.258498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:18.853  Copying: 48/48 [kB] (average 46 MBps) 00:40:18.853 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:40:18.853 11:51:53 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:18.853 11:51:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:19.111 { 00:40:19.111 "subsystems": [ 00:40:19.111 { 00:40:19.111 "subsystem": "bdev", 00:40:19.111 "config": [ 00:40:19.111 { 00:40:19.112 "params": { 00:40:19.112 "trtype": "pcie", 00:40:19.112 "traddr": "0000:00:10.0", 00:40:19.112 "name": "Nvme0" 00:40:19.112 }, 00:40:19.112 "method": "bdev_nvme_attach_controller" 00:40:19.112 }, 00:40:19.112 { 00:40:19.112 "method": "bdev_wait_for_examine" 00:40:19.112 } 00:40:19.112 ] 00:40:19.112 } 00:40:19.112 ] 00:40:19.112 } 00:40:19.112 [2024-07-13 11:51:53.642525] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:19.112 [2024-07-13 11:51:53.642736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167494 ] 00:40:19.112 [2024-07-13 11:51:53.808795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:19.370 [2024-07-13 11:51:53.988490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:21.061  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:21.061 00:40:21.061 ************************************ 00:40:21.061 END TEST dd_rw 00:40:21.061 ************************************ 00:40:21.061 00:40:21.061 real 0m34.905s 00:40:21.061 user 0m28.234s 00:40:21.061 sys 0m5.479s 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:21.061 ************************************ 00:40:21.061 START TEST dd_rw_offset 00:40:21.061 ************************************ 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=kg77k300hqdrfcvbyknqn8g0v2dwg91rvvs8cs6gs7jb5uss363v1ckjaec49m0we00pnsxxiaitragaspfyt8wf42l42y6th2kta6xbziyf2nea0lhted8rhah3wkogai88fcj586xr8i4kub0ey5zyuac3m1ef53s29f7j1hjwy19whzbew3egj9j3j0rdozap3gvpwww6ry05x57oh8c9bosbp6dp2h2cyacf2tbhjtluwc28kqnv574pdxrjqpgw6vw2oaong3pskms53uerx1bb1tplsc2pwith135k17sbrr5mh6tv9qrsjg9pv4y75h9phldfb6m1wz2fll5s6ap0to5h9dtz2yqsve2u18luvhj4057egdihhnzsr27hxnwz6tqhfqf27h7h5a5gof588r98x9055lta4lqvn7b742mgaft2pr9lu1kjjeqk0sumohy4kxnrwh0y5uhyv9b313a8xyd72mpzbm769cry68bc54ig3pxhckf39sjsnldr3h4j2kduipjfb6o13xivvu974bi5kkfxovungtlyffd8arxr2ylsct75w8nd0wg4pmou7z2wilhm02ugw1v8rpjumotddhmt21fmhv8m79xxavba3nnmzfpw43dxlcpnyit7ormp2euij2b3m9jv6mtjnisp4bsb76qkg87s1j5z4gu4wo5pavivbdip86n4cbetin0ky03tytk1ygwiayc5qm1twcn1j4cajcn0l67srw41tw9z97dqf68kh2shodl567ov1bd4ri9b6z87l9ikwcszo2gvu2strb4x5uvtcl6d1t6ghzxjf1vr25mbgnuvbkowldhwvhk5kr578pzb23lxva7zx07hg7p3xmywim6qc0wkvt9dnv6o0y3bvmevymn01kvynu86sm57sivxxenye5epbubpydyia4ni3kevyxkxhoxl4v7qop76mn34uogunrcw61ftja0682rbbevjrv92buqwyx9pz3s70vvapzyknj0fchpemuvcqwklpbjemq9el3yeerxbgxq7slcs8jtt5qhl6os5yh3s3dtmxwtwwkq4312sk09rsll5mahyadz7hd3vn26sumi51orwualgl7pvr993gw1go0v65gsz0iggfnvr9xsatletxdeeneyxnin2eiwdmy62dmrcb91k00zsc89thsrxtg5nv8oj7bm3lg6bnpzmekrt8iu71zmetye30gl2k7x9j21mq7x976vrnsqyhfe65ogvho1un7b1995p0revvcxk4jmasbzioxd6ulv1xl5p8mg2jys1npfl8h669m0ta9ccdbms4j4afi7znoqcj0345qela8o5g2v1a3nuity6oopkxrzuf3zj467uzu1izbnyfzdjcn5nrkumzwgmnlj7gpa24gtrmq9ilrl2uyvaoe346xzlvq7uf3yh86cw06y8h800axipwqwyzbkpjl343zlhp9nspflnbg3dmzl57jvyek8f8vt800dnoc0x5dlwswplp08dq5e9aqzjxyzihp6dcczuacmbvo5ntxk0eqgw39fitzsbaxmi2lvlfmp33qsb1lsvr86cp6no37krheurtugk2a8fks0ayev2p29bcpkj7cpm3n58marsqzv7zotyamx10c75alwxcssf1upe6ttie7h1sk1nqkoqxi1v82pcta23sp8xrkdv6ryh61ju0omx5fopwqjp10dbix97be4fvmkuxs18z13gex6hygvf8oz7rmdzjrt9v1kwd3lkiu5m7v6yqt5i3uzfhhne93twx8hl5i3avngzlooglfqanrigv1jfkxzdyszelt1mdyamhh5m1yc4ribaees5c4dlhcp2nshkotuosv2fecegzilg5ayt5cugffm2hgwksr9x6zw3r5c26dtf02keawkp7hq3faawb4u3ck076qxwoto8fxs5ndjsicc1n2aedhpxcwuiek76to2odgilaxqy8llcil3s4kwzl36jo8jeze7p5calscb79dt2ocl6eh8pv013geb8f6dl1wgm42zuttzhh5hni48d4ib95aanouaq2n9351wlga39obaz0umkaft23brcoo5smaje6zt4ubng61ikvz6mbqalori9qa1ket2m5sauhridlbvo4xfv2jd795dxzpor0h3gjsuxtnurzrl8qgrwaaivve7hs26kq51jrnzgl2pfptjq04xd1zhmcjnn0h3ymattz7fxy4u0qu8frqc1sq4wobuv17n9ayq4qghbusff4ms699w0ew0vz2uuhw80ml11u5s1txqdf5xq8n44f9f96vrby6tgekfre21ae8nzuqadzupgbs9agvr68rk7wmtsoog91xgfxxmahq92xq9i06mxe3mdx5oa87krcg2tc2m4r4xnkq6mdbqmaw8saukt566ed3j59asx79yhpen1p4g4nckjefg7q8xb4b6sjg7943ntdxf7qi2pf6ugehfnzxc284jzpd86vtrbk3y80gmlwynk92w0nmp9wbsd5u62u5xi2dr9puacwh6kfbp73nn356nitaohjrytix0osd3x5rwzur3ijx8m544n2hffwt3ex61xvuiql6y7df3cdozpfv96yqtw8rhwigaq8pfyy40zzgwvk0swj469mgitmvzf5wguekwamf6od59drp6x230p8wbybuso77m2idq92t9i9y0zzawx205lrz1b9zrkwyc4lhr1qz47xs2xul7ca7aervu8zcrcky69fqeuchfbg7a8utrlb6psl6r16ckmjngpfjryp6dnyo6wj8dv7u4akiy7lher94ttat564i4uajfu3ij4ogb9q6b4aukezcsvmnfjuhcy7vee5avtgh6o6kzx14g74jefhhmrul97v7p1vipymrmg36ac7v8x9a0k3yaqz6kfzb9r6mg2s7talcjuz0i5566vxqt1948xhulfibvo7d8j0fmr2ozlma1q1egt307sw23oxehc9xhbqeitq8xsq0jalan3zerh94w4cep49b1soo2ye8e4it4egjr1gpawx92f894kdw5q796iuowqveo8ayaolj47nrdm8cbzhrgdiqxhjxwmrilcb5ezcak1ir7yr05lb70acbsazibazfv79xyjpugll1wxbs5npbu5thyegpx2euuvihti0yf6lnhnnl4e093fi1xhpmathsc48kz08hliy2f8yo5gurjh00mf80zr9ht61iidr7q0trum851jdjgxlc3tkcnbibvw36h0hl6wwjr67o12e9z1wx2fpsvh9w5l4fkt6gr3juok7bgl9g1me1vsjgicwmztzkxx0gngnzy3fx1w6mse2sfd1wsx8zgbx2w4trau7l5b4wfrwzj4y00vxw179xertfp52mkh880p8wcp12eiovzpf2fj6k09jsw53r23svvjfhisrgwod1uzl6hwthe9x3nk0t3ismzule3yzrm6m1b598zs7jsdkp6x86dlhpqt6swfxr2xhmlp3k
xb37dxw49jae02c8cz2rosdtn9yys6ezack85o6u2r6x24gu48z87eqfy56bbt9v296ftp58cf343wwv9i3ho2splh2lvfyakoqscl3ia790d3ktplo1vagp0gpgy2dhmr2izlmexs3inb4whakt4uq1n069pzb9z4quh188vgpiqnv4z810d0hc022qdvpgii5rvwmvldx9alp66xdciixtkha8alq87ycn131bnmbho0yaj13xoj1bzy5q4983mg2wylpofuyc09dabli7lm5nm4etoa9uiwfii6fe1668olhhwvt33ia5rtoidxkegbe6negusfq83u1wm8b5m4nyukzir0jpeodaim31d6m5ggadpvxbdyfzonen9yicneh8jiikesv5uees9t49f8t3iguovhlsyxxxkmv5x4lw8bjwmxp1vjbyxib1rk8qd5rilk04kfu4qps0gqzonmwwzjll4tvvsipmu1u0kd4w2wt3xnbelfwgv3mvupqlulazpt024kpbyug6yyjctcd0btk6b6xsom 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:40:21.061 11:51:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:21.061 { 00:40:21.061 "subsystems": [ 00:40:21.061 { 00:40:21.061 "subsystem": "bdev", 00:40:21.061 "config": [ 00:40:21.061 { 00:40:21.061 "params": { 00:40:21.061 "trtype": "pcie", 00:40:21.061 "traddr": "0000:00:10.0", 00:40:21.061 "name": "Nvme0" 00:40:21.061 }, 00:40:21.061 "method": "bdev_nvme_attach_controller" 00:40:21.061 }, 00:40:21.061 { 00:40:21.061 "method": "bdev_wait_for_examine" 00:40:21.061 } 00:40:21.061 ] 00:40:21.061 } 00:40:21.061 ] 00:40:21.061 } 00:40:21.061 [2024-07-13 11:51:55.572205] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:21.061 [2024-07-13 11:51:55.572425] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167544 ] 00:40:21.061 [2024-07-13 11:51:55.743896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:21.348 [2024-07-13 11:51:55.941286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.547  Copying: 4096/4096 [B] (average 4000 kBps) 00:40:22.547 00:40:22.547 11:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:40:22.547 11:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:40:22.547 11:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:40:22.547 11:51:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:22.805 { 00:40:22.805 "subsystems": [ 00:40:22.805 { 00:40:22.805 "subsystem": "bdev", 00:40:22.805 "config": [ 00:40:22.805 { 00:40:22.805 "params": { 00:40:22.805 "trtype": "pcie", 00:40:22.805 "traddr": "0000:00:10.0", 00:40:22.805 "name": "Nvme0" 00:40:22.805 }, 00:40:22.805 "method": "bdev_nvme_attach_controller" 00:40:22.805 }, 00:40:22.805 { 00:40:22.805 "method": "bdev_wait_for_examine" 00:40:22.805 } 00:40:22.805 ] 00:40:22.805 } 00:40:22.805 ] 00:40:22.805 } 00:40:22.805 [2024-07-13 11:51:57.335607] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
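The dd_rw_offset case generates a 4 KiB random payload (the long string above), writes it one block into the bdev with --seek=1, reads the same block back with --skip=1 --count=1, and then verifies it with the read/[[ == ]] comparison that follows in the trace. A compact sketch of that round trip, reconstructed from the xtrace; the file names and the exact way the payload reaches dd.dump0 are assumptions.

    # Sketch of the traced dd_rw_offset round trip; not the verbatim script.
    data=$(gen_bytes 4096)                                                    # random payload, as in the trace
    printf '%s' "$data" > dd.dump0
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)            # write at block offset 1
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(gen_conf)  # read the same block back
    read -rn4096 data_check < dd.dump1
    [[ $data == "$data_check" ]]                                              # must match byte for byte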
00:40:22.805 [2024-07-13 11:51:57.335841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167581 ] 00:40:22.805 [2024-07-13 11:51:57.508077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:23.064 [2024-07-13 11:51:57.702467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:24.701  Copying: 4096/4096 [B] (average 4000 kBps) 00:40:24.701 00:40:24.701 11:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ kg77k300hqdrfcvbyknqn8g0v2dwg91rvvs8cs6gs7jb5uss363v1ckjaec49m0we00pnsxxiaitragaspfyt8wf42l42y6th2kta6xbziyf2nea0lhted8rhah3wkogai88fcj586xr8i4kub0ey5zyuac3m1ef53s29f7j1hjwy19whzbew3egj9j3j0rdozap3gvpwww6ry05x57oh8c9bosbp6dp2h2cyacf2tbhjtluwc28kqnv574pdxrjqpgw6vw2oaong3pskms53uerx1bb1tplsc2pwith135k17sbrr5mh6tv9qrsjg9pv4y75h9phldfb6m1wz2fll5s6ap0to5h9dtz2yqsve2u18luvhj4057egdihhnzsr27hxnwz6tqhfqf27h7h5a5gof588r98x9055lta4lqvn7b742mgaft2pr9lu1kjjeqk0sumohy4kxnrwh0y5uhyv9b313a8xyd72mpzbm769cry68bc54ig3pxhckf39sjsnldr3h4j2kduipjfb6o13xivvu974bi5kkfxovungtlyffd8arxr2ylsct75w8nd0wg4pmou7z2wilhm02ugw1v8rpjumotddhmt21fmhv8m79xxavba3nnmzfpw43dxlcpnyit7ormp2euij2b3m9jv6mtjnisp4bsb76qkg87s1j5z4gu4wo5pavivbdip86n4cbetin0ky03tytk1ygwiayc5qm1twcn1j4cajcn0l67srw41tw9z97dqf68kh2shodl567ov1bd4ri9b6z87l9ikwcszo2gvu2strb4x5uvtcl6d1t6ghzxjf1vr25mbgnuvbkowldhwvhk5kr578pzb23lxva7zx07hg7p3xmywim6qc0wkvt9dnv6o0y3bvmevymn01kvynu86sm57sivxxenye5epbubpydyia4ni3kevyxkxhoxl4v7qop76mn34uogunrcw61ftja0682rbbevjrv92buqwyx9pz3s70vvapzyknj0fchpemuvcqwklpbjemq9el3yeerxbgxq7slcs8jtt5qhl6os5yh3s3dtmxwtwwkq4312sk09rsll5mahyadz7hd3vn26sumi51orwualgl7pvr993gw1go0v65gsz0iggfnvr9xsatletxdeeneyxnin2eiwdmy62dmrcb91k00zsc89thsrxtg5nv8oj7bm3lg6bnpzmekrt8iu71zmetye30gl2k7x9j21mq7x976vrnsqyhfe65ogvho1un7b1995p0revvcxk4jmasbzioxd6ulv1xl5p8mg2jys1npfl8h669m0ta9ccdbms4j4afi7znoqcj0345qela8o5g2v1a3nuity6oopkxrzuf3zj467uzu1izbnyfzdjcn5nrkumzwgmnlj7gpa24gtrmq9ilrl2uyvaoe346xzlvq7uf3yh86cw06y8h800axipwqwyzbkpjl343zlhp9nspflnbg3dmzl57jvyek8f8vt800dnoc0x5dlwswplp08dq5e9aqzjxyzihp6dcczuacmbvo5ntxk0eqgw39fitzsbaxmi2lvlfmp33qsb1lsvr86cp6no37krheurtugk2a8fks0ayev2p29bcpkj7cpm3n58marsqzv7zotyamx10c75alwxcssf1upe6ttie7h1sk1nqkoqxi1v82pcta23sp8xrkdv6ryh61ju0omx5fopwqjp10dbix97be4fvmkuxs18z13gex6hygvf8oz7rmdzjrt9v1kwd3lkiu5m7v6yqt5i3uzfhhne93twx8hl5i3avngzlooglfqanrigv1jfkxzdyszelt1mdyamhh5m1yc4ribaees5c4dlhcp2nshkotuosv2fecegzilg5ayt5cugffm2hgwksr9x6zw3r5c26dtf02keawkp7hq3faawb4u3ck076qxwoto8fxs5ndjsicc1n2aedhpxcwuiek76to2odgilaxqy8llcil3s4kwzl36jo8jeze7p5calscb79dt2ocl6eh8pv013geb8f6dl1wgm42zuttzhh5hni48d4ib95aanouaq2n9351wlga39obaz0umkaft23brcoo5smaje6zt4ubng61ikvz6mbqalori9qa1ket2m5sauhridlbvo4xfv2jd795dxzpor0h3gjsuxtnurzrl8qgrwaaivve7hs26kq51jrnzgl2pfptjq04xd1zhmcjnn0h3ymattz7fxy4u0qu8frqc1sq4wobuv17n9ayq4qghbusff4ms699w0ew0vz2uuhw80ml11u5s1txqdf5xq8n44f9f96vrby6tgekfre21ae8nzuqadzupgbs9agvr68rk7wmtsoog91xgfxxmahq92xq9i06mxe3mdx5oa87krcg2tc2m4r4xnkq6mdbqmaw8saukt566ed3j59asx79yhpen1p4g4nckjefg7q8xb4b6sjg7943ntdxf7qi2pf6ugehfnzxc284jzpd86vtrbk3y80gmlwynk92w0nmp9wbsd5u62u5xi2dr9puacwh6kfbp73nn356nitaohjrytix0osd3x5rwzur3ijx8m544n2hffwt3ex61xvuiql6y7df3cdozpfv96yqtw8rhwigaq8pfyy40zzgwvk0swj469mgitmvzf5wguekwamf6od59drp6x230p8wbybuso77m2idq92t9i9y0zzawx205lrz1b9zrkwyc4lhr1qz47
xs2xul7ca7aervu8zcrcky69fqeuchfbg7a8utrlb6psl6r16ckmjngpfjryp6dnyo6wj8dv7u4akiy7lher94ttat564i4uajfu3ij4ogb9q6b4aukezcsvmnfjuhcy7vee5avtgh6o6kzx14g74jefhhmrul97v7p1vipymrmg36ac7v8x9a0k3yaqz6kfzb9r6mg2s7talcjuz0i5566vxqt1948xhulfibvo7d8j0fmr2ozlma1q1egt307sw23oxehc9xhbqeitq8xsq0jalan3zerh94w4cep49b1soo2ye8e4it4egjr1gpawx92f894kdw5q796iuowqveo8ayaolj47nrdm8cbzhrgdiqxhjxwmrilcb5ezcak1ir7yr05lb70acbsazibazfv79xyjpugll1wxbs5npbu5thyegpx2euuvihti0yf6lnhnnl4e093fi1xhpmathsc48kz08hliy2f8yo5gurjh00mf80zr9ht61iidr7q0trum851jdjgxlc3tkcnbibvw36h0hl6wwjr67o12e9z1wx2fpsvh9w5l4fkt6gr3juok7bgl9g1me1vsjgicwmztzkxx0gngnzy3fx1w6mse2sfd1wsx8zgbx2w4trau7l5b4wfrwzj4y00vxw179xertfp52mkh880p8wcp12eiovzpf2fj6k09jsw53r23svvjfhisrgwod1uzl6hwthe9x3nk0t3ismzule3yzrm6m1b598zs7jsdkp6x86dlhpqt6swfxr2xhmlp3kxb37dxw49jae02c8cz2rosdtn9yys6ezack85o6u2r6x24gu48z87eqfy56bbt9v296ftp58cf343wwv9i3ho2splh2lvfyakoqscl3ia790d3ktplo1vagp0gpgy2dhmr2izlmexs3inb4whakt4uq1n069pzb9z4quh188vgpiqnv4z810d0hc022qdvpgii5rvwmvldx9alp66xdciixtkha8alq87ycn131bnmbho0yaj13xoj1bzy5q4983mg2wylpofuyc09dabli7lm5nm4etoa9uiwfii6fe1668olhhwvt33ia5rtoidxkegbe6negusfq83u1wm8b5m4nyukzir0jpeodaim31d6m5ggadpvxbdyfzonen9yicneh8jiikesv5uees9t49f8t3iguovhlsyxxxkmv5x4lw8bjwmxp1vjbyxib1rk8qd5rilk04kfu4qps0gqzonmwwzjll4tvvsipmu1u0kd4w2wt3xnbelfwgv3mvupqlulazpt024kpbyug6yyjctcd0btk6b6xsom == \k\g\7\7\k\3\0\0\h\q\d\r\f\c\v\b\y\k\n\q\n\8\g\0\v\2\d\w\g\9\1\r\v\v\s\8\c\s\6\g\s\7\j\b\5\u\s\s\3\6\3\v\1\c\k\j\a\e\c\4\9\m\0\w\e\0\0\p\n\s\x\x\i\a\i\t\r\a\g\a\s\p\f\y\t\8\w\f\4\2\l\4\2\y\6\t\h\2\k\t\a\6\x\b\z\i\y\f\2\n\e\a\0\l\h\t\e\d\8\r\h\a\h\3\w\k\o\g\a\i\8\8\f\c\j\5\8\6\x\r\8\i\4\k\u\b\0\e\y\5\z\y\u\a\c\3\m\1\e\f\5\3\s\2\9\f\7\j\1\h\j\w\y\1\9\w\h\z\b\e\w\3\e\g\j\9\j\3\j\0\r\d\o\z\a\p\3\g\v\p\w\w\w\6\r\y\0\5\x\5\7\o\h\8\c\9\b\o\s\b\p\6\d\p\2\h\2\c\y\a\c\f\2\t\b\h\j\t\l\u\w\c\2\8\k\q\n\v\5\7\4\p\d\x\r\j\q\p\g\w\6\v\w\2\o\a\o\n\g\3\p\s\k\m\s\5\3\u\e\r\x\1\b\b\1\t\p\l\s\c\2\p\w\i\t\h\1\3\5\k\1\7\s\b\r\r\5\m\h\6\t\v\9\q\r\s\j\g\9\p\v\4\y\7\5\h\9\p\h\l\d\f\b\6\m\1\w\z\2\f\l\l\5\s\6\a\p\0\t\o\5\h\9\d\t\z\2\y\q\s\v\e\2\u\1\8\l\u\v\h\j\4\0\5\7\e\g\d\i\h\h\n\z\s\r\2\7\h\x\n\w\z\6\t\q\h\f\q\f\2\7\h\7\h\5\a\5\g\o\f\5\8\8\r\9\8\x\9\0\5\5\l\t\a\4\l\q\v\n\7\b\7\4\2\m\g\a\f\t\2\p\r\9\l\u\1\k\j\j\e\q\k\0\s\u\m\o\h\y\4\k\x\n\r\w\h\0\y\5\u\h\y\v\9\b\3\1\3\a\8\x\y\d\7\2\m\p\z\b\m\7\6\9\c\r\y\6\8\b\c\5\4\i\g\3\p\x\h\c\k\f\3\9\s\j\s\n\l\d\r\3\h\4\j\2\k\d\u\i\p\j\f\b\6\o\1\3\x\i\v\v\u\9\7\4\b\i\5\k\k\f\x\o\v\u\n\g\t\l\y\f\f\d\8\a\r\x\r\2\y\l\s\c\t\7\5\w\8\n\d\0\w\g\4\p\m\o\u\7\z\2\w\i\l\h\m\0\2\u\g\w\1\v\8\r\p\j\u\m\o\t\d\d\h\m\t\2\1\f\m\h\v\8\m\7\9\x\x\a\v\b\a\3\n\n\m\z\f\p\w\4\3\d\x\l\c\p\n\y\i\t\7\o\r\m\p\2\e\u\i\j\2\b\3\m\9\j\v\6\m\t\j\n\i\s\p\4\b\s\b\7\6\q\k\g\8\7\s\1\j\5\z\4\g\u\4\w\o\5\p\a\v\i\v\b\d\i\p\8\6\n\4\c\b\e\t\i\n\0\k\y\0\3\t\y\t\k\1\y\g\w\i\a\y\c\5\q\m\1\t\w\c\n\1\j\4\c\a\j\c\n\0\l\6\7\s\r\w\4\1\t\w\9\z\9\7\d\q\f\6\8\k\h\2\s\h\o\d\l\5\6\7\o\v\1\b\d\4\r\i\9\b\6\z\8\7\l\9\i\k\w\c\s\z\o\2\g\v\u\2\s\t\r\b\4\x\5\u\v\t\c\l\6\d\1\t\6\g\h\z\x\j\f\1\v\r\2\5\m\b\g\n\u\v\b\k\o\w\l\d\h\w\v\h\k\5\k\r\5\7\8\p\z\b\2\3\l\x\v\a\7\z\x\0\7\h\g\7\p\3\x\m\y\w\i\m\6\q\c\0\w\k\v\t\9\d\n\v\6\o\0\y\3\b\v\m\e\v\y\m\n\0\1\k\v\y\n\u\8\6\s\m\5\7\s\i\v\x\x\e\n\y\e\5\e\p\b\u\b\p\y\d\y\i\a\4\n\i\3\k\e\v\y\x\k\x\h\o\x\l\4\v\7\q\o\p\7\6\m\n\3\4\u\o\g\u\n\r\c\w\6\1\f\t\j\a\0\6\8\2\r\b\b\e\v\j\r\v\9\2\b\u\q\w\y\x\9\p\z\3\s\7\0\v\v\a\p\z\y\k\n\j\0\f\c\h\p\e\m\u\v\c\q\w\k\l\p\b\j\e\m\q\9\e\l\3\y\e\e\r\x\b\g\x\q\7\s\l\c\s\8\j\t\t\5\q\h\l\6\o\s\5\y\h\3\s\3\d\t\m\x\w\t\w\w\k\q\4\3\1\2\s\k\0\9\r\s\l\l\5\m\a\h\y\a\d\z\7\h\
d\3\v\n\2\6\s\u\m\i\5\1\o\r\w\u\a\l\g\l\7\p\v\r\9\9\3\g\w\1\g\o\0\v\6\5\g\s\z\0\i\g\g\f\n\v\r\9\x\s\a\t\l\e\t\x\d\e\e\n\e\y\x\n\i\n\2\e\i\w\d\m\y\6\2\d\m\r\c\b\9\1\k\0\0\z\s\c\8\9\t\h\s\r\x\t\g\5\n\v\8\o\j\7\b\m\3\l\g\6\b\n\p\z\m\e\k\r\t\8\i\u\7\1\z\m\e\t\y\e\3\0\g\l\2\k\7\x\9\j\2\1\m\q\7\x\9\7\6\v\r\n\s\q\y\h\f\e\6\5\o\g\v\h\o\1\u\n\7\b\1\9\9\5\p\0\r\e\v\v\c\x\k\4\j\m\a\s\b\z\i\o\x\d\6\u\l\v\1\x\l\5\p\8\m\g\2\j\y\s\1\n\p\f\l\8\h\6\6\9\m\0\t\a\9\c\c\d\b\m\s\4\j\4\a\f\i\7\z\n\o\q\c\j\0\3\4\5\q\e\l\a\8\o\5\g\2\v\1\a\3\n\u\i\t\y\6\o\o\p\k\x\r\z\u\f\3\z\j\4\6\7\u\z\u\1\i\z\b\n\y\f\z\d\j\c\n\5\n\r\k\u\m\z\w\g\m\n\l\j\7\g\p\a\2\4\g\t\r\m\q\9\i\l\r\l\2\u\y\v\a\o\e\3\4\6\x\z\l\v\q\7\u\f\3\y\h\8\6\c\w\0\6\y\8\h\8\0\0\a\x\i\p\w\q\w\y\z\b\k\p\j\l\3\4\3\z\l\h\p\9\n\s\p\f\l\n\b\g\3\d\m\z\l\5\7\j\v\y\e\k\8\f\8\v\t\8\0\0\d\n\o\c\0\x\5\d\l\w\s\w\p\l\p\0\8\d\q\5\e\9\a\q\z\j\x\y\z\i\h\p\6\d\c\c\z\u\a\c\m\b\v\o\5\n\t\x\k\0\e\q\g\w\3\9\f\i\t\z\s\b\a\x\m\i\2\l\v\l\f\m\p\3\3\q\s\b\1\l\s\v\r\8\6\c\p\6\n\o\3\7\k\r\h\e\u\r\t\u\g\k\2\a\8\f\k\s\0\a\y\e\v\2\p\2\9\b\c\p\k\j\7\c\p\m\3\n\5\8\m\a\r\s\q\z\v\7\z\o\t\y\a\m\x\1\0\c\7\5\a\l\w\x\c\s\s\f\1\u\p\e\6\t\t\i\e\7\h\1\s\k\1\n\q\k\o\q\x\i\1\v\8\2\p\c\t\a\2\3\s\p\8\x\r\k\d\v\6\r\y\h\6\1\j\u\0\o\m\x\5\f\o\p\w\q\j\p\1\0\d\b\i\x\9\7\b\e\4\f\v\m\k\u\x\s\1\8\z\1\3\g\e\x\6\h\y\g\v\f\8\o\z\7\r\m\d\z\j\r\t\9\v\1\k\w\d\3\l\k\i\u\5\m\7\v\6\y\q\t\5\i\3\u\z\f\h\h\n\e\9\3\t\w\x\8\h\l\5\i\3\a\v\n\g\z\l\o\o\g\l\f\q\a\n\r\i\g\v\1\j\f\k\x\z\d\y\s\z\e\l\t\1\m\d\y\a\m\h\h\5\m\1\y\c\4\r\i\b\a\e\e\s\5\c\4\d\l\h\c\p\2\n\s\h\k\o\t\u\o\s\v\2\f\e\c\e\g\z\i\l\g\5\a\y\t\5\c\u\g\f\f\m\2\h\g\w\k\s\r\9\x\6\z\w\3\r\5\c\2\6\d\t\f\0\2\k\e\a\w\k\p\7\h\q\3\f\a\a\w\b\4\u\3\c\k\0\7\6\q\x\w\o\t\o\8\f\x\s\5\n\d\j\s\i\c\c\1\n\2\a\e\d\h\p\x\c\w\u\i\e\k\7\6\t\o\2\o\d\g\i\l\a\x\q\y\8\l\l\c\i\l\3\s\4\k\w\z\l\3\6\j\o\8\j\e\z\e\7\p\5\c\a\l\s\c\b\7\9\d\t\2\o\c\l\6\e\h\8\p\v\0\1\3\g\e\b\8\f\6\d\l\1\w\g\m\4\2\z\u\t\t\z\h\h\5\h\n\i\4\8\d\4\i\b\9\5\a\a\n\o\u\a\q\2\n\9\3\5\1\w\l\g\a\3\9\o\b\a\z\0\u\m\k\a\f\t\2\3\b\r\c\o\o\5\s\m\a\j\e\6\z\t\4\u\b\n\g\6\1\i\k\v\z\6\m\b\q\a\l\o\r\i\9\q\a\1\k\e\t\2\m\5\s\a\u\h\r\i\d\l\b\v\o\4\x\f\v\2\j\d\7\9\5\d\x\z\p\o\r\0\h\3\g\j\s\u\x\t\n\u\r\z\r\l\8\q\g\r\w\a\a\i\v\v\e\7\h\s\2\6\k\q\5\1\j\r\n\z\g\l\2\p\f\p\t\j\q\0\4\x\d\1\z\h\m\c\j\n\n\0\h\3\y\m\a\t\t\z\7\f\x\y\4\u\0\q\u\8\f\r\q\c\1\s\q\4\w\o\b\u\v\1\7\n\9\a\y\q\4\q\g\h\b\u\s\f\f\4\m\s\6\9\9\w\0\e\w\0\v\z\2\u\u\h\w\8\0\m\l\1\1\u\5\s\1\t\x\q\d\f\5\x\q\8\n\4\4\f\9\f\9\6\v\r\b\y\6\t\g\e\k\f\r\e\2\1\a\e\8\n\z\u\q\a\d\z\u\p\g\b\s\9\a\g\v\r\6\8\r\k\7\w\m\t\s\o\o\g\9\1\x\g\f\x\x\m\a\h\q\9\2\x\q\9\i\0\6\m\x\e\3\m\d\x\5\o\a\8\7\k\r\c\g\2\t\c\2\m\4\r\4\x\n\k\q\6\m\d\b\q\m\a\w\8\s\a\u\k\t\5\6\6\e\d\3\j\5\9\a\s\x\7\9\y\h\p\e\n\1\p\4\g\4\n\c\k\j\e\f\g\7\q\8\x\b\4\b\6\s\j\g\7\9\4\3\n\t\d\x\f\7\q\i\2\p\f\6\u\g\e\h\f\n\z\x\c\2\8\4\j\z\p\d\8\6\v\t\r\b\k\3\y\8\0\g\m\l\w\y\n\k\9\2\w\0\n\m\p\9\w\b\s\d\5\u\6\2\u\5\x\i\2\d\r\9\p\u\a\c\w\h\6\k\f\b\p\7\3\n\n\3\5\6\n\i\t\a\o\h\j\r\y\t\i\x\0\o\s\d\3\x\5\r\w\z\u\r\3\i\j\x\8\m\5\4\4\n\2\h\f\f\w\t\3\e\x\6\1\x\v\u\i\q\l\6\y\7\d\f\3\c\d\o\z\p\f\v\9\6\y\q\t\w\8\r\h\w\i\g\a\q\8\p\f\y\y\4\0\z\z\g\w\v\k\0\s\w\j\4\6\9\m\g\i\t\m\v\z\f\5\w\g\u\e\k\w\a\m\f\6\o\d\5\9\d\r\p\6\x\2\3\0\p\8\w\b\y\b\u\s\o\7\7\m\2\i\d\q\9\2\t\9\i\9\y\0\z\z\a\w\x\2\0\5\l\r\z\1\b\9\z\r\k\w\y\c\4\l\h\r\1\q\z\4\7\x\s\2\x\u\l\7\c\a\7\a\e\r\v\u\8\z\c\r\c\k\y\6\9\f\q\e\u\c\h\f\b\g\7\a\8\u\t\r\l\b\6\p\s\l\6\r\1\6\c\k\m\j\n\g\p\f\j\r\y\p\6\d\n\y\o\6\w\j\8\d\v\7\u\4\a\k\i\y\7\l\h\e\r\9\4\t\t\a\t\5\6\4\i\4\u\a\j\f\u\3\i\j\4\o\g\b\9\q\6\b\4\a\u\k\e\z\c\s\v\m\n\f
\j\u\h\c\y\7\v\e\e\5\a\v\t\g\h\6\o\6\k\z\x\1\4\g\7\4\j\e\f\h\h\m\r\u\l\9\7\v\7\p\1\v\i\p\y\m\r\m\g\3\6\a\c\7\v\8\x\9\a\0\k\3\y\a\q\z\6\k\f\z\b\9\r\6\m\g\2\s\7\t\a\l\c\j\u\z\0\i\5\5\6\6\v\x\q\t\1\9\4\8\x\h\u\l\f\i\b\v\o\7\d\8\j\0\f\m\r\2\o\z\l\m\a\1\q\1\e\g\t\3\0\7\s\w\2\3\o\x\e\h\c\9\x\h\b\q\e\i\t\q\8\x\s\q\0\j\a\l\a\n\3\z\e\r\h\9\4\w\4\c\e\p\4\9\b\1\s\o\o\2\y\e\8\e\4\i\t\4\e\g\j\r\1\g\p\a\w\x\9\2\f\8\9\4\k\d\w\5\q\7\9\6\i\u\o\w\q\v\e\o\8\a\y\a\o\l\j\4\7\n\r\d\m\8\c\b\z\h\r\g\d\i\q\x\h\j\x\w\m\r\i\l\c\b\5\e\z\c\a\k\1\i\r\7\y\r\0\5\l\b\7\0\a\c\b\s\a\z\i\b\a\z\f\v\7\9\x\y\j\p\u\g\l\l\1\w\x\b\s\5\n\p\b\u\5\t\h\y\e\g\p\x\2\e\u\u\v\i\h\t\i\0\y\f\6\l\n\h\n\n\l\4\e\0\9\3\f\i\1\x\h\p\m\a\t\h\s\c\4\8\k\z\0\8\h\l\i\y\2\f\8\y\o\5\g\u\r\j\h\0\0\m\f\8\0\z\r\9\h\t\6\1\i\i\d\r\7\q\0\t\r\u\m\8\5\1\j\d\j\g\x\l\c\3\t\k\c\n\b\i\b\v\w\3\6\h\0\h\l\6\w\w\j\r\6\7\o\1\2\e\9\z\1\w\x\2\f\p\s\v\h\9\w\5\l\4\f\k\t\6\g\r\3\j\u\o\k\7\b\g\l\9\g\1\m\e\1\v\s\j\g\i\c\w\m\z\t\z\k\x\x\0\g\n\g\n\z\y\3\f\x\1\w\6\m\s\e\2\s\f\d\1\w\s\x\8\z\g\b\x\2\w\4\t\r\a\u\7\l\5\b\4\w\f\r\w\z\j\4\y\0\0\v\x\w\1\7\9\x\e\r\t\f\p\5\2\m\k\h\8\8\0\p\8\w\c\p\1\2\e\i\o\v\z\p\f\2\f\j\6\k\0\9\j\s\w\5\3\r\2\3\s\v\v\j\f\h\i\s\r\g\w\o\d\1\u\z\l\6\h\w\t\h\e\9\x\3\n\k\0\t\3\i\s\m\z\u\l\e\3\y\z\r\m\6\m\1\b\5\9\8\z\s\7\j\s\d\k\p\6\x\8\6\d\l\h\p\q\t\6\s\w\f\x\r\2\x\h\m\l\p\3\k\x\b\3\7\d\x\w\4\9\j\a\e\0\2\c\8\c\z\2\r\o\s\d\t\n\9\y\y\s\6\e\z\a\c\k\8\5\o\6\u\2\r\6\x\2\4\g\u\4\8\z\8\7\e\q\f\y\5\6\b\b\t\9\v\2\9\6\f\t\p\5\8\c\f\3\4\3\w\w\v\9\i\3\h\o\2\s\p\l\h\2\l\v\f\y\a\k\o\q\s\c\l\3\i\a\7\9\0\d\3\k\t\p\l\o\1\v\a\g\p\0\g\p\g\y\2\d\h\m\r\2\i\z\l\m\e\x\s\3\i\n\b\4\w\h\a\k\t\4\u\q\1\n\0\6\9\p\z\b\9\z\4\q\u\h\1\8\8\v\g\p\i\q\n\v\4\z\8\1\0\d\0\h\c\0\2\2\q\d\v\p\g\i\i\5\r\v\w\m\v\l\d\x\9\a\l\p\6\6\x\d\c\i\i\x\t\k\h\a\8\a\l\q\8\7\y\c\n\1\3\1\b\n\m\b\h\o\0\y\a\j\1\3\x\o\j\1\b\z\y\5\q\4\9\8\3\m\g\2\w\y\l\p\o\f\u\y\c\0\9\d\a\b\l\i\7\l\m\5\n\m\4\e\t\o\a\9\u\i\w\f\i\i\6\f\e\1\6\6\8\o\l\h\h\w\v\t\3\3\i\a\5\r\t\o\i\d\x\k\e\g\b\e\6\n\e\g\u\s\f\q\8\3\u\1\w\m\8\b\5\m\4\n\y\u\k\z\i\r\0\j\p\e\o\d\a\i\m\3\1\d\6\m\5\g\g\a\d\p\v\x\b\d\y\f\z\o\n\e\n\9\y\i\c\n\e\h\8\j\i\i\k\e\s\v\5\u\e\e\s\9\t\4\9\f\8\t\3\i\g\u\o\v\h\l\s\y\x\x\x\k\m\v\5\x\4\l\w\8\b\j\w\m\x\p\1\v\j\b\y\x\i\b\1\r\k\8\q\d\5\r\i\l\k\0\4\k\f\u\4\q\p\s\0\g\q\z\o\n\m\w\w\z\j\l\l\4\t\v\v\s\i\p\m\u\1\u\0\k\d\4\w\2\w\t\3\x\n\b\e\l\f\w\g\v\3\m\v\u\p\q\l\u\l\a\z\p\t\0\2\4\k\p\b\y\u\g\6\y\y\j\c\t\c\d\0\b\t\k\6\b\6\x\s\o\m ]] 00:40:24.702 ************************************ 00:40:24.702 END TEST dd_rw_offset 00:40:24.702 ************************************ 00:40:24.702 00:40:24.702 real 0m3.663s 00:40:24.702 user 0m2.945s 00:40:24.702 sys 0m0.589s 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 
-- # local count=1 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:40:24.702 11:51:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:24.702 [2024-07-13 11:51:59.226350] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:24.702 [2024-07-13 11:51:59.226551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167623 ] 00:40:24.702 { 00:40:24.702 "subsystems": [ 00:40:24.702 { 00:40:24.702 "subsystem": "bdev", 00:40:24.702 "config": [ 00:40:24.702 { 00:40:24.702 "params": { 00:40:24.702 "trtype": "pcie", 00:40:24.702 "traddr": "0000:00:10.0", 00:40:24.702 "name": "Nvme0" 00:40:24.702 }, 00:40:24.702 "method": "bdev_nvme_attach_controller" 00:40:24.702 }, 00:40:24.702 { 00:40:24.702 "method": "bdev_wait_for_examine" 00:40:24.702 } 00:40:24.702 ] 00:40:24.702 } 00:40:24.702 ] 00:40:24.702 } 00:40:24.702 [2024-07-13 11:51:59.395802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:24.962 [2024-07-13 11:51:59.599777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.156  Copying: 1024/1024 [kB] (average 1000 MBps) 00:40:26.156 00:40:26.415 11:52:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:26.415 00:40:26.415 real 0m42.752s 00:40:26.415 user 0m34.428s 00:40:26.415 sys 0m6.817s 00:40:26.415 11:52:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:26.415 11:52:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:40:26.415 ************************************ 00:40:26.415 END TEST spdk_dd_basic_rw 00:40:26.415 ************************************ 00:40:26.415 11:52:00 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:40:26.415 11:52:00 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:40:26.415 11:52:00 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:26.415 11:52:00 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:26.415 11:52:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:40:26.415 ************************************ 00:40:26.415 START TEST spdk_dd_posix 00:40:26.415 ************************************ 00:40:26.415 11:52:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:40:26.415 * Looking for test storage... 
00:40:26.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:40:26.415 * First test run, using AIO 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:26.415 ************************************ 00:40:26.415 START TEST dd_flag_append 00:40:26.415 ************************************ 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:40:26.415 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=8wu3wfmcszyh25rfsoj01tesmsaj02h3 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=cbreosmufsw3gw2kekdxujjt0u47jf5r 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 8wu3wfmcszyh25rfsoj01tesmsaj02h3 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s cbreosmufsw3gw2kekdxujjt0u47jf5r 00:40:26.416 11:52:01 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:40:26.416 [2024-07-13 11:52:01.138277] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:26.416 [2024-07-13 11:52:01.139236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167709 ] 00:40:26.675 [2024-07-13 11:52:01.314357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:26.933 [2024-07-13 11:52:01.499047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.127  Copying: 32/32 [B] (average 31 kBps) 00:40:28.127 00:40:28.127 ************************************ 00:40:28.127 END TEST dd_flag_append 00:40:28.127 ************************************ 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ cbreosmufsw3gw2kekdxujjt0u47jf5r8wu3wfmcszyh25rfsoj01tesmsaj02h3 == \c\b\r\e\o\s\m\u\f\s\w\3\g\w\2\k\e\k\d\x\u\j\j\t\0\u\4\7\j\f\5\r\8\w\u\3\w\f\m\c\s\z\y\h\2\5\r\f\s\o\j\0\1\t\e\s\m\s\a\j\0\2\h\3 ]] 00:40:28.127 00:40:28.127 real 0m1.757s 00:40:28.127 user 0m1.357s 00:40:28.127 sys 0m0.258s 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:28.127 ************************************ 00:40:28.127 START TEST dd_flag_directory 00:40:28.127 ************************************ 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:28.127 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:28.385 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:28.385 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:28.385 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
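The dd_flag_append run above reduces to: write two 32-byte random strings to two files, copy the first onto the second with the append output flag, and expect the second file to end up as the concatenation of the two strings. A minimal sketch of that check, assuming plain coreutils dd as a stand-in for spdk_dd and placeholder file names:

    #!/usr/bin/env bash
    # Hypothetical re-creation of the append check: coreutils dd stands in for
    # spdk_dd, and dd.dump0 / dd.dump1 are placeholder paths.
    set -e

    dump0=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 32)   # roughly gen_bytes 32
    dump1=$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 32)
    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1

    # oflag=append opens the output O_APPEND; conv=notrunc stops dd from
    # truncating dd.dump1 first, so dump0's bytes land after dump1's.
    dd if=dd.dump0 of=dd.dump1 oflag=append conv=notrunc status=none

    if [[ $(cat dd.dump1) == "${dump1}${dump0}" ]]; then
        echo "append flag OK"
    else
        echo "append flag FAILED"
    fi
    rm -f dd.dump0 dd.dump1
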
00:40:28.385 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:28.385 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:28.385 11:52:02 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:28.385 [2024-07-13 11:52:02.947814] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:28.385 [2024-07-13 11:52:02.948052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167757 ] 00:40:28.385 [2024-07-13 11:52:03.116983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:28.643 [2024-07-13 11:52:03.297088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:28.902 [2024-07-13 11:52:03.578251] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:28.902 [2024-07-13 11:52:03.578686] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:28.902 [2024-07-13 11:52:03.578826] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:29.469 [2024-07-13 11:52:04.207408] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:30.038 11:52:04 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:30.038 [2024-07-13 11:52:04.634897] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:30.038 [2024-07-13 11:52:04.635118] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167777 ] 00:40:30.297 [2024-07-13 11:52:04.799587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:30.297 [2024-07-13 11:52:04.990317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:30.556 [2024-07-13 11:52:05.272410] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:30.556 [2024-07-13 11:52:05.272797] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:30.556 [2024-07-13 11:52:05.272865] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:31.493 [2024-07-13 11:52:05.903134] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:31.752 ************************************ 00:40:31.752 END TEST dd_flag_directory 00:40:31.752 ************************************ 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:31.752 00:40:31.752 real 0m3.392s 00:40:31.752 user 0m2.631s 00:40:31.752 sys 0m0.557s 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:31.752 ************************************ 00:40:31.752 START TEST dd_flag_nofollow 00:40:31.752 ************************************ 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:31.752 11:52:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:31.752 [2024-07-13 11:52:06.404155] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:31.752 [2024-07-13 11:52:06.405089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167822 ] 00:40:32.011 [2024-07-13 11:52:06.575918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:32.270 [2024-07-13 11:52:06.772710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:32.529 [2024-07-13 11:52:07.053706] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:32.529 [2024-07-13 11:52:07.054119] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:40:32.529 [2024-07-13 11:52:07.054189] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:33.096 [2024-07-13 11:52:07.686190] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:40:33.355 11:52:08 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:40:33.615 [2024-07-13 11:52:08.125333] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:33.615 [2024-07-13 11:52:08.125755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167867 ] 00:40:33.615 [2024-07-13 11:52:08.296525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:33.874 [2024-07-13 11:52:08.489616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:34.133 [2024-07-13 11:52:08.771801] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:34.133 [2024-07-13 11:52:08.772195] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:40:34.133 [2024-07-13 11:52:08.772320] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:34.700 [2024-07-13 11:52:09.399660] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:40:35.266 11:52:09 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:35.266 [2024-07-13 11:52:09.842223] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
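The dd_flag_nofollow checks above hinge on O_NOFOLLOW: opening the dd.dump0.link symlink with --iflag=nofollow (or --oflag=nofollow) fails with "Too many levels of symbolic links", while the same copy without the flag follows the link and succeeds. A rough equivalent, again assuming coreutils dd in place of spdk_dd and placeholder file names:

    #!/usr/bin/env bash
    # Rough equivalent of the nofollow checks, with coreutils dd standing in
    # for spdk_dd.
    set -e

    printf 'payload' > dd.dump0
    ln -fs dd.dump0 dd.dump0.link

    # O_NOFOLLOW on a symlink fails with ELOOP ("Too many levels of symbolic links").
    if ! dd if=dd.dump0.link of=/dev/null iflag=nofollow status=none; then
        echo "nofollow refused to read through the symlink, as expected"
    fi

    # Without the flag the link is followed and the copy succeeds.
    dd if=dd.dump0.link of=dd.dump1 status=none
    if cmp -s dd.dump0 dd.dump1; then
        echo "plain copy through the link OK"
    fi

    rm -f dd.dump0 dd.dump0.link dd.dump1
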
00:40:35.266 [2024-07-13 11:52:09.843306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167889 ] 00:40:35.266 [2024-07-13 11:52:10.008295] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:35.525 [2024-07-13 11:52:10.197218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.160  Copying: 512/512 [B] (average 500 kBps) 00:40:37.160 00:40:37.160 ************************************ 00:40:37.160 END TEST dd_flag_nofollow 00:40:37.160 ************************************ 00:40:37.160 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ a8b56voxa790nf944qe8rdp46t283ripec1jnhe5cw84lmt3r290ln8oaf2dgv06jf9c5byrh4mo9b02516lb2cargw53wnqq5kf6qxjs6etppsk5gzdwlil5dl04jy9gbd8ke61p50j65lgih6v17nsgu0w1lcmu9fauna3br5l3u8fgbipjp6i8w28vqf93meej2pu7uvdxouykb5r6ez8gu84xzthxbtw3nmg58jnw3tjlgmx74ca1fs3rfhvtmsaiviq6g3xs2n13gjhc3et5keoi9q88lh1tlsohw3roypt3hj6xp2q4tm5pfvg43k2bf1rqdubzuxs35m3g32midwi1zh13lrk5qpuycc5n8kdni433ilbk2doewt0q5af4a6xf5jr6mvon400n6oxbar4ssftgo65yvlhdl7hn6cj0vi0v8i96mey9leb4pvtdishnvnd0uwpfnecq9ikd0tnhtfbng7vd7mgy6n6n30g5zo8wavs5wa0gayl == \a\8\b\5\6\v\o\x\a\7\9\0\n\f\9\4\4\q\e\8\r\d\p\4\6\t\2\8\3\r\i\p\e\c\1\j\n\h\e\5\c\w\8\4\l\m\t\3\r\2\9\0\l\n\8\o\a\f\2\d\g\v\0\6\j\f\9\c\5\b\y\r\h\4\m\o\9\b\0\2\5\1\6\l\b\2\c\a\r\g\w\5\3\w\n\q\q\5\k\f\6\q\x\j\s\6\e\t\p\p\s\k\5\g\z\d\w\l\i\l\5\d\l\0\4\j\y\9\g\b\d\8\k\e\6\1\p\5\0\j\6\5\l\g\i\h\6\v\1\7\n\s\g\u\0\w\1\l\c\m\u\9\f\a\u\n\a\3\b\r\5\l\3\u\8\f\g\b\i\p\j\p\6\i\8\w\2\8\v\q\f\9\3\m\e\e\j\2\p\u\7\u\v\d\x\o\u\y\k\b\5\r\6\e\z\8\g\u\8\4\x\z\t\h\x\b\t\w\3\n\m\g\5\8\j\n\w\3\t\j\l\g\m\x\7\4\c\a\1\f\s\3\r\f\h\v\t\m\s\a\i\v\i\q\6\g\3\x\s\2\n\1\3\g\j\h\c\3\e\t\5\k\e\o\i\9\q\8\8\l\h\1\t\l\s\o\h\w\3\r\o\y\p\t\3\h\j\6\x\p\2\q\4\t\m\5\p\f\v\g\4\3\k\2\b\f\1\r\q\d\u\b\z\u\x\s\3\5\m\3\g\3\2\m\i\d\w\i\1\z\h\1\3\l\r\k\5\q\p\u\y\c\c\5\n\8\k\d\n\i\4\3\3\i\l\b\k\2\d\o\e\w\t\0\q\5\a\f\4\a\6\x\f\5\j\r\6\m\v\o\n\4\0\0\n\6\o\x\b\a\r\4\s\s\f\t\g\o\6\5\y\v\l\h\d\l\7\h\n\6\c\j\0\v\i\0\v\8\i\9\6\m\e\y\9\l\e\b\4\p\v\t\d\i\s\h\n\v\n\d\0\u\w\p\f\n\e\c\q\9\i\k\d\0\t\n\h\t\f\b\n\g\7\v\d\7\m\g\y\6\n\6\n\3\0\g\5\z\o\8\w\a\v\s\5\w\a\0\g\a\y\l ]] 00:40:37.160 00:40:37.160 real 0m5.190s 00:40:37.160 user 0m3.986s 00:40:37.160 sys 0m0.859s 00:40:37.160 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:37.161 ************************************ 00:40:37.161 START TEST dd_flag_noatime 00:40:37.161 ************************************ 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime 
-- dd/posix.sh@54 -- # local atime_of 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720871530 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720871531 00:40:37.161 11:52:11 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:40:38.096 11:52:12 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:38.096 [2024-07-13 11:52:12.667069] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:38.096 [2024-07-13 11:52:12.668193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167946 ] 00:40:38.096 [2024-07-13 11:52:12.837169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:38.354 [2024-07-13 11:52:13.032851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:40.025  Copying: 512/512 [B] (average 500 kBps) 00:40:40.026 00:40:40.026 11:52:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:40.026 11:52:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720871530 )) 00:40:40.026 11:52:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:40.026 11:52:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720871531 )) 00:40:40.026 11:52:14 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:40:40.026 [2024-07-13 11:52:14.421775] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:40.026 [2024-07-13 11:52:14.422792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167972 ] 00:40:40.026 [2024-07-13 11:52:14.592426] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:40.026 [2024-07-13 11:52:14.772668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.525  Copying: 512/512 [B] (average 500 kBps) 00:40:41.525 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:41.525 ************************************ 00:40:41.525 END TEST dd_flag_noatime 00:40:41.525 ************************************ 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720871535 )) 00:40:41.525 00:40:41.525 real 0m4.523s 00:40:41.525 user 0m2.689s 00:40:41.525 sys 0m0.569s 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:41.525 ************************************ 00:40:41.525 START TEST dd_flags_misc 00:40:41.525 ************************************ 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:41.525 11:52:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:41.525 [2024-07-13 11:52:16.222125] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
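The dd_flag_noatime check above records the source file's access time with stat --printf=%X, reads it back through --iflag=noatime, and expects the timestamp not to move. A sketch of that half of the test, under the same coreutils-dd assumption (O_NOATIME also requires owning the file or CAP_FOWNER, and whether atime moves at all without the flag depends on the mount's relatime/noatime policy):

    #!/usr/bin/env bash
    # Sketch of the noatime check: coreutils dd stands in for spdk_dd; only the
    # "atime stays put under iflag=noatime" half of the test is shown.
    set -e

    printf 'payload' > dd.dump0
    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1

    dd if=dd.dump0 of=/dev/null iflag=noatime status=none

    atime_after=$(stat --printf=%X dd.dump0)
    if (( atime_before == atime_after )); then
        echo "noatime left the access time untouched"
    fi
    rm -f dd.dump0
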
00:40:41.525 [2024-07-13 11:52:16.222553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168021 ] 00:40:41.783 [2024-07-13 11:52:16.392022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:42.042 [2024-07-13 11:52:16.571579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:43.236  Copying: 512/512 [B] (average 250 kBps) 00:40:43.236 00:40:43.236 11:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4a3ux7n2vhx99wy40o1neyxd0zdban3dzkcjmykju6749oztwl8du7tiadoc19uxh3luggyh6r8zyv0ympbpat9qmkdf2mq1bxa8n13a7hwstjivwh3vkd5ok6lyjc5sckda0996kpyz3zssqeqcf0hfh1jbg04kuf6rvvi3h4qwh6ibh4ptij8rpppq6fzd0655yz6g7e0glyqiak1axsxsbutaydfj8ikbrzw4qtuuzfy8zpjmrii83bg332uev9us20stt9ym753i4sebtjnsolvadfl75qcjwovgrvitdjeb9a6o1182j4xnnobr1cpuisc1ph1j6rfft36tjwl4ifsz43zekyot4acg40d62ktgma8w31b12jq7k4gi3yat9jnf9sfdj8x61g14tzw3nhhz46eyqcr90ae3uockmad161822osd3amcjfn6hyahq69qgnriatxp5gd7nl27yvofftv6wtrq9coreflem1nm7a5f8yj59252ar1x == \4\a\3\u\x\7\n\2\v\h\x\9\9\w\y\4\0\o\1\n\e\y\x\d\0\z\d\b\a\n\3\d\z\k\c\j\m\y\k\j\u\6\7\4\9\o\z\t\w\l\8\d\u\7\t\i\a\d\o\c\1\9\u\x\h\3\l\u\g\g\y\h\6\r\8\z\y\v\0\y\m\p\b\p\a\t\9\q\m\k\d\f\2\m\q\1\b\x\a\8\n\1\3\a\7\h\w\s\t\j\i\v\w\h\3\v\k\d\5\o\k\6\l\y\j\c\5\s\c\k\d\a\0\9\9\6\k\p\y\z\3\z\s\s\q\e\q\c\f\0\h\f\h\1\j\b\g\0\4\k\u\f\6\r\v\v\i\3\h\4\q\w\h\6\i\b\h\4\p\t\i\j\8\r\p\p\p\q\6\f\z\d\0\6\5\5\y\z\6\g\7\e\0\g\l\y\q\i\a\k\1\a\x\s\x\s\b\u\t\a\y\d\f\j\8\i\k\b\r\z\w\4\q\t\u\u\z\f\y\8\z\p\j\m\r\i\i\8\3\b\g\3\3\2\u\e\v\9\u\s\2\0\s\t\t\9\y\m\7\5\3\i\4\s\e\b\t\j\n\s\o\l\v\a\d\f\l\7\5\q\c\j\w\o\v\g\r\v\i\t\d\j\e\b\9\a\6\o\1\1\8\2\j\4\x\n\n\o\b\r\1\c\p\u\i\s\c\1\p\h\1\j\6\r\f\f\t\3\6\t\j\w\l\4\i\f\s\z\4\3\z\e\k\y\o\t\4\a\c\g\4\0\d\6\2\k\t\g\m\a\8\w\3\1\b\1\2\j\q\7\k\4\g\i\3\y\a\t\9\j\n\f\9\s\f\d\j\8\x\6\1\g\1\4\t\z\w\3\n\h\h\z\4\6\e\y\q\c\r\9\0\a\e\3\u\o\c\k\m\a\d\1\6\1\8\2\2\o\s\d\3\a\m\c\j\f\n\6\h\y\a\h\q\6\9\q\g\n\r\i\a\t\x\p\5\g\d\7\n\l\2\7\y\v\o\f\f\t\v\6\w\t\r\q\9\c\o\r\e\f\l\e\m\1\n\m\7\a\5\f\8\y\j\5\9\2\5\2\a\r\1\x ]] 00:40:43.236 11:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:43.236 11:52:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:43.236 [2024-07-13 11:52:17.972162] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:43.236 [2024-07-13 11:52:17.972534] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168064 ] 00:40:43.494 [2024-07-13 11:52:18.140448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:43.752 [2024-07-13 11:52:18.329752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:44.974  Copying: 512/512 [B] (average 500 kBps) 00:40:44.974 00:40:44.974 11:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4a3ux7n2vhx99wy40o1neyxd0zdban3dzkcjmykju6749oztwl8du7tiadoc19uxh3luggyh6r8zyv0ympbpat9qmkdf2mq1bxa8n13a7hwstjivwh3vkd5ok6lyjc5sckda0996kpyz3zssqeqcf0hfh1jbg04kuf6rvvi3h4qwh6ibh4ptij8rpppq6fzd0655yz6g7e0glyqiak1axsxsbutaydfj8ikbrzw4qtuuzfy8zpjmrii83bg332uev9us20stt9ym753i4sebtjnsolvadfl75qcjwovgrvitdjeb9a6o1182j4xnnobr1cpuisc1ph1j6rfft36tjwl4ifsz43zekyot4acg40d62ktgma8w31b12jq7k4gi3yat9jnf9sfdj8x61g14tzw3nhhz46eyqcr90ae3uockmad161822osd3amcjfn6hyahq69qgnriatxp5gd7nl27yvofftv6wtrq9coreflem1nm7a5f8yj59252ar1x == \4\a\3\u\x\7\n\2\v\h\x\9\9\w\y\4\0\o\1\n\e\y\x\d\0\z\d\b\a\n\3\d\z\k\c\j\m\y\k\j\u\6\7\4\9\o\z\t\w\l\8\d\u\7\t\i\a\d\o\c\1\9\u\x\h\3\l\u\g\g\y\h\6\r\8\z\y\v\0\y\m\p\b\p\a\t\9\q\m\k\d\f\2\m\q\1\b\x\a\8\n\1\3\a\7\h\w\s\t\j\i\v\w\h\3\v\k\d\5\o\k\6\l\y\j\c\5\s\c\k\d\a\0\9\9\6\k\p\y\z\3\z\s\s\q\e\q\c\f\0\h\f\h\1\j\b\g\0\4\k\u\f\6\r\v\v\i\3\h\4\q\w\h\6\i\b\h\4\p\t\i\j\8\r\p\p\p\q\6\f\z\d\0\6\5\5\y\z\6\g\7\e\0\g\l\y\q\i\a\k\1\a\x\s\x\s\b\u\t\a\y\d\f\j\8\i\k\b\r\z\w\4\q\t\u\u\z\f\y\8\z\p\j\m\r\i\i\8\3\b\g\3\3\2\u\e\v\9\u\s\2\0\s\t\t\9\y\m\7\5\3\i\4\s\e\b\t\j\n\s\o\l\v\a\d\f\l\7\5\q\c\j\w\o\v\g\r\v\i\t\d\j\e\b\9\a\6\o\1\1\8\2\j\4\x\n\n\o\b\r\1\c\p\u\i\s\c\1\p\h\1\j\6\r\f\f\t\3\6\t\j\w\l\4\i\f\s\z\4\3\z\e\k\y\o\t\4\a\c\g\4\0\d\6\2\k\t\g\m\a\8\w\3\1\b\1\2\j\q\7\k\4\g\i\3\y\a\t\9\j\n\f\9\s\f\d\j\8\x\6\1\g\1\4\t\z\w\3\n\h\h\z\4\6\e\y\q\c\r\9\0\a\e\3\u\o\c\k\m\a\d\1\6\1\8\2\2\o\s\d\3\a\m\c\j\f\n\6\h\y\a\h\q\6\9\q\g\n\r\i\a\t\x\p\5\g\d\7\n\l\2\7\y\v\o\f\f\t\v\6\w\t\r\q\9\c\o\r\e\f\l\e\m\1\n\m\7\a\5\f\8\y\j\5\9\2\5\2\a\r\1\x ]] 00:40:44.974 11:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:44.974 11:52:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:44.974 [2024-07-13 11:52:19.711132] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:44.974 [2024-07-13 11:52:19.711608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168082 ] 00:40:45.234 [2024-07-13 11:52:19.881526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:45.493 [2024-07-13 11:52:20.075118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:46.684  Copying: 512/512 [B] (average 250 kBps) 00:40:46.684 00:40:46.685 11:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4a3ux7n2vhx99wy40o1neyxd0zdban3dzkcjmykju6749oztwl8du7tiadoc19uxh3luggyh6r8zyv0ympbpat9qmkdf2mq1bxa8n13a7hwstjivwh3vkd5ok6lyjc5sckda0996kpyz3zssqeqcf0hfh1jbg04kuf6rvvi3h4qwh6ibh4ptij8rpppq6fzd0655yz6g7e0glyqiak1axsxsbutaydfj8ikbrzw4qtuuzfy8zpjmrii83bg332uev9us20stt9ym753i4sebtjnsolvadfl75qcjwovgrvitdjeb9a6o1182j4xnnobr1cpuisc1ph1j6rfft36tjwl4ifsz43zekyot4acg40d62ktgma8w31b12jq7k4gi3yat9jnf9sfdj8x61g14tzw3nhhz46eyqcr90ae3uockmad161822osd3amcjfn6hyahq69qgnriatxp5gd7nl27yvofftv6wtrq9coreflem1nm7a5f8yj59252ar1x == \4\a\3\u\x\7\n\2\v\h\x\9\9\w\y\4\0\o\1\n\e\y\x\d\0\z\d\b\a\n\3\d\z\k\c\j\m\y\k\j\u\6\7\4\9\o\z\t\w\l\8\d\u\7\t\i\a\d\o\c\1\9\u\x\h\3\l\u\g\g\y\h\6\r\8\z\y\v\0\y\m\p\b\p\a\t\9\q\m\k\d\f\2\m\q\1\b\x\a\8\n\1\3\a\7\h\w\s\t\j\i\v\w\h\3\v\k\d\5\o\k\6\l\y\j\c\5\s\c\k\d\a\0\9\9\6\k\p\y\z\3\z\s\s\q\e\q\c\f\0\h\f\h\1\j\b\g\0\4\k\u\f\6\r\v\v\i\3\h\4\q\w\h\6\i\b\h\4\p\t\i\j\8\r\p\p\p\q\6\f\z\d\0\6\5\5\y\z\6\g\7\e\0\g\l\y\q\i\a\k\1\a\x\s\x\s\b\u\t\a\y\d\f\j\8\i\k\b\r\z\w\4\q\t\u\u\z\f\y\8\z\p\j\m\r\i\i\8\3\b\g\3\3\2\u\e\v\9\u\s\2\0\s\t\t\9\y\m\7\5\3\i\4\s\e\b\t\j\n\s\o\l\v\a\d\f\l\7\5\q\c\j\w\o\v\g\r\v\i\t\d\j\e\b\9\a\6\o\1\1\8\2\j\4\x\n\n\o\b\r\1\c\p\u\i\s\c\1\p\h\1\j\6\r\f\f\t\3\6\t\j\w\l\4\i\f\s\z\4\3\z\e\k\y\o\t\4\a\c\g\4\0\d\6\2\k\t\g\m\a\8\w\3\1\b\1\2\j\q\7\k\4\g\i\3\y\a\t\9\j\n\f\9\s\f\d\j\8\x\6\1\g\1\4\t\z\w\3\n\h\h\z\4\6\e\y\q\c\r\9\0\a\e\3\u\o\c\k\m\a\d\1\6\1\8\2\2\o\s\d\3\a\m\c\j\f\n\6\h\y\a\h\q\6\9\q\g\n\r\i\a\t\x\p\5\g\d\7\n\l\2\7\y\v\o\f\f\t\v\6\w\t\r\q\9\c\o\r\e\f\l\e\m\1\n\m\7\a\5\f\8\y\j\5\9\2\5\2\a\r\1\x ]] 00:40:46.685 11:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:46.685 11:52:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:46.944 [2024-07-13 11:52:21.462625] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:46.944 [2024-07-13 11:52:21.463111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168106 ] 00:40:46.944 [2024-07-13 11:52:21.635895] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:47.204 [2024-07-13 11:52:21.829414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.401  Copying: 512/512 [B] (average 166 kBps) 00:40:48.401 00:40:48.401 11:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4a3ux7n2vhx99wy40o1neyxd0zdban3dzkcjmykju6749oztwl8du7tiadoc19uxh3luggyh6r8zyv0ympbpat9qmkdf2mq1bxa8n13a7hwstjivwh3vkd5ok6lyjc5sckda0996kpyz3zssqeqcf0hfh1jbg04kuf6rvvi3h4qwh6ibh4ptij8rpppq6fzd0655yz6g7e0glyqiak1axsxsbutaydfj8ikbrzw4qtuuzfy8zpjmrii83bg332uev9us20stt9ym753i4sebtjnsolvadfl75qcjwovgrvitdjeb9a6o1182j4xnnobr1cpuisc1ph1j6rfft36tjwl4ifsz43zekyot4acg40d62ktgma8w31b12jq7k4gi3yat9jnf9sfdj8x61g14tzw3nhhz46eyqcr90ae3uockmad161822osd3amcjfn6hyahq69qgnriatxp5gd7nl27yvofftv6wtrq9coreflem1nm7a5f8yj59252ar1x == \4\a\3\u\x\7\n\2\v\h\x\9\9\w\y\4\0\o\1\n\e\y\x\d\0\z\d\b\a\n\3\d\z\k\c\j\m\y\k\j\u\6\7\4\9\o\z\t\w\l\8\d\u\7\t\i\a\d\o\c\1\9\u\x\h\3\l\u\g\g\y\h\6\r\8\z\y\v\0\y\m\p\b\p\a\t\9\q\m\k\d\f\2\m\q\1\b\x\a\8\n\1\3\a\7\h\w\s\t\j\i\v\w\h\3\v\k\d\5\o\k\6\l\y\j\c\5\s\c\k\d\a\0\9\9\6\k\p\y\z\3\z\s\s\q\e\q\c\f\0\h\f\h\1\j\b\g\0\4\k\u\f\6\r\v\v\i\3\h\4\q\w\h\6\i\b\h\4\p\t\i\j\8\r\p\p\p\q\6\f\z\d\0\6\5\5\y\z\6\g\7\e\0\g\l\y\q\i\a\k\1\a\x\s\x\s\b\u\t\a\y\d\f\j\8\i\k\b\r\z\w\4\q\t\u\u\z\f\y\8\z\p\j\m\r\i\i\8\3\b\g\3\3\2\u\e\v\9\u\s\2\0\s\t\t\9\y\m\7\5\3\i\4\s\e\b\t\j\n\s\o\l\v\a\d\f\l\7\5\q\c\j\w\o\v\g\r\v\i\t\d\j\e\b\9\a\6\o\1\1\8\2\j\4\x\n\n\o\b\r\1\c\p\u\i\s\c\1\p\h\1\j\6\r\f\f\t\3\6\t\j\w\l\4\i\f\s\z\4\3\z\e\k\y\o\t\4\a\c\g\4\0\d\6\2\k\t\g\m\a\8\w\3\1\b\1\2\j\q\7\k\4\g\i\3\y\a\t\9\j\n\f\9\s\f\d\j\8\x\6\1\g\1\4\t\z\w\3\n\h\h\z\4\6\e\y\q\c\r\9\0\a\e\3\u\o\c\k\m\a\d\1\6\1\8\2\2\o\s\d\3\a\m\c\j\f\n\6\h\y\a\h\q\6\9\q\g\n\r\i\a\t\x\p\5\g\d\7\n\l\2\7\y\v\o\f\f\t\v\6\w\t\r\q\9\c\o\r\e\f\l\e\m\1\n\m\7\a\5\f\8\y\j\5\9\2\5\2\a\r\1\x ]] 00:40:48.401 11:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:40:48.401 11:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:40:48.401 11:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:40:48.401 11:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:40:48.666 11:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:48.666 11:52:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:40:48.666 [2024-07-13 11:52:23.220609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:48.666 [2024-07-13 11:52:23.221635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168130 ] 00:40:48.666 [2024-07-13 11:52:23.389363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.937 [2024-07-13 11:52:23.592866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.138  Copying: 512/512 [B] (average 500 kBps) 00:40:50.138 00:40:50.398 11:52:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ z4cbrz9tus77h5i8h6q4489ipql9t1e29etfixyw5fy1ld8zqzc1736udon4i5i91d4yz70oq9bjlq4icyizpm6hs8ue818nw70yi9uaifegpph4hd1j9yu6f7etzowom0mrke1g1nyjq2nfra2faa97dx5ohutbqjgkrlcdw0ed1uwcmkxbovpu0e2bn03sajecf4oq80db2nukug5c9609g3yrv85jok5lx6rwv6uksrjwnt35jhoqbglpuk1u3d89ksg3pxmlcuizqzc8c0l88ljl3ehkk5ekw03tco1ot012hcz1txwxgp0h5q7lnx241qap85v9ykchktrl6dhi3gmy1g9p37vkgong8suxhbsisn8amd4hlpiiytbuwb0ktt61mzg2v3wm9uss32c9i5qtkv52902psjk30ttu4blhs4t5zm6tq6hmy2vj6tgdkmpm7pv731a7uk1g4n7z90bk780onb6yv029z4a5j3n12ixc9etk764mnxz8 == \z\4\c\b\r\z\9\t\u\s\7\7\h\5\i\8\h\6\q\4\4\8\9\i\p\q\l\9\t\1\e\2\9\e\t\f\i\x\y\w\5\f\y\1\l\d\8\z\q\z\c\1\7\3\6\u\d\o\n\4\i\5\i\9\1\d\4\y\z\7\0\o\q\9\b\j\l\q\4\i\c\y\i\z\p\m\6\h\s\8\u\e\8\1\8\n\w\7\0\y\i\9\u\a\i\f\e\g\p\p\h\4\h\d\1\j\9\y\u\6\f\7\e\t\z\o\w\o\m\0\m\r\k\e\1\g\1\n\y\j\q\2\n\f\r\a\2\f\a\a\9\7\d\x\5\o\h\u\t\b\q\j\g\k\r\l\c\d\w\0\e\d\1\u\w\c\m\k\x\b\o\v\p\u\0\e\2\b\n\0\3\s\a\j\e\c\f\4\o\q\8\0\d\b\2\n\u\k\u\g\5\c\9\6\0\9\g\3\y\r\v\8\5\j\o\k\5\l\x\6\r\w\v\6\u\k\s\r\j\w\n\t\3\5\j\h\o\q\b\g\l\p\u\k\1\u\3\d\8\9\k\s\g\3\p\x\m\l\c\u\i\z\q\z\c\8\c\0\l\8\8\l\j\l\3\e\h\k\k\5\e\k\w\0\3\t\c\o\1\o\t\0\1\2\h\c\z\1\t\x\w\x\g\p\0\h\5\q\7\l\n\x\2\4\1\q\a\p\8\5\v\9\y\k\c\h\k\t\r\l\6\d\h\i\3\g\m\y\1\g\9\p\3\7\v\k\g\o\n\g\8\s\u\x\h\b\s\i\s\n\8\a\m\d\4\h\l\p\i\i\y\t\b\u\w\b\0\k\t\t\6\1\m\z\g\2\v\3\w\m\9\u\s\s\3\2\c\9\i\5\q\t\k\v\5\2\9\0\2\p\s\j\k\3\0\t\t\u\4\b\l\h\s\4\t\5\z\m\6\t\q\6\h\m\y\2\v\j\6\t\g\d\k\m\p\m\7\p\v\7\3\1\a\7\u\k\1\g\4\n\7\z\9\0\b\k\7\8\0\o\n\b\6\y\v\0\2\9\z\4\a\5\j\3\n\1\2\i\x\c\9\e\t\k\7\6\4\m\n\x\z\8 ]] 00:40:50.398 11:52:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:50.398 11:52:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:40:50.398 [2024-07-13 11:52:24.945201] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:50.398 [2024-07-13 11:52:24.945612] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168147 ] 00:40:50.398 [2024-07-13 11:52:25.100647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:50.656 [2024-07-13 11:52:25.279529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:51.849  Copying: 512/512 [B] (average 500 kBps) 00:40:51.849 00:40:52.106 11:52:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ z4cbrz9tus77h5i8h6q4489ipql9t1e29etfixyw5fy1ld8zqzc1736udon4i5i91d4yz70oq9bjlq4icyizpm6hs8ue818nw70yi9uaifegpph4hd1j9yu6f7etzowom0mrke1g1nyjq2nfra2faa97dx5ohutbqjgkrlcdw0ed1uwcmkxbovpu0e2bn03sajecf4oq80db2nukug5c9609g3yrv85jok5lx6rwv6uksrjwnt35jhoqbglpuk1u3d89ksg3pxmlcuizqzc8c0l88ljl3ehkk5ekw03tco1ot012hcz1txwxgp0h5q7lnx241qap85v9ykchktrl6dhi3gmy1g9p37vkgong8suxhbsisn8amd4hlpiiytbuwb0ktt61mzg2v3wm9uss32c9i5qtkv52902psjk30ttu4blhs4t5zm6tq6hmy2vj6tgdkmpm7pv731a7uk1g4n7z90bk780onb6yv029z4a5j3n12ixc9etk764mnxz8 == \z\4\c\b\r\z\9\t\u\s\7\7\h\5\i\8\h\6\q\4\4\8\9\i\p\q\l\9\t\1\e\2\9\e\t\f\i\x\y\w\5\f\y\1\l\d\8\z\q\z\c\1\7\3\6\u\d\o\n\4\i\5\i\9\1\d\4\y\z\7\0\o\q\9\b\j\l\q\4\i\c\y\i\z\p\m\6\h\s\8\u\e\8\1\8\n\w\7\0\y\i\9\u\a\i\f\e\g\p\p\h\4\h\d\1\j\9\y\u\6\f\7\e\t\z\o\w\o\m\0\m\r\k\e\1\g\1\n\y\j\q\2\n\f\r\a\2\f\a\a\9\7\d\x\5\o\h\u\t\b\q\j\g\k\r\l\c\d\w\0\e\d\1\u\w\c\m\k\x\b\o\v\p\u\0\e\2\b\n\0\3\s\a\j\e\c\f\4\o\q\8\0\d\b\2\n\u\k\u\g\5\c\9\6\0\9\g\3\y\r\v\8\5\j\o\k\5\l\x\6\r\w\v\6\u\k\s\r\j\w\n\t\3\5\j\h\o\q\b\g\l\p\u\k\1\u\3\d\8\9\k\s\g\3\p\x\m\l\c\u\i\z\q\z\c\8\c\0\l\8\8\l\j\l\3\e\h\k\k\5\e\k\w\0\3\t\c\o\1\o\t\0\1\2\h\c\z\1\t\x\w\x\g\p\0\h\5\q\7\l\n\x\2\4\1\q\a\p\8\5\v\9\y\k\c\h\k\t\r\l\6\d\h\i\3\g\m\y\1\g\9\p\3\7\v\k\g\o\n\g\8\s\u\x\h\b\s\i\s\n\8\a\m\d\4\h\l\p\i\i\y\t\b\u\w\b\0\k\t\t\6\1\m\z\g\2\v\3\w\m\9\u\s\s\3\2\c\9\i\5\q\t\k\v\5\2\9\0\2\p\s\j\k\3\0\t\t\u\4\b\l\h\s\4\t\5\z\m\6\t\q\6\h\m\y\2\v\j\6\t\g\d\k\m\p\m\7\p\v\7\3\1\a\7\u\k\1\g\4\n\7\z\9\0\b\k\7\8\0\o\n\b\6\y\v\0\2\9\z\4\a\5\j\3\n\1\2\i\x\c\9\e\t\k\7\6\4\m\n\x\z\8 ]] 00:40:52.106 11:52:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:52.106 11:52:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:40:52.106 [2024-07-13 11:52:26.680683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:52.106 [2024-07-13 11:52:26.681119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168175 ] 00:40:52.106 [2024-07-13 11:52:26.851550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:52.364 [2024-07-13 11:52:27.036347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:53.998  Copying: 512/512 [B] (average 166 kBps) 00:40:53.998 00:40:53.998 11:52:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ z4cbrz9tus77h5i8h6q4489ipql9t1e29etfixyw5fy1ld8zqzc1736udon4i5i91d4yz70oq9bjlq4icyizpm6hs8ue818nw70yi9uaifegpph4hd1j9yu6f7etzowom0mrke1g1nyjq2nfra2faa97dx5ohutbqjgkrlcdw0ed1uwcmkxbovpu0e2bn03sajecf4oq80db2nukug5c9609g3yrv85jok5lx6rwv6uksrjwnt35jhoqbglpuk1u3d89ksg3pxmlcuizqzc8c0l88ljl3ehkk5ekw03tco1ot012hcz1txwxgp0h5q7lnx241qap85v9ykchktrl6dhi3gmy1g9p37vkgong8suxhbsisn8amd4hlpiiytbuwb0ktt61mzg2v3wm9uss32c9i5qtkv52902psjk30ttu4blhs4t5zm6tq6hmy2vj6tgdkmpm7pv731a7uk1g4n7z90bk780onb6yv029z4a5j3n12ixc9etk764mnxz8 == \z\4\c\b\r\z\9\t\u\s\7\7\h\5\i\8\h\6\q\4\4\8\9\i\p\q\l\9\t\1\e\2\9\e\t\f\i\x\y\w\5\f\y\1\l\d\8\z\q\z\c\1\7\3\6\u\d\o\n\4\i\5\i\9\1\d\4\y\z\7\0\o\q\9\b\j\l\q\4\i\c\y\i\z\p\m\6\h\s\8\u\e\8\1\8\n\w\7\0\y\i\9\u\a\i\f\e\g\p\p\h\4\h\d\1\j\9\y\u\6\f\7\e\t\z\o\w\o\m\0\m\r\k\e\1\g\1\n\y\j\q\2\n\f\r\a\2\f\a\a\9\7\d\x\5\o\h\u\t\b\q\j\g\k\r\l\c\d\w\0\e\d\1\u\w\c\m\k\x\b\o\v\p\u\0\e\2\b\n\0\3\s\a\j\e\c\f\4\o\q\8\0\d\b\2\n\u\k\u\g\5\c\9\6\0\9\g\3\y\r\v\8\5\j\o\k\5\l\x\6\r\w\v\6\u\k\s\r\j\w\n\t\3\5\j\h\o\q\b\g\l\p\u\k\1\u\3\d\8\9\k\s\g\3\p\x\m\l\c\u\i\z\q\z\c\8\c\0\l\8\8\l\j\l\3\e\h\k\k\5\e\k\w\0\3\t\c\o\1\o\t\0\1\2\h\c\z\1\t\x\w\x\g\p\0\h\5\q\7\l\n\x\2\4\1\q\a\p\8\5\v\9\y\k\c\h\k\t\r\l\6\d\h\i\3\g\m\y\1\g\9\p\3\7\v\k\g\o\n\g\8\s\u\x\h\b\s\i\s\n\8\a\m\d\4\h\l\p\i\i\y\t\b\u\w\b\0\k\t\t\6\1\m\z\g\2\v\3\w\m\9\u\s\s\3\2\c\9\i\5\q\t\k\v\5\2\9\0\2\p\s\j\k\3\0\t\t\u\4\b\l\h\s\4\t\5\z\m\6\t\q\6\h\m\y\2\v\j\6\t\g\d\k\m\p\m\7\p\v\7\3\1\a\7\u\k\1\g\4\n\7\z\9\0\b\k\7\8\0\o\n\b\6\y\v\0\2\9\z\4\a\5\j\3\n\1\2\i\x\c\9\e\t\k\7\6\4\m\n\x\z\8 ]] 00:40:53.998 11:52:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:40:53.998 11:52:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:40:53.998 [2024-07-13 11:52:28.433558] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:53.998 [2024-07-13 11:52:28.434037] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168212 ] 00:40:53.998 [2024-07-13 11:52:28.603587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:54.257 [2024-07-13 11:52:28.795312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:55.451  Copying: 512/512 [B] (average 250 kBps) 00:40:55.451 00:40:55.451 ************************************ 00:40:55.451 END TEST dd_flags_misc 00:40:55.451 ************************************ 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ z4cbrz9tus77h5i8h6q4489ipql9t1e29etfixyw5fy1ld8zqzc1736udon4i5i91d4yz70oq9bjlq4icyizpm6hs8ue818nw70yi9uaifegpph4hd1j9yu6f7etzowom0mrke1g1nyjq2nfra2faa97dx5ohutbqjgkrlcdw0ed1uwcmkxbovpu0e2bn03sajecf4oq80db2nukug5c9609g3yrv85jok5lx6rwv6uksrjwnt35jhoqbglpuk1u3d89ksg3pxmlcuizqzc8c0l88ljl3ehkk5ekw03tco1ot012hcz1txwxgp0h5q7lnx241qap85v9ykchktrl6dhi3gmy1g9p37vkgong8suxhbsisn8amd4hlpiiytbuwb0ktt61mzg2v3wm9uss32c9i5qtkv52902psjk30ttu4blhs4t5zm6tq6hmy2vj6tgdkmpm7pv731a7uk1g4n7z90bk780onb6yv029z4a5j3n12ixc9etk764mnxz8 == \z\4\c\b\r\z\9\t\u\s\7\7\h\5\i\8\h\6\q\4\4\8\9\i\p\q\l\9\t\1\e\2\9\e\t\f\i\x\y\w\5\f\y\1\l\d\8\z\q\z\c\1\7\3\6\u\d\o\n\4\i\5\i\9\1\d\4\y\z\7\0\o\q\9\b\j\l\q\4\i\c\y\i\z\p\m\6\h\s\8\u\e\8\1\8\n\w\7\0\y\i\9\u\a\i\f\e\g\p\p\h\4\h\d\1\j\9\y\u\6\f\7\e\t\z\o\w\o\m\0\m\r\k\e\1\g\1\n\y\j\q\2\n\f\r\a\2\f\a\a\9\7\d\x\5\o\h\u\t\b\q\j\g\k\r\l\c\d\w\0\e\d\1\u\w\c\m\k\x\b\o\v\p\u\0\e\2\b\n\0\3\s\a\j\e\c\f\4\o\q\8\0\d\b\2\n\u\k\u\g\5\c\9\6\0\9\g\3\y\r\v\8\5\j\o\k\5\l\x\6\r\w\v\6\u\k\s\r\j\w\n\t\3\5\j\h\o\q\b\g\l\p\u\k\1\u\3\d\8\9\k\s\g\3\p\x\m\l\c\u\i\z\q\z\c\8\c\0\l\8\8\l\j\l\3\e\h\k\k\5\e\k\w\0\3\t\c\o\1\o\t\0\1\2\h\c\z\1\t\x\w\x\g\p\0\h\5\q\7\l\n\x\2\4\1\q\a\p\8\5\v\9\y\k\c\h\k\t\r\l\6\d\h\i\3\g\m\y\1\g\9\p\3\7\v\k\g\o\n\g\8\s\u\x\h\b\s\i\s\n\8\a\m\d\4\h\l\p\i\i\y\t\b\u\w\b\0\k\t\t\6\1\m\z\g\2\v\3\w\m\9\u\s\s\3\2\c\9\i\5\q\t\k\v\5\2\9\0\2\p\s\j\k\3\0\t\t\u\4\b\l\h\s\4\t\5\z\m\6\t\q\6\h\m\y\2\v\j\6\t\g\d\k\m\p\m\7\p\v\7\3\1\a\7\u\k\1\g\4\n\7\z\9\0\b\k\7\8\0\o\n\b\6\y\v\0\2\9\z\4\a\5\j\3\n\1\2\i\x\c\9\e\t\k\7\6\4\m\n\x\z\8 ]] 00:40:55.452 00:40:55.452 real 0m13.971s 00:40:55.452 user 0m10.757s 00:40:55.452 sys 0m2.110s 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:40:55.452 * Second test run, using AIO 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:55.452 ************************************ 00:40:55.452 START TEST 
dd_flag_append_forced_aio 00:40:55.452 ************************************ 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=rplt3370fd304n8aim79a2dgnmgr5irp 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=z9yj1uz0lr1f8fphnpl8ghclyy0a5fpe 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s rplt3370fd304n8aim79a2dgnmgr5irp 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s z9yj1uz0lr1f8fphnpl8ghclyy0a5fpe 00:40:55.452 11:52:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:40:55.711 [2024-07-13 11:52:30.253951] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:40:55.711 [2024-07-13 11:52:30.254376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168257 ] 00:40:55.711 [2024-07-13 11:52:30.424466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:55.970 [2024-07-13 11:52:30.618229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:57.606  Copying: 32/32 [B] (average 31 kBps) 00:40:57.606 00:40:57.606 ************************************ 00:40:57.606 END TEST dd_flag_append_forced_aio 00:40:57.606 ************************************ 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ z9yj1uz0lr1f8fphnpl8ghclyy0a5fperplt3370fd304n8aim79a2dgnmgr5irp == \z\9\y\j\1\u\z\0\l\r\1\f\8\f\p\h\n\p\l\8\g\h\c\l\y\y\0\a\5\f\p\e\r\p\l\t\3\3\7\0\f\d\3\0\4\n\8\a\i\m\7\9\a\2\d\g\n\m\g\r\5\i\r\p ]] 00:40:57.606 00:40:57.606 real 0m1.764s 00:40:57.606 user 0m1.341s 00:40:57.606 sys 0m0.290s 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:40:57.606 ************************************ 00:40:57.606 START TEST dd_flag_directory_forced_aio 00:40:57.606 ************************************ 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:57.606 11:52:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:40:57.606 [2024-07-13 11:52:32.071174] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:57.606 [2024-07-13 11:52:32.071558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168298 ] 00:40:57.606 [2024-07-13 11:52:32.240600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:57.865 [2024-07-13 11:52:32.435294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.124 [2024-07-13 11:52:32.718722] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:58.124 [2024-07-13 11:52:32.719118] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:58.124 [2024-07-13 11:52:32.719186] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:58.689 [2024-07-13 11:52:33.347813] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:40:59.255 11:52:33 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:40:59.255 [2024-07-13 11:52:33.786190] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:40:59.255 [2024-07-13 11:52:33.786710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168326 ] 00:40:59.255 [2024-07-13 11:52:33.959424] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.513 [2024-07-13 11:52:34.162582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:59.772 [2024-07-13 11:52:34.442255] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:59.772 [2024-07-13 11:52:34.442522] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:40:59.772 [2024-07-13 11:52:34.442591] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:00.340 [2024-07-13 11:52:35.070043] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:00.908 ************************************ 00:41:00.908 END TEST dd_flag_directory_forced_aio 00:41:00.908 ************************************ 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:00.908 00:41:00.908 real 0m3.438s 00:41:00.908 user 0m2.659s 00:41:00.908 sys 0m0.575s 00:41:00.908 11:52:35 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:00.908 ************************************ 00:41:00.908 START TEST dd_flag_nofollow_forced_aio 00:41:00.908 ************************************ 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:00.908 11:52:35 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:00.908 11:52:35 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:00.908 [2024-07-13 11:52:35.558921] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:41:00.908 [2024-07-13 11:52:35.559297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168377 ] 00:41:01.168 [2024-07-13 11:52:35.710986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:01.168 [2024-07-13 11:52:35.890834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:01.426 [2024-07-13 11:52:36.172114] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:01.426 [2024-07-13 11:52:36.172501] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:41:01.426 [2024-07-13 11:52:36.172568] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:02.363 [2024-07-13 11:52:36.800889] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:02.621 11:52:37 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:41:02.621 11:52:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:41:02.621 [2024-07-13 11:52:37.238020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:41:02.621 [2024-07-13 11:52:37.238412] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168407 ] 00:41:02.881 [2024-07-13 11:52:37.404263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:02.881 [2024-07-13 11:52:37.582658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:03.140 [2024-07-13 11:52:37.863056] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:03.140 [2024-07-13 11:52:37.863455] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:41:03.140 [2024-07-13 11:52:37.863524] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:04.075 [2024-07-13 11:52:38.492036] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:04.333 11:52:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
00:41:04.333 [2024-07-13 11:52:38.933604] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:41:04.333 [2024-07-13 11:52:38.934051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168439 ] 00:41:04.602 [2024-07-13 11:52:39.104037] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:04.602 [2024-07-13 11:52:39.296099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:06.242  Copying: 512/512 [B] (average 500 kBps) 00:41:06.242 00:41:06.242 ************************************ 00:41:06.242 END TEST dd_flag_nofollow_forced_aio 00:41:06.242 ************************************ 00:41:06.242 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ gls02x75q679ychgq4a1km0yeovx53i1iqqmjrikl4p1tffw75nhht9fsjw4x8i2395eiivscb2k2d8he8xw882wfqod90z6gl1y2nlc1ad1g34jjgwhkn80pjjaiizdusdhkx575ywz22cv09a6ducnwjznfha202s74bo4izvc7yon5eulr63u03wefpqrpjyic7oovr14091u10bt56jpfecsszyhy8avbtgaehwwkvlfpamfanhatbs0no3makhcsdy6ra2negxwafpwwi7a7bd9r0a63ev6ybcaby5ov7n13ifon1ws11edhmvk3yb0d9vx2qmlx3y9h8u768p250u653hz652sqq0ifojv63qodovujbqg1y7kjiq1qeu5gmhgyvwsqpat1wwr4ni6d8xe1meyvfhihw0k5a699yqi4f2ewglkgvbp9p72bxqityxsdlpkk22oh2832qkmqyb988mgolhh09a86lv7luou0tadkut34v90dpls == \g\l\s\0\2\x\7\5\q\6\7\9\y\c\h\g\q\4\a\1\k\m\0\y\e\o\v\x\5\3\i\1\i\q\q\m\j\r\i\k\l\4\p\1\t\f\f\w\7\5\n\h\h\t\9\f\s\j\w\4\x\8\i\2\3\9\5\e\i\i\v\s\c\b\2\k\2\d\8\h\e\8\x\w\8\8\2\w\f\q\o\d\9\0\z\6\g\l\1\y\2\n\l\c\1\a\d\1\g\3\4\j\j\g\w\h\k\n\8\0\p\j\j\a\i\i\z\d\u\s\d\h\k\x\5\7\5\y\w\z\2\2\c\v\0\9\a\6\d\u\c\n\w\j\z\n\f\h\a\2\0\2\s\7\4\b\o\4\i\z\v\c\7\y\o\n\5\e\u\l\r\6\3\u\0\3\w\e\f\p\q\r\p\j\y\i\c\7\o\o\v\r\1\4\0\9\1\u\1\0\b\t\5\6\j\p\f\e\c\s\s\z\y\h\y\8\a\v\b\t\g\a\e\h\w\w\k\v\l\f\p\a\m\f\a\n\h\a\t\b\s\0\n\o\3\m\a\k\h\c\s\d\y\6\r\a\2\n\e\g\x\w\a\f\p\w\w\i\7\a\7\b\d\9\r\0\a\6\3\e\v\6\y\b\c\a\b\y\5\o\v\7\n\1\3\i\f\o\n\1\w\s\1\1\e\d\h\m\v\k\3\y\b\0\d\9\v\x\2\q\m\l\x\3\y\9\h\8\u\7\6\8\p\2\5\0\u\6\5\3\h\z\6\5\2\s\q\q\0\i\f\o\j\v\6\3\q\o\d\o\v\u\j\b\q\g\1\y\7\k\j\i\q\1\q\e\u\5\g\m\h\g\y\v\w\s\q\p\a\t\1\w\w\r\4\n\i\6\d\8\x\e\1\m\e\y\v\f\h\i\h\w\0\k\5\a\6\9\9\y\q\i\4\f\2\e\w\g\l\k\g\v\b\p\9\p\7\2\b\x\q\i\t\y\x\s\d\l\p\k\k\2\2\o\h\2\8\3\2\q\k\m\q\y\b\9\8\8\m\g\o\l\h\h\0\9\a\8\6\l\v\7\l\u\o\u\0\t\a\d\k\u\t\3\4\v\9\0\d\p\l\s ]] 00:41:06.242 00:41:06.242 real 0m5.119s 00:41:06.242 user 0m3.974s 00:41:06.242 sys 0m0.805s 00:41:06.242 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:06.242 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:06.242 11:52:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:06.242 11:52:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:41:06.242 11:52:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:06.243 ************************************ 00:41:06.243 START TEST dd_flag_noatime_forced_aio 00:41:06.243 ************************************ 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- 
common/autotest_common.sh@1123 -- # noatime 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720871559 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720871560 00:41:06.243 11:52:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:41:07.177 11:52:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:07.177 [2024-07-13 11:52:41.756639] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:41:07.177 [2024-07-13 11:52:41.757042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168502 ] 00:41:07.177 [2024-07-13 11:52:41.924119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:07.436 [2024-07-13 11:52:42.108228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:09.071  Copying: 512/512 [B] (average 500 kBps) 00:41:09.071 00:41:09.071 11:52:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:09.071 11:52:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720871559 )) 00:41:09.071 11:52:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:09.071 11:52:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720871560 )) 00:41:09.071 11:52:43 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:09.071 [2024-07-13 11:52:43.484169] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:09.071 [2024-07-13 11:52:43.484505] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168529 ] 00:41:09.071 [2024-07-13 11:52:43.636521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:09.071 [2024-07-13 11:52:43.822467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:10.578  Copying: 512/512 [B] (average 500 kBps) 00:41:10.578 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:10.578 ************************************ 00:41:10.578 END TEST dd_flag_noatime_forced_aio 00:41:10.578 ************************************ 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720871564 )) 00:41:10.578 00:41:10.578 real 0m4.475s 00:41:10.578 user 0m2.723s 00:41:10.578 sys 0m0.485s 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:10.578 ************************************ 00:41:10.578 START TEST dd_flags_misc_forced_aio 00:41:10.578 ************************************ 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:10.578 11:52:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:41:10.578 [2024-07-13 11:52:45.275549] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:10.578 [2024-07-13 11:52:45.275999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168565 ] 00:41:10.838 [2024-07-13 11:52:45.444283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:11.097 [2024-07-13 11:52:45.638697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:12.293  Copying: 512/512 [B] (average 500 kBps) 00:41:12.293 00:41:12.293 11:52:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ j5qlagkhzgtphvd9pjdog9cw9luy6k4dhgsbpvdrtv83p965proe6k1vi0p2jucvh02raomszso3dnd9no8nrmmg51kyzo3b2kqoe2hykste7zucthwvvg68kkoxmh31qndg26t3y6wu8ygwsdu9996j1icxwgydwrvwqe8a13glf6x9js9u7rq4ij1fwu08mygkazlp8z99mhjwfgq6opb8dww5obb81q2mo2rru8v434rnneaakts7juvs6sz1pj6d79yu827iikdvy4vxo67zdjxh87je6f3tt8owgaygr4vr4dbigpabnaptj667d0j73pz1z3xxzes6fxvynrm1totjxunwxzok8uccood2ozdfum82cxgq7y7do7thvw4xhifg6dinwlwumogdg2ay2tfho0uipw5dhfqckhji053ahzopzrnrgp4av23kkgph2aenepfmbhkyckrz4muvrap3ce32fwdxsdw2oyfwt3yhzw6ayx1ipoz13r5y == \j\5\q\l\a\g\k\h\z\g\t\p\h\v\d\9\p\j\d\o\g\9\c\w\9\l\u\y\6\k\4\d\h\g\s\b\p\v\d\r\t\v\8\3\p\9\6\5\p\r\o\e\6\k\1\v\i\0\p\2\j\u\c\v\h\0\2\r\a\o\m\s\z\s\o\3\d\n\d\9\n\o\8\n\r\m\m\g\5\1\k\y\z\o\3\b\2\k\q\o\e\2\h\y\k\s\t\e\7\z\u\c\t\h\w\v\v\g\6\8\k\k\o\x\m\h\3\1\q\n\d\g\2\6\t\3\y\6\w\u\8\y\g\w\s\d\u\9\9\9\6\j\1\i\c\x\w\g\y\d\w\r\v\w\q\e\8\a\1\3\g\l\f\6\x\9\j\s\9\u\7\r\q\4\i\j\1\f\w\u\0\8\m\y\g\k\a\z\l\p\8\z\9\9\m\h\j\w\f\g\q\6\o\p\b\8\d\w\w\5\o\b\b\8\1\q\2\m\o\2\r\r\u\8\v\4\3\4\r\n\n\e\a\a\k\t\s\7\j\u\v\s\6\s\z\1\p\j\6\d\7\9\y\u\8\2\7\i\i\k\d\v\y\4\v\x\o\6\7\z\d\j\x\h\8\7\j\e\6\f\3\t\t\8\o\w\g\a\y\g\r\4\v\r\4\d\b\i\g\p\a\b\n\a\p\t\j\6\6\7\d\0\j\7\3\p\z\1\z\3\x\x\z\e\s\6\f\x\v\y\n\r\m\1\t\o\t\j\x\u\n\w\x\z\o\k\8\u\c\c\o\o\d\2\o\z\d\f\u\m\8\2\c\x\g\q\7\y\7\d\o\7\t\h\v\w\4\x\h\i\f\g\6\d\i\n\w\l\w\u\m\o\g\d\g\2\a\y\2\t\f\h\o\0\u\i\p\w\5\d\h\f\q\c\k\h\j\i\0\5\3\a\h\z\o\p\z\r\n\r\g\p\4\a\v\2\3\k\k\g\p\h\2\a\e\n\e\p\f\m\b\h\k\y\c\k\r\z\4\m\u\v\r\a\p\3\c\e\3\2\f\w\d\x\s\d\w\2\o\y\f\w\t\3\y\h\z\w\6\a\y\x\1\i\p\o\z\1\3\r\5\y ]] 00:41:12.293 11:52:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:12.293 11:52:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:41:12.293 [2024-07-13 11:52:47.023787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:12.293 [2024-07-13 11:52:47.024229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168593 ] 00:41:12.552 [2024-07-13 11:52:47.192607] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:12.811 [2024-07-13 11:52:47.397038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:14.007  Copying: 512/512 [B] (average 500 kBps) 00:41:14.007 00:41:14.007 11:52:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ j5qlagkhzgtphvd9pjdog9cw9luy6k4dhgsbpvdrtv83p965proe6k1vi0p2jucvh02raomszso3dnd9no8nrmmg51kyzo3b2kqoe2hykste7zucthwvvg68kkoxmh31qndg26t3y6wu8ygwsdu9996j1icxwgydwrvwqe8a13glf6x9js9u7rq4ij1fwu08mygkazlp8z99mhjwfgq6opb8dww5obb81q2mo2rru8v434rnneaakts7juvs6sz1pj6d79yu827iikdvy4vxo67zdjxh87je6f3tt8owgaygr4vr4dbigpabnaptj667d0j73pz1z3xxzes6fxvynrm1totjxunwxzok8uccood2ozdfum82cxgq7y7do7thvw4xhifg6dinwlwumogdg2ay2tfho0uipw5dhfqckhji053ahzopzrnrgp4av23kkgph2aenepfmbhkyckrz4muvrap3ce32fwdxsdw2oyfwt3yhzw6ayx1ipoz13r5y == \j\5\q\l\a\g\k\h\z\g\t\p\h\v\d\9\p\j\d\o\g\9\c\w\9\l\u\y\6\k\4\d\h\g\s\b\p\v\d\r\t\v\8\3\p\9\6\5\p\r\o\e\6\k\1\v\i\0\p\2\j\u\c\v\h\0\2\r\a\o\m\s\z\s\o\3\d\n\d\9\n\o\8\n\r\m\m\g\5\1\k\y\z\o\3\b\2\k\q\o\e\2\h\y\k\s\t\e\7\z\u\c\t\h\w\v\v\g\6\8\k\k\o\x\m\h\3\1\q\n\d\g\2\6\t\3\y\6\w\u\8\y\g\w\s\d\u\9\9\9\6\j\1\i\c\x\w\g\y\d\w\r\v\w\q\e\8\a\1\3\g\l\f\6\x\9\j\s\9\u\7\r\q\4\i\j\1\f\w\u\0\8\m\y\g\k\a\z\l\p\8\z\9\9\m\h\j\w\f\g\q\6\o\p\b\8\d\w\w\5\o\b\b\8\1\q\2\m\o\2\r\r\u\8\v\4\3\4\r\n\n\e\a\a\k\t\s\7\j\u\v\s\6\s\z\1\p\j\6\d\7\9\y\u\8\2\7\i\i\k\d\v\y\4\v\x\o\6\7\z\d\j\x\h\8\7\j\e\6\f\3\t\t\8\o\w\g\a\y\g\r\4\v\r\4\d\b\i\g\p\a\b\n\a\p\t\j\6\6\7\d\0\j\7\3\p\z\1\z\3\x\x\z\e\s\6\f\x\v\y\n\r\m\1\t\o\t\j\x\u\n\w\x\z\o\k\8\u\c\c\o\o\d\2\o\z\d\f\u\m\8\2\c\x\g\q\7\y\7\d\o\7\t\h\v\w\4\x\h\i\f\g\6\d\i\n\w\l\w\u\m\o\g\d\g\2\a\y\2\t\f\h\o\0\u\i\p\w\5\d\h\f\q\c\k\h\j\i\0\5\3\a\h\z\o\p\z\r\n\r\g\p\4\a\v\2\3\k\k\g\p\h\2\a\e\n\e\p\f\m\b\h\k\y\c\k\r\z\4\m\u\v\r\a\p\3\c\e\3\2\f\w\d\x\s\d\w\2\o\y\f\w\t\3\y\h\z\w\6\a\y\x\1\i\p\o\z\1\3\r\5\y ]] 00:41:14.007 11:52:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:14.007 11:52:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:41:14.266 [2024-07-13 11:52:48.785867] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:14.266 [2024-07-13 11:52:48.786316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168632 ] 00:41:14.266 [2024-07-13 11:52:48.950059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:14.525 [2024-07-13 11:52:49.129545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:15.723  Copying: 512/512 [B] (average 166 kBps) 00:41:15.723 00:41:15.723 11:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ j5qlagkhzgtphvd9pjdog9cw9luy6k4dhgsbpvdrtv83p965proe6k1vi0p2jucvh02raomszso3dnd9no8nrmmg51kyzo3b2kqoe2hykste7zucthwvvg68kkoxmh31qndg26t3y6wu8ygwsdu9996j1icxwgydwrvwqe8a13glf6x9js9u7rq4ij1fwu08mygkazlp8z99mhjwfgq6opb8dww5obb81q2mo2rru8v434rnneaakts7juvs6sz1pj6d79yu827iikdvy4vxo67zdjxh87je6f3tt8owgaygr4vr4dbigpabnaptj667d0j73pz1z3xxzes6fxvynrm1totjxunwxzok8uccood2ozdfum82cxgq7y7do7thvw4xhifg6dinwlwumogdg2ay2tfho0uipw5dhfqckhji053ahzopzrnrgp4av23kkgph2aenepfmbhkyckrz4muvrap3ce32fwdxsdw2oyfwt3yhzw6ayx1ipoz13r5y == \j\5\q\l\a\g\k\h\z\g\t\p\h\v\d\9\p\j\d\o\g\9\c\w\9\l\u\y\6\k\4\d\h\g\s\b\p\v\d\r\t\v\8\3\p\9\6\5\p\r\o\e\6\k\1\v\i\0\p\2\j\u\c\v\h\0\2\r\a\o\m\s\z\s\o\3\d\n\d\9\n\o\8\n\r\m\m\g\5\1\k\y\z\o\3\b\2\k\q\o\e\2\h\y\k\s\t\e\7\z\u\c\t\h\w\v\v\g\6\8\k\k\o\x\m\h\3\1\q\n\d\g\2\6\t\3\y\6\w\u\8\y\g\w\s\d\u\9\9\9\6\j\1\i\c\x\w\g\y\d\w\r\v\w\q\e\8\a\1\3\g\l\f\6\x\9\j\s\9\u\7\r\q\4\i\j\1\f\w\u\0\8\m\y\g\k\a\z\l\p\8\z\9\9\m\h\j\w\f\g\q\6\o\p\b\8\d\w\w\5\o\b\b\8\1\q\2\m\o\2\r\r\u\8\v\4\3\4\r\n\n\e\a\a\k\t\s\7\j\u\v\s\6\s\z\1\p\j\6\d\7\9\y\u\8\2\7\i\i\k\d\v\y\4\v\x\o\6\7\z\d\j\x\h\8\7\j\e\6\f\3\t\t\8\o\w\g\a\y\g\r\4\v\r\4\d\b\i\g\p\a\b\n\a\p\t\j\6\6\7\d\0\j\7\3\p\z\1\z\3\x\x\z\e\s\6\f\x\v\y\n\r\m\1\t\o\t\j\x\u\n\w\x\z\o\k\8\u\c\c\o\o\d\2\o\z\d\f\u\m\8\2\c\x\g\q\7\y\7\d\o\7\t\h\v\w\4\x\h\i\f\g\6\d\i\n\w\l\w\u\m\o\g\d\g\2\a\y\2\t\f\h\o\0\u\i\p\w\5\d\h\f\q\c\k\h\j\i\0\5\3\a\h\z\o\p\z\r\n\r\g\p\4\a\v\2\3\k\k\g\p\h\2\a\e\n\e\p\f\m\b\h\k\y\c\k\r\z\4\m\u\v\r\a\p\3\c\e\3\2\f\w\d\x\s\d\w\2\o\y\f\w\t\3\y\h\z\w\6\a\y\x\1\i\p\o\z\1\3\r\5\y ]] 00:41:15.723 11:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:15.723 11:52:50 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:41:15.982 [2024-07-13 11:52:50.521492] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:15.982 [2024-07-13 11:52:50.521895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168653 ] 00:41:15.982 [2024-07-13 11:52:50.687396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:16.241 [2024-07-13 11:52:50.868913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:17.439  Copying: 512/512 [B] (average 166 kBps) 00:41:17.439 00:41:17.714 11:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ j5qlagkhzgtphvd9pjdog9cw9luy6k4dhgsbpvdrtv83p965proe6k1vi0p2jucvh02raomszso3dnd9no8nrmmg51kyzo3b2kqoe2hykste7zucthwvvg68kkoxmh31qndg26t3y6wu8ygwsdu9996j1icxwgydwrvwqe8a13glf6x9js9u7rq4ij1fwu08mygkazlp8z99mhjwfgq6opb8dww5obb81q2mo2rru8v434rnneaakts7juvs6sz1pj6d79yu827iikdvy4vxo67zdjxh87je6f3tt8owgaygr4vr4dbigpabnaptj667d0j73pz1z3xxzes6fxvynrm1totjxunwxzok8uccood2ozdfum82cxgq7y7do7thvw4xhifg6dinwlwumogdg2ay2tfho0uipw5dhfqckhji053ahzopzrnrgp4av23kkgph2aenepfmbhkyckrz4muvrap3ce32fwdxsdw2oyfwt3yhzw6ayx1ipoz13r5y == \j\5\q\l\a\g\k\h\z\g\t\p\h\v\d\9\p\j\d\o\g\9\c\w\9\l\u\y\6\k\4\d\h\g\s\b\p\v\d\r\t\v\8\3\p\9\6\5\p\r\o\e\6\k\1\v\i\0\p\2\j\u\c\v\h\0\2\r\a\o\m\s\z\s\o\3\d\n\d\9\n\o\8\n\r\m\m\g\5\1\k\y\z\o\3\b\2\k\q\o\e\2\h\y\k\s\t\e\7\z\u\c\t\h\w\v\v\g\6\8\k\k\o\x\m\h\3\1\q\n\d\g\2\6\t\3\y\6\w\u\8\y\g\w\s\d\u\9\9\9\6\j\1\i\c\x\w\g\y\d\w\r\v\w\q\e\8\a\1\3\g\l\f\6\x\9\j\s\9\u\7\r\q\4\i\j\1\f\w\u\0\8\m\y\g\k\a\z\l\p\8\z\9\9\m\h\j\w\f\g\q\6\o\p\b\8\d\w\w\5\o\b\b\8\1\q\2\m\o\2\r\r\u\8\v\4\3\4\r\n\n\e\a\a\k\t\s\7\j\u\v\s\6\s\z\1\p\j\6\d\7\9\y\u\8\2\7\i\i\k\d\v\y\4\v\x\o\6\7\z\d\j\x\h\8\7\j\e\6\f\3\t\t\8\o\w\g\a\y\g\r\4\v\r\4\d\b\i\g\p\a\b\n\a\p\t\j\6\6\7\d\0\j\7\3\p\z\1\z\3\x\x\z\e\s\6\f\x\v\y\n\r\m\1\t\o\t\j\x\u\n\w\x\z\o\k\8\u\c\c\o\o\d\2\o\z\d\f\u\m\8\2\c\x\g\q\7\y\7\d\o\7\t\h\v\w\4\x\h\i\f\g\6\d\i\n\w\l\w\u\m\o\g\d\g\2\a\y\2\t\f\h\o\0\u\i\p\w\5\d\h\f\q\c\k\h\j\i\0\5\3\a\h\z\o\p\z\r\n\r\g\p\4\a\v\2\3\k\k\g\p\h\2\a\e\n\e\p\f\m\b\h\k\y\c\k\r\z\4\m\u\v\r\a\p\3\c\e\3\2\f\w\d\x\s\d\w\2\o\y\f\w\t\3\y\h\z\w\6\a\y\x\1\i\p\o\z\1\3\r\5\y ]] 00:41:17.714 11:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:41:17.714 11:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:41:17.714 11:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:41:17.714 11:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:17.714 11:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:17.714 11:52:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:41:17.714 [2024-07-13 11:52:52.264435] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:17.715 [2024-07-13 11:52:52.265447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168677 ] 00:41:17.715 [2024-07-13 11:52:52.439880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:17.990 [2024-07-13 11:52:52.642190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:19.205  Copying: 512/512 [B] (average 500 kBps) 00:41:19.205 00:41:19.464 11:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dcwfwi4mkwqqkdbkul6fiqno7otxe62ylkskc15t9j8qnc63gpcblgqeirvtiu9yt68v7b3t3wzk4t0cccaoqx8z3q4chguam4o7o1vxhlxx430b4qagts9pabbmn96kkfysdtzjz3vluym0llfrlslf4bx8ot45g98gxwd32ww8f3n0akno5lrnmhcvpkdj1lw30rfzwg66zfnxiwi207l24qpp9t242gzij8x7omcuwycw0fuoeqsmjbgwdw7t0tkr63tu8o6oyaeiq0ilkdic6f43uzwi5ra3tx2d92dzgi69fh1i62q3lyul6kc0c3uomltlsarylrmswz6xo3gq9hoe5v900qjrjvz4m54zk0gc9yrdneevz4llihlub79p9ne7kcrkziio5qibmodyxdyhhvljy0zw5ff903rksarjihhsej0ut54855leag895oabxkxx5wk3ws2sesz3cxsriyry6s5ipreoivoynpylbhlxd61wtmk00kli == \d\c\w\f\w\i\4\m\k\w\q\q\k\d\b\k\u\l\6\f\i\q\n\o\7\o\t\x\e\6\2\y\l\k\s\k\c\1\5\t\9\j\8\q\n\c\6\3\g\p\c\b\l\g\q\e\i\r\v\t\i\u\9\y\t\6\8\v\7\b\3\t\3\w\z\k\4\t\0\c\c\c\a\o\q\x\8\z\3\q\4\c\h\g\u\a\m\4\o\7\o\1\v\x\h\l\x\x\4\3\0\b\4\q\a\g\t\s\9\p\a\b\b\m\n\9\6\k\k\f\y\s\d\t\z\j\z\3\v\l\u\y\m\0\l\l\f\r\l\s\l\f\4\b\x\8\o\t\4\5\g\9\8\g\x\w\d\3\2\w\w\8\f\3\n\0\a\k\n\o\5\l\r\n\m\h\c\v\p\k\d\j\1\l\w\3\0\r\f\z\w\g\6\6\z\f\n\x\i\w\i\2\0\7\l\2\4\q\p\p\9\t\2\4\2\g\z\i\j\8\x\7\o\m\c\u\w\y\c\w\0\f\u\o\e\q\s\m\j\b\g\w\d\w\7\t\0\t\k\r\6\3\t\u\8\o\6\o\y\a\e\i\q\0\i\l\k\d\i\c\6\f\4\3\u\z\w\i\5\r\a\3\t\x\2\d\9\2\d\z\g\i\6\9\f\h\1\i\6\2\q\3\l\y\u\l\6\k\c\0\c\3\u\o\m\l\t\l\s\a\r\y\l\r\m\s\w\z\6\x\o\3\g\q\9\h\o\e\5\v\9\0\0\q\j\r\j\v\z\4\m\5\4\z\k\0\g\c\9\y\r\d\n\e\e\v\z\4\l\l\i\h\l\u\b\7\9\p\9\n\e\7\k\c\r\k\z\i\i\o\5\q\i\b\m\o\d\y\x\d\y\h\h\v\l\j\y\0\z\w\5\f\f\9\0\3\r\k\s\a\r\j\i\h\h\s\e\j\0\u\t\5\4\8\5\5\l\e\a\g\8\9\5\o\a\b\x\k\x\x\5\w\k\3\w\s\2\s\e\s\z\3\c\x\s\r\i\y\r\y\6\s\5\i\p\r\e\o\i\v\o\y\n\p\y\l\b\h\l\x\d\6\1\w\t\m\k\0\0\k\l\i ]] 00:41:19.464 11:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:19.464 11:52:53 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:41:19.464 [2024-07-13 11:52:54.032539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:19.464 [2024-07-13 11:52:54.032977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168702 ] 00:41:19.723 [2024-07-13 11:52:54.219046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:19.723 [2024-07-13 11:52:54.408461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.358  Copying: 512/512 [B] (average 500 kBps) 00:41:21.358 00:41:21.359 11:52:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dcwfwi4mkwqqkdbkul6fiqno7otxe62ylkskc15t9j8qnc63gpcblgqeirvtiu9yt68v7b3t3wzk4t0cccaoqx8z3q4chguam4o7o1vxhlxx430b4qagts9pabbmn96kkfysdtzjz3vluym0llfrlslf4bx8ot45g98gxwd32ww8f3n0akno5lrnmhcvpkdj1lw30rfzwg66zfnxiwi207l24qpp9t242gzij8x7omcuwycw0fuoeqsmjbgwdw7t0tkr63tu8o6oyaeiq0ilkdic6f43uzwi5ra3tx2d92dzgi69fh1i62q3lyul6kc0c3uomltlsarylrmswz6xo3gq9hoe5v900qjrjvz4m54zk0gc9yrdneevz4llihlub79p9ne7kcrkziio5qibmodyxdyhhvljy0zw5ff903rksarjihhsej0ut54855leag895oabxkxx5wk3ws2sesz3cxsriyry6s5ipreoivoynpylbhlxd61wtmk00kli == \d\c\w\f\w\i\4\m\k\w\q\q\k\d\b\k\u\l\6\f\i\q\n\o\7\o\t\x\e\6\2\y\l\k\s\k\c\1\5\t\9\j\8\q\n\c\6\3\g\p\c\b\l\g\q\e\i\r\v\t\i\u\9\y\t\6\8\v\7\b\3\t\3\w\z\k\4\t\0\c\c\c\a\o\q\x\8\z\3\q\4\c\h\g\u\a\m\4\o\7\o\1\v\x\h\l\x\x\4\3\0\b\4\q\a\g\t\s\9\p\a\b\b\m\n\9\6\k\k\f\y\s\d\t\z\j\z\3\v\l\u\y\m\0\l\l\f\r\l\s\l\f\4\b\x\8\o\t\4\5\g\9\8\g\x\w\d\3\2\w\w\8\f\3\n\0\a\k\n\o\5\l\r\n\m\h\c\v\p\k\d\j\1\l\w\3\0\r\f\z\w\g\6\6\z\f\n\x\i\w\i\2\0\7\l\2\4\q\p\p\9\t\2\4\2\g\z\i\j\8\x\7\o\m\c\u\w\y\c\w\0\f\u\o\e\q\s\m\j\b\g\w\d\w\7\t\0\t\k\r\6\3\t\u\8\o\6\o\y\a\e\i\q\0\i\l\k\d\i\c\6\f\4\3\u\z\w\i\5\r\a\3\t\x\2\d\9\2\d\z\g\i\6\9\f\h\1\i\6\2\q\3\l\y\u\l\6\k\c\0\c\3\u\o\m\l\t\l\s\a\r\y\l\r\m\s\w\z\6\x\o\3\g\q\9\h\o\e\5\v\9\0\0\q\j\r\j\v\z\4\m\5\4\z\k\0\g\c\9\y\r\d\n\e\e\v\z\4\l\l\i\h\l\u\b\7\9\p\9\n\e\7\k\c\r\k\z\i\i\o\5\q\i\b\m\o\d\y\x\d\y\h\h\v\l\j\y\0\z\w\5\f\f\9\0\3\r\k\s\a\r\j\i\h\h\s\e\j\0\u\t\5\4\8\5\5\l\e\a\g\8\9\5\o\a\b\x\k\x\x\5\w\k\3\w\s\2\s\e\s\z\3\c\x\s\r\i\y\r\y\6\s\5\i\p\r\e\o\i\v\o\y\n\p\y\l\b\h\l\x\d\6\1\w\t\m\k\0\0\k\l\i ]] 00:41:21.359 11:52:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:21.359 11:52:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:41:21.359 [2024-07-13 11:52:55.791994] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:21.359 [2024-07-13 11:52:55.793023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168726 ] 00:41:21.359 [2024-07-13 11:52:55.961092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:21.618 [2024-07-13 11:52:56.153818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:22.813  Copying: 512/512 [B] (average 166 kBps) 00:41:22.813 00:41:22.813 11:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dcwfwi4mkwqqkdbkul6fiqno7otxe62ylkskc15t9j8qnc63gpcblgqeirvtiu9yt68v7b3t3wzk4t0cccaoqx8z3q4chguam4o7o1vxhlxx430b4qagts9pabbmn96kkfysdtzjz3vluym0llfrlslf4bx8ot45g98gxwd32ww8f3n0akno5lrnmhcvpkdj1lw30rfzwg66zfnxiwi207l24qpp9t242gzij8x7omcuwycw0fuoeqsmjbgwdw7t0tkr63tu8o6oyaeiq0ilkdic6f43uzwi5ra3tx2d92dzgi69fh1i62q3lyul6kc0c3uomltlsarylrmswz6xo3gq9hoe5v900qjrjvz4m54zk0gc9yrdneevz4llihlub79p9ne7kcrkziio5qibmodyxdyhhvljy0zw5ff903rksarjihhsej0ut54855leag895oabxkxx5wk3ws2sesz3cxsriyry6s5ipreoivoynpylbhlxd61wtmk00kli == \d\c\w\f\w\i\4\m\k\w\q\q\k\d\b\k\u\l\6\f\i\q\n\o\7\o\t\x\e\6\2\y\l\k\s\k\c\1\5\t\9\j\8\q\n\c\6\3\g\p\c\b\l\g\q\e\i\r\v\t\i\u\9\y\t\6\8\v\7\b\3\t\3\w\z\k\4\t\0\c\c\c\a\o\q\x\8\z\3\q\4\c\h\g\u\a\m\4\o\7\o\1\v\x\h\l\x\x\4\3\0\b\4\q\a\g\t\s\9\p\a\b\b\m\n\9\6\k\k\f\y\s\d\t\z\j\z\3\v\l\u\y\m\0\l\l\f\r\l\s\l\f\4\b\x\8\o\t\4\5\g\9\8\g\x\w\d\3\2\w\w\8\f\3\n\0\a\k\n\o\5\l\r\n\m\h\c\v\p\k\d\j\1\l\w\3\0\r\f\z\w\g\6\6\z\f\n\x\i\w\i\2\0\7\l\2\4\q\p\p\9\t\2\4\2\g\z\i\j\8\x\7\o\m\c\u\w\y\c\w\0\f\u\o\e\q\s\m\j\b\g\w\d\w\7\t\0\t\k\r\6\3\t\u\8\o\6\o\y\a\e\i\q\0\i\l\k\d\i\c\6\f\4\3\u\z\w\i\5\r\a\3\t\x\2\d\9\2\d\z\g\i\6\9\f\h\1\i\6\2\q\3\l\y\u\l\6\k\c\0\c\3\u\o\m\l\t\l\s\a\r\y\l\r\m\s\w\z\6\x\o\3\g\q\9\h\o\e\5\v\9\0\0\q\j\r\j\v\z\4\m\5\4\z\k\0\g\c\9\y\r\d\n\e\e\v\z\4\l\l\i\h\l\u\b\7\9\p\9\n\e\7\k\c\r\k\z\i\i\o\5\q\i\b\m\o\d\y\x\d\y\h\h\v\l\j\y\0\z\w\5\f\f\9\0\3\r\k\s\a\r\j\i\h\h\s\e\j\0\u\t\5\4\8\5\5\l\e\a\g\8\9\5\o\a\b\x\k\x\x\5\w\k\3\w\s\2\s\e\s\z\3\c\x\s\r\i\y\r\y\6\s\5\i\p\r\e\o\i\v\o\y\n\p\y\l\b\h\l\x\d\6\1\w\t\m\k\0\0\k\l\i ]] 00:41:22.813 11:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:41:22.813 11:52:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:41:22.813 [2024-07-13 11:52:57.548370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:22.813 [2024-07-13 11:52:57.548819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168743 ] 00:41:23.071 [2024-07-13 11:52:57.719569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:23.329 [2024-07-13 11:52:57.916316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:24.520  Copying: 512/512 [B] (average 125 kBps) 00:41:24.520 00:41:24.520 ************************************ 00:41:24.520 END TEST dd_flags_misc_forced_aio 00:41:24.520 ************************************ 00:41:24.520 11:52:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dcwfwi4mkwqqkdbkul6fiqno7otxe62ylkskc15t9j8qnc63gpcblgqeirvtiu9yt68v7b3t3wzk4t0cccaoqx8z3q4chguam4o7o1vxhlxx430b4qagts9pabbmn96kkfysdtzjz3vluym0llfrlslf4bx8ot45g98gxwd32ww8f3n0akno5lrnmhcvpkdj1lw30rfzwg66zfnxiwi207l24qpp9t242gzij8x7omcuwycw0fuoeqsmjbgwdw7t0tkr63tu8o6oyaeiq0ilkdic6f43uzwi5ra3tx2d92dzgi69fh1i62q3lyul6kc0c3uomltlsarylrmswz6xo3gq9hoe5v900qjrjvz4m54zk0gc9yrdneevz4llihlub79p9ne7kcrkziio5qibmodyxdyhhvljy0zw5ff903rksarjihhsej0ut54855leag895oabxkxx5wk3ws2sesz3cxsriyry6s5ipreoivoynpylbhlxd61wtmk00kli == \d\c\w\f\w\i\4\m\k\w\q\q\k\d\b\k\u\l\6\f\i\q\n\o\7\o\t\x\e\6\2\y\l\k\s\k\c\1\5\t\9\j\8\q\n\c\6\3\g\p\c\b\l\g\q\e\i\r\v\t\i\u\9\y\t\6\8\v\7\b\3\t\3\w\z\k\4\t\0\c\c\c\a\o\q\x\8\z\3\q\4\c\h\g\u\a\m\4\o\7\o\1\v\x\h\l\x\x\4\3\0\b\4\q\a\g\t\s\9\p\a\b\b\m\n\9\6\k\k\f\y\s\d\t\z\j\z\3\v\l\u\y\m\0\l\l\f\r\l\s\l\f\4\b\x\8\o\t\4\5\g\9\8\g\x\w\d\3\2\w\w\8\f\3\n\0\a\k\n\o\5\l\r\n\m\h\c\v\p\k\d\j\1\l\w\3\0\r\f\z\w\g\6\6\z\f\n\x\i\w\i\2\0\7\l\2\4\q\p\p\9\t\2\4\2\g\z\i\j\8\x\7\o\m\c\u\w\y\c\w\0\f\u\o\e\q\s\m\j\b\g\w\d\w\7\t\0\t\k\r\6\3\t\u\8\o\6\o\y\a\e\i\q\0\i\l\k\d\i\c\6\f\4\3\u\z\w\i\5\r\a\3\t\x\2\d\9\2\d\z\g\i\6\9\f\h\1\i\6\2\q\3\l\y\u\l\6\k\c\0\c\3\u\o\m\l\t\l\s\a\r\y\l\r\m\s\w\z\6\x\o\3\g\q\9\h\o\e\5\v\9\0\0\q\j\r\j\v\z\4\m\5\4\z\k\0\g\c\9\y\r\d\n\e\e\v\z\4\l\l\i\h\l\u\b\7\9\p\9\n\e\7\k\c\r\k\z\i\i\o\5\q\i\b\m\o\d\y\x\d\y\h\h\v\l\j\y\0\z\w\5\f\f\9\0\3\r\k\s\a\r\j\i\h\h\s\e\j\0\u\t\5\4\8\5\5\l\e\a\g\8\9\5\o\a\b\x\k\x\x\5\w\k\3\w\s\2\s\e\s\z\3\c\x\s\r\i\y\r\y\6\s\5\i\p\r\e\o\i\v\o\y\n\p\y\l\b\h\l\x\d\6\1\w\t\m\k\0\0\k\l\i ]] 00:41:24.520 00:41:24.520 real 0m14.038s 00:41:24.520 user 0m10.773s 00:41:24.520 sys 0m2.185s 00:41:24.520 11:52:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:24.520 11:52:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:41:24.778 11:52:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:41:24.778 11:52:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:41:24.778 11:52:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:41:24.778 11:52:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:41:24.778 ************************************ 00:41:24.778 END TEST spdk_dd_posix 00:41:24.778 ************************************ 00:41:24.778 00:41:24.778 real 0m58.313s 00:41:24.778 user 0m43.213s 00:41:24.778 sys 0m8.951s 00:41:24.778 11:52:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:24.778 
11:52:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:41:24.778 11:52:59 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:41:24.778 11:52:59 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:41:24.778 11:52:59 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:24.778 11:52:59 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:24.778 11:52:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:24.778 ************************************ 00:41:24.778 START TEST spdk_dd_malloc 00:41:24.778 ************************************ 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:41:24.778 * Looking for test storage... 00:41:24.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:41:24.778 ************************************ 00:41:24.778 START TEST dd_malloc_copy 00:41:24.778 ************************************ 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:41:24.778 11:52:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:41:24.778 [2024-07-13 11:52:59.494963] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:24.778 [2024-07-13 11:52:59.495300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168860 ] 00:41:24.778 { 00:41:24.778 "subsystems": [ 00:41:24.778 { 00:41:24.778 "subsystem": "bdev", 00:41:24.778 "config": [ 00:41:24.778 { 00:41:24.778 "params": { 00:41:24.778 "num_blocks": 1048576, 00:41:24.778 "block_size": 512, 00:41:24.778 "name": "malloc0" 00:41:24.778 }, 00:41:24.778 "method": "bdev_malloc_create" 00:41:24.778 }, 00:41:24.778 { 00:41:24.778 "params": { 00:41:24.778 "num_blocks": 1048576, 00:41:24.778 "block_size": 512, 00:41:24.778 "name": "malloc1" 00:41:24.778 }, 00:41:24.778 "method": "bdev_malloc_create" 00:41:24.778 }, 00:41:24.778 { 00:41:24.778 "method": "bdev_wait_for_examine" 00:41:24.778 } 00:41:24.778 ] 00:41:24.778 } 00:41:24.778 ] 00:41:24.778 } 00:41:25.036 [2024-07-13 11:52:59.661310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:25.294 [2024-07-13 11:52:59.848576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:32.273  Copying: 216/512 [MB] (216 MBps) Copying: 432/512 [MB] (216 MBps) Copying: 512/512 [MB] (average 216 MBps) 00:41:32.273 00:41:32.273 11:53:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:41:32.273 11:53:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:41:32.273 11:53:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:41:32.273 11:53:06 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:41:32.273 [2024-07-13 11:53:06.791390] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
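The copies being timed here use nothing but the two malloc bdevs declared in the JSON printed above. Written out as a single stand-alone command it looks roughly like this; the here-document on fd 62 is an assumption standing in for the gen_conf plumbing the test uses, and the sizes are the ones shown above (1048576 blocks of 512 bytes, i.e. 512 MiB per bdev).

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 62<<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create", "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_malloc_create", "params": { "name": "malloc1", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_wait_for_examine" } ] } ] }
JSON

The reverse pass that is starting next simply swaps --ib and --ob.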
00:41:32.273 [2024-07-13 11:53:06.791788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid168947 ] 00:41:32.273 { 00:41:32.273 "subsystems": [ 00:41:32.273 { 00:41:32.273 "subsystem": "bdev", 00:41:32.273 "config": [ 00:41:32.273 { 00:41:32.273 "params": { 00:41:32.273 "num_blocks": 1048576, 00:41:32.273 "block_size": 512, 00:41:32.273 "name": "malloc0" 00:41:32.273 }, 00:41:32.273 "method": "bdev_malloc_create" 00:41:32.273 }, 00:41:32.273 { 00:41:32.273 "params": { 00:41:32.273 "num_blocks": 1048576, 00:41:32.273 "block_size": 512, 00:41:32.273 "name": "malloc1" 00:41:32.273 }, 00:41:32.273 "method": "bdev_malloc_create" 00:41:32.273 }, 00:41:32.273 { 00:41:32.273 "method": "bdev_wait_for_examine" 00:41:32.273 } 00:41:32.273 ] 00:41:32.273 } 00:41:32.273 ] 00:41:32.273 } 00:41:32.273 [2024-07-13 11:53:06.958874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.532 [2024-07-13 11:53:07.139338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:39.546  Copying: 217/512 [MB] (217 MBps) Copying: 436/512 [MB] (218 MBps) Copying: 512/512 [MB] (average 218 MBps) 00:41:39.546 00:41:39.546 ************************************ 00:41:39.546 END TEST dd_malloc_copy 00:41:39.546 ************************************ 00:41:39.546 00:41:39.546 real 0m14.570s 00:41:39.546 user 0m13.036s 00:41:39.546 sys 0m1.412s 00:41:39.546 11:53:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:39.546 11:53:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:41:39.546 11:53:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:41:39.546 00:41:39.546 real 0m14.705s 00:41:39.546 user 0m13.105s 00:41:39.546 sys 0m1.477s 00:41:39.546 11:53:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:39.546 ************************************ 00:41:39.546 END TEST spdk_dd_malloc 00:41:39.546 ************************************ 00:41:39.546 11:53:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:41:39.546 11:53:14 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:41:39.546 11:53:14 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:41:39.546 11:53:14 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:41:39.546 11:53:14 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:39.546 11:53:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:39.546 ************************************ 00:41:39.547 START TEST spdk_dd_bdev_to_bdev 00:41:39.547 ************************************ 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:41:39.547 * Looking for test storage... 
00:41:39.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 
-- # bdev0=Nvme0n1 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:41:39.547 11:53:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:41:39.547 [2024-07-13 11:53:14.243779] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:41:39.547 [2024-07-13 11:53:14.244180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169118 ] 00:41:39.805 [2024-07-13 11:53:14.414960] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:40.063 [2024-07-13 11:53:14.598321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:41.565  Copying: 256/256 [MB] (average 1224 MBps) 00:41:41.565 00:41:41.565 11:53:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:41:41.565 11:53:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:41:41.565 11:53:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:41:41.565 11:53:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:41:41.565 11:53:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:41:41.565 11:53:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:41:41.565 11:53:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:41.565 11:53:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:41.565 ************************************ 00:41:41.565 START TEST dd_inflate_file 00:41:41.565 ************************************ 00:41:41.565 11:53:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:41:41.565 [2024-07-13 11:53:16.233834] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:41.565 [2024-07-13 11:53:16.234875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169147 ] 00:41:41.823 [2024-07-13 11:53:16.403869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:42.082 [2024-07-13 11:53:16.594971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:43.278  Copying: 64/64 [MB] (average 1306 MBps) 00:41:43.278 00:41:43.278 ************************************ 00:41:43.278 END TEST dd_inflate_file 00:41:43.278 ************************************ 00:41:43.278 00:41:43.278 real 0m1.805s 00:41:43.278 user 0m1.385s 00:41:43.278 sys 0m0.287s 00:41:43.278 11:53:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:43.278 11:53:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:43.278 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:43.537 ************************************ 00:41:43.537 START TEST dd_copy_to_out_bdev 00:41:43.537 ************************************ 00:41:43.537 11:53:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:41:43.537 { 00:41:43.537 "subsystems": [ 00:41:43.537 { 00:41:43.538 "subsystem": "bdev", 00:41:43.538 "config": [ 00:41:43.538 { 00:41:43.538 "params": { 00:41:43.538 "block_size": 4096, 00:41:43.538 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:43.538 "name": "aio1" 00:41:43.538 }, 00:41:43.538 "method": "bdev_aio_create" 00:41:43.538 }, 00:41:43.538 { 00:41:43.538 "params": { 00:41:43.538 "trtype": "pcie", 00:41:43.538 "traddr": "0000:00:10.0", 00:41:43.538 "name": "Nvme0" 00:41:43.538 }, 00:41:43.538 "method": "bdev_nvme_attach_controller" 00:41:43.538 }, 00:41:43.538 { 00:41:43.538 "method": "bdev_wait_for_examine" 00:41:43.538 } 00:41:43.538 ] 00:41:43.538 } 00:41:43.538 ] 00:41:43.538 } 00:41:43.538 [2024-07-13 11:53:18.097616] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:43.538 [2024-07-13 11:53:18.098290] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169201 ] 00:41:43.538 [2024-07-13 11:53:18.271525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:43.796 [2024-07-13 11:53:18.479130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:46.810  Copying: 50/64 [MB] (50 MBps) Copying: 64/64 [MB] (average 50 MBps) 00:41:46.811 00:41:46.811 ************************************ 00:41:46.811 END TEST dd_copy_to_out_bdev 00:41:46.811 ************************************ 00:41:46.811 00:41:46.811 real 0m3.221s 00:41:46.811 user 0m2.781s 00:41:46.811 sys 0m0.340s 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:46.811 ************************************ 00:41:46.811 START TEST dd_offset_magic 00:41:46.811 ************************************ 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:41:46.811 11:53:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:46.811 [2024-07-13 11:53:21.364524] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:46.811 [2024-07-13 11:53:21.364689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169281 ] 00:41:46.811 { 00:41:46.811 "subsystems": [ 00:41:46.811 { 00:41:46.811 "subsystem": "bdev", 00:41:46.811 "config": [ 00:41:46.811 { 00:41:46.811 "params": { 00:41:46.811 "block_size": 4096, 00:41:46.811 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:46.811 "name": "aio1" 00:41:46.811 }, 00:41:46.811 "method": "bdev_aio_create" 00:41:46.811 }, 00:41:46.811 { 00:41:46.811 "params": { 00:41:46.811 "trtype": "pcie", 00:41:46.811 "traddr": "0000:00:10.0", 00:41:46.811 "name": "Nvme0" 00:41:46.811 }, 00:41:46.811 "method": "bdev_nvme_attach_controller" 00:41:46.811 }, 00:41:46.811 { 00:41:46.811 "method": "bdev_wait_for_examine" 00:41:46.811 } 00:41:46.811 ] 00:41:46.811 } 00:41:46.811 ] 00:41:46.811 } 00:41:46.811 [2024-07-13 11:53:21.520735] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:47.070 [2024-07-13 11:53:21.723115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:48.937  Copying: 65/65 [MB] (average 203 MBps) 00:41:48.937 00:41:48.937 11:53:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:41:48.937 11:53:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:41:48.937 11:53:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:41:48.937 11:53:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:48.937 [2024-07-13 11:53:23.669716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:48.937 [2024-07-13 11:53:23.669937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169319 ] 00:41:48.937 { 00:41:48.937 "subsystems": [ 00:41:48.937 { 00:41:48.937 "subsystem": "bdev", 00:41:48.937 "config": [ 00:41:48.937 { 00:41:48.937 "params": { 00:41:48.937 "block_size": 4096, 00:41:48.937 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:48.937 "name": "aio1" 00:41:48.937 }, 00:41:48.937 "method": "bdev_aio_create" 00:41:48.937 }, 00:41:48.937 { 00:41:48.937 "params": { 00:41:48.937 "trtype": "pcie", 00:41:48.937 "traddr": "0000:00:10.0", 00:41:48.937 "name": "Nvme0" 00:41:48.937 }, 00:41:48.937 "method": "bdev_nvme_attach_controller" 00:41:48.937 }, 00:41:48.937 { 00:41:48.937 "method": "bdev_wait_for_examine" 00:41:48.937 } 00:41:48.937 ] 00:41:48.937 } 00:41:48.937 ] 00:41:48.937 } 00:41:49.195 [2024-07-13 11:53:23.835223] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:49.453 [2024-07-13 11:53:24.003507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:50.665  Copying: 1024/1024 [kB] (average 1000 MBps) 00:41:50.665 00:41:50.665 11:53:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:41:50.665 11:53:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:41:50.665 11:53:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:41:50.665 11:53:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:41:50.665 11:53:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:41:50.665 11:53:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:41:50.665 11:53:25 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:50.941 [2024-07-13 11:53:25.424602] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:50.941 [2024-07-13 11:53:25.424808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169348 ] 00:41:50.941 { 00:41:50.941 "subsystems": [ 00:41:50.941 { 00:41:50.941 "subsystem": "bdev", 00:41:50.941 "config": [ 00:41:50.941 { 00:41:50.941 "params": { 00:41:50.941 "block_size": 4096, 00:41:50.941 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:50.941 "name": "aio1" 00:41:50.941 }, 00:41:50.941 "method": "bdev_aio_create" 00:41:50.941 }, 00:41:50.941 { 00:41:50.941 "params": { 00:41:50.941 "trtype": "pcie", 00:41:50.941 "traddr": "0000:00:10.0", 00:41:50.941 "name": "Nvme0" 00:41:50.941 }, 00:41:50.941 "method": "bdev_nvme_attach_controller" 00:41:50.941 }, 00:41:50.941 { 00:41:50.941 "method": "bdev_wait_for_examine" 00:41:50.941 } 00:41:50.941 ] 00:41:50.941 } 00:41:50.941 ] 00:41:50.941 } 00:41:50.941 [2024-07-13 11:53:25.594999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:51.210 [2024-07-13 11:53:25.765086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:52.710  Copying: 65/65 [MB] (average 320 MBps) 00:41:52.710 00:41:52.710 11:53:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:41:52.710 11:53:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:41:52.710 11:53:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:41:52.710 11:53:27 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:52.710 [2024-07-13 11:53:27.279631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:41:52.710 [2024-07-13 11:53:27.280469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169377 ] 00:41:52.710 { 00:41:52.710 "subsystems": [ 00:41:52.710 { 00:41:52.710 "subsystem": "bdev", 00:41:52.710 "config": [ 00:41:52.710 { 00:41:52.710 "params": { 00:41:52.710 "block_size": 4096, 00:41:52.710 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:52.710 "name": "aio1" 00:41:52.710 }, 00:41:52.710 "method": "bdev_aio_create" 00:41:52.710 }, 00:41:52.710 { 00:41:52.710 "params": { 00:41:52.710 "trtype": "pcie", 00:41:52.710 "traddr": "0000:00:10.0", 00:41:52.710 "name": "Nvme0" 00:41:52.710 }, 00:41:52.710 "method": "bdev_nvme_attach_controller" 00:41:52.710 }, 00:41:52.710 { 00:41:52.710 "method": "bdev_wait_for_examine" 00:41:52.710 } 00:41:52.710 ] 00:41:52.710 } 00:41:52.710 ] 00:41:52.710 } 00:41:52.710 [2024-07-13 11:53:27.450459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:52.968 [2024-07-13 11:53:27.619818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.470  Copying: 1024/1024 [kB] (average 1000 MBps) 00:41:54.470 00:41:54.470 11:53:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:41:54.470 11:53:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:41:54.470 ************************************ 00:41:54.470 END TEST dd_offset_magic 00:41:54.470 ************************************ 00:41:54.470 00:41:54.470 real 0m7.692s 00:41:54.470 user 0m5.762s 00:41:54.470 sys 0m1.139s 00:41:54.470 11:53:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:54.470 11:53:28 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:41:54.470 11:53:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:54.470 [2024-07-13 11:53:29.087956] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
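The offset check that just finished is easy to reproduce in isolation: write a marker at a known 1 MiB-block offset with --seek, read the same block back with --skip, and compare the first 26 bytes. The sketch below follows that shape; marker.bin, readback.bin and $CONF are illustrative names rather than anything the suite actually creates.

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
MAGIC='This Is Our Magic, find it'                       # 26 bytes, same marker as above
printf '%s' "$MAGIC" > marker.bin
"$DD" --if=marker.bin --ob=aio1 --bs=1048576 --seek=16 --json "$CONF"              # land it at block 16
"$DD" --ib=aio1 --of=readback.bin --bs=1048576 --count=1 --skip=16 --json "$CONF"  # read block 16 back
read -rn26 magic_check < readback.bin
[[ "$magic_check" == "$MAGIC" ]] && echo 'offsets line up'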
00:41:54.470 [2024-07-13 11:53:29.088121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169437 ] 00:41:54.470 { 00:41:54.470 "subsystems": [ 00:41:54.470 { 00:41:54.470 "subsystem": "bdev", 00:41:54.470 "config": [ 00:41:54.470 { 00:41:54.470 "params": { 00:41:54.470 "block_size": 4096, 00:41:54.470 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:54.470 "name": "aio1" 00:41:54.470 }, 00:41:54.470 "method": "bdev_aio_create" 00:41:54.470 }, 00:41:54.470 { 00:41:54.470 "params": { 00:41:54.470 "trtype": "pcie", 00:41:54.470 "traddr": "0000:00:10.0", 00:41:54.470 "name": "Nvme0" 00:41:54.470 }, 00:41:54.470 "method": "bdev_nvme_attach_controller" 00:41:54.470 }, 00:41:54.470 { 00:41:54.470 "method": "bdev_wait_for_examine" 00:41:54.470 } 00:41:54.470 ] 00:41:54.470 } 00:41:54.470 ] 00:41:54.470 } 00:41:54.728 [2024-07-13 11:53:29.239485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:54.728 [2024-07-13 11:53:29.411704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:56.231  Copying: 5120/5120 [kB] (average 1250 MBps) 00:41:56.231 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:41:56.231 11:53:30 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:56.231 [2024-07-13 11:53:30.771749] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
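The cleanup pass running here simply zeroes the first few megabytes of each bdev so the next suite starts from a blank device. Spelled out, with $CONF again standing in for the JSON shown above, it is just:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
for bdev in Nvme0n1 aio1; do
    # count=5 at bs=1048576 wipes 5 MiB, comfortably covering the 4194330-byte test span
    "$DD" --if=/dev/zero --bs=1048576 --ob="$bdev" --count=5 --json "$CONF"
done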
00:41:56.231 [2024-07-13 11:53:30.771965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169466 ] 00:41:56.231 { 00:41:56.231 "subsystems": [ 00:41:56.231 { 00:41:56.231 "subsystem": "bdev", 00:41:56.231 "config": [ 00:41:56.231 { 00:41:56.231 "params": { 00:41:56.231 "block_size": 4096, 00:41:56.231 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:41:56.231 "name": "aio1" 00:41:56.231 }, 00:41:56.231 "method": "bdev_aio_create" 00:41:56.231 }, 00:41:56.231 { 00:41:56.231 "params": { 00:41:56.231 "trtype": "pcie", 00:41:56.231 "traddr": "0000:00:10.0", 00:41:56.231 "name": "Nvme0" 00:41:56.231 }, 00:41:56.231 "method": "bdev_nvme_attach_controller" 00:41:56.231 }, 00:41:56.231 { 00:41:56.231 "method": "bdev_wait_for_examine" 00:41:56.231 } 00:41:56.231 ] 00:41:56.231 } 00:41:56.231 ] 00:41:56.231 } 00:41:56.231 [2024-07-13 11:53:30.943077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:56.490 [2024-07-13 11:53:31.107567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:58.122  Copying: 5120/5120 [kB] (average 357 MBps) 00:41:58.122 00:41:58.122 11:53:32 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:41:58.122 00:41:58.122 real 0m18.527s 00:41:58.122 user 0m14.203s 00:41:58.122 sys 0m2.879s 00:41:58.122 11:53:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:41:58.122 11:53:32 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:41:58.122 ************************************ 00:41:58.122 END TEST spdk_dd_bdev_to_bdev 00:41:58.122 ************************************ 00:41:58.122 11:53:32 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:41:58.122 11:53:32 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:41:58.122 11:53:32 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:41:58.122 11:53:32 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:58.122 11:53:32 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:58.122 11:53:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:41:58.122 ************************************ 00:41:58.122 START TEST spdk_dd_sparse 00:41:58.122 ************************************ 00:41:58.122 11:53:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:41:58.122 * Looking for test storage... 
00:41:58.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:41:58.122 11:53:32 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:41:58.122 11:53:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:58.122 11:53:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:58.122 11:53:32 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:58.122 11:53:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- 
# lvol=dd_lvol 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:41:58.123 1+0 records in 00:41:58.123 1+0 records out 00:41:58.123 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00814186 s, 515 MB/s 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:41:58.123 1+0 records in 00:41:58.123 1+0 records out 00:41:58.123 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00872565 s, 481 MB/s 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:41:58.123 1+0 records in 00:41:58.123 1+0 records out 00:41:58.123 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00851684 s, 492 MB/s 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:41:58.123 ************************************ 00:41:58.123 START TEST dd_sparse_file_to_file 00:41:58.123 ************************************ 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:41:58.123 11:53:32 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:41:58.381 { 00:41:58.381 "subsystems": [ 00:41:58.381 { 00:41:58.381 "subsystem": "bdev", 00:41:58.381 "config": [ 00:41:58.381 { 00:41:58.381 "params": { 00:41:58.381 "block_size": 4096, 00:41:58.381 "filename": "dd_sparse_aio_disk", 00:41:58.381 "name": "dd_aio" 00:41:58.381 }, 00:41:58.381 "method": "bdev_aio_create" 00:41:58.381 }, 00:41:58.381 { 00:41:58.381 "params": { 00:41:58.381 "lvs_name": "dd_lvstore", 00:41:58.381 "bdev_name": "dd_aio" 
00:41:58.381 }, 00:41:58.381 "method": "bdev_lvol_create_lvstore" 00:41:58.381 }, 00:41:58.381 { 00:41:58.381 "method": "bdev_wait_for_examine" 00:41:58.381 } 00:41:58.381 ] 00:41:58.381 } 00:41:58.381 ] 00:41:58.381 } 00:41:58.381 [2024-07-13 11:53:32.886696] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:41:58.381 [2024-07-13 11:53:32.887560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169550 ] 00:41:58.381 [2024-07-13 11:53:33.055992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:58.639 [2024-07-13 11:53:33.240414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:00.275  Copying: 12/36 [MB] (average 1000 MBps) 00:42:00.275 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:42:00.275 00:42:00.275 real 0m1.966s 00:42:00.275 user 0m1.512s 00:42:00.275 sys 0m0.311s 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:42:00.275 ************************************ 00:42:00.275 END TEST dd_sparse_file_to_file 00:42:00.275 ************************************ 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:42:00.275 ************************************ 00:42:00.275 START TEST dd_sparse_file_to_bdev 00:42:00.275 ************************************ 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:42:00.275 11:53:34 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size_in_mib"]=36 ["thin_provision"]=true) 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:42:00.275 11:53:34 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:00.275 { 00:42:00.275 "subsystems": [ 00:42:00.275 { 00:42:00.275 "subsystem": "bdev", 00:42:00.275 "config": [ 00:42:00.275 { 00:42:00.275 "params": { 00:42:00.275 "block_size": 4096, 00:42:00.275 "filename": "dd_sparse_aio_disk", 00:42:00.275 "name": "dd_aio" 00:42:00.275 }, 00:42:00.275 "method": "bdev_aio_create" 00:42:00.275 }, 00:42:00.275 { 00:42:00.275 "params": { 00:42:00.275 "size_in_mib": 36, 00:42:00.275 "lvs_name": "dd_lvstore", 00:42:00.275 "thin_provision": true, 00:42:00.275 "lvol_name": "dd_lvol" 00:42:00.275 }, 00:42:00.275 "method": "bdev_lvol_create" 00:42:00.275 }, 00:42:00.275 { 00:42:00.275 "method": "bdev_wait_for_examine" 00:42:00.275 } 00:42:00.275 ] 00:42:00.275 } 00:42:00.275 ] 00:42:00.275 } 00:42:00.275 [2024-07-13 11:53:34.904348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
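For reference, the sparse round trip verified a little further up (dd_sparse_file_to_file) fits in a few lines: punch three 4 MiB extents of data into an otherwise empty 36 MiB file, copy it with --sparse, and check that both the apparent size and the allocated block count survive. A condensed sketch, with $CONF standing in for the dd_aio/dd_lvstore JSON shown above:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dd if=/dev/zero of=file_zero1 bs=4M count=1              # data at 0-4 MiB
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4       # data at 16-20 MiB, hole in between
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8       # data at 32-36 MiB
"$DD" --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json "$CONF"
stat --printf='%s %b\n' file_zero1 file_zero2            # both: 37748736 bytes, 24576 blocks -> holes kept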
00:42:00.275 [2024-07-13 11:53:34.904667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169610 ] 00:42:00.534 [2024-07-13 11:53:35.068687] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:00.534 [2024-07-13 11:53:35.255928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:02.038  Copying: 12/36 [MB] (average 521 MBps) 00:42:02.038 00:42:02.038 00:42:02.038 real 0m1.907s 00:42:02.038 user 0m1.454s 00:42:02.038 sys 0m0.350s 00:42:02.038 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:02.038 ************************************ 00:42:02.038 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:42:02.038 END TEST dd_sparse_file_to_bdev 00:42:02.038 ************************************ 00:42:02.038 11:53:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:42:02.038 11:53:36 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:42:02.038 11:53:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:02.038 11:53:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:02.038 11:53:36 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:42:02.297 ************************************ 00:42:02.297 START TEST dd_sparse_bdev_to_file 00:42:02.297 ************************************ 00:42:02.297 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:42:02.297 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:42:02.297 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:42:02.297 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:42:02.297 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:42:02.297 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:42:02.297 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:42:02.297 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:42:02.297 11:53:36 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:42:02.297 { 00:42:02.297 "subsystems": [ 00:42:02.297 { 00:42:02.297 "subsystem": "bdev", 00:42:02.297 "config": [ 00:42:02.297 { 00:42:02.297 "params": { 00:42:02.297 "block_size": 4096, 00:42:02.297 "filename": "dd_sparse_aio_disk", 00:42:02.297 "name": "dd_aio" 00:42:02.297 }, 00:42:02.297 "method": "bdev_aio_create" 00:42:02.297 }, 00:42:02.297 { 00:42:02.297 "method": "bdev_wait_for_examine" 00:42:02.297 } 00:42:02.297 ] 00:42:02.297 } 00:42:02.297 ] 00:42:02.297 } 00:42:02.297 [2024-07-13 11:53:36.864609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
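The same idea drives the two lvol legs: file_zero2 is pushed into the thin-provisioned 36 MiB dd_lvol (so only the 12 MiB of real data is allocated on dd_sparse_aio_disk), and the read-back into file_zero3 that follows must report identical stat numbers. Roughly, with $CONF a placeholder for the dd_aio/dd_lvol JSON printed above:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$DD" --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json "$CONF"   # file -> thin lvol
"$DD" --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json "$CONF"   # lvol -> file
stat --printf='%s %b\n' file_zero2 file_zero3            # expected: 37748736 bytes, 24576 blocks each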
00:42:02.297 [2024-07-13 11:53:36.864838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169668 ] 00:42:02.297 [2024-07-13 11:53:37.034893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:02.556 [2024-07-13 11:53:37.216448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:04.192  Copying: 12/36 [MB] (average 923 MBps) 00:42:04.193 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:42:04.193 00:42:04.193 real 0m1.902s 00:42:04.193 user 0m1.500s 00:42:04.193 sys 0m0.310s 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:04.193 ************************************ 00:42:04.193 END TEST dd_sparse_bdev_to_file 00:42:04.193 ************************************ 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:42:04.193 ************************************ 00:42:04.193 END TEST spdk_dd_sparse 00:42:04.193 ************************************ 00:42:04.193 00:42:04.193 real 0m6.091s 00:42:04.193 user 0m4.632s 00:42:04.193 sys 0m1.115s 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:04.193 11:53:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:42:04.193 11:53:38 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:42:04.193 11:53:38 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:42:04.193 11:53:38 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:04.193 11:53:38 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:04.193 11:53:38 spdk_dd 
-- common/autotest_common.sh@10 -- # set +x 00:42:04.193 ************************************ 00:42:04.193 START TEST spdk_dd_negative 00:42:04.193 ************************************ 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:42:04.193 * Looking for test storage... 00:42:04.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:04.193 ************************************ 00:42:04.193 START TEST dd_invalid_arguments 00:42:04.193 ************************************ 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:04.193 11:53:38 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:42:04.453 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:42:04.453 00:42:04.453 CPU options: 00:42:04.453 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:42:04.453 (like [0,1,10]) 00:42:04.453 --lcores lcore to CPU mapping list. The list is in the format: 00:42:04.453 [<,lcores[@CPUs]>...] 00:42:04.453 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:42:04.453 Within the group, '-' is used for range separator, 00:42:04.453 ',' is used for single number separator. 
00:42:04.453 '( )' can be omitted for single element group, 00:42:04.453 '@' can be omitted if cpus and lcores have the same value 00:42:04.453 --disable-cpumask-locks Disable CPU core lock files. 00:42:04.453 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:42:04.453 pollers in the app support interrupt mode) 00:42:04.453 -p, --main-core main (primary) core for DPDK 00:42:04.453 00:42:04.453 Configuration options: 00:42:04.453 -c, --config, --json JSON config file 00:42:04.453 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:42:04.453 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:42:04.453 --wait-for-rpc wait for RPCs to initialize subsystems 00:42:04.453 --rpcs-allowed comma-separated list of permitted RPCS 00:42:04.453 --json-ignore-init-errors don't exit on invalid config entry 00:42:04.453 00:42:04.453 Memory options: 00:42:04.453 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:42:04.453 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:42:04.453 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:42:04.453 -R, --huge-unlink unlink huge files after initialization 00:42:04.453 -n, --mem-channels number of memory channels used for DPDK 00:42:04.453 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:42:04.453 --msg-mempool-size global message memory pool size in count (default: 262143) 00:42:04.453 --no-huge run without using hugepages 00:42:04.453 -i, --shm-id shared memory ID (optional) 00:42:04.453 -g, --single-file-segments force creating just one hugetlbfs file 00:42:04.453 00:42:04.453 PCI options: 00:42:04.453 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:42:04.453 -B, --pci-blocked pci addr to block (can be used more than once) 00:42:04.453 -u, --no-pci disable PCI access 00:42:04.453 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:42:04.453 00:42:04.453 Log options: 00:42:04.453 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:42:04.453 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:42:04.453 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:42:04.453 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:42:04.453 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:42:04.453 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:42:04.453 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:42:04.453 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:42:04.453 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:42:04.453 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:42:04.453 virtio_vfio_user, vmd) 00:42:04.453 --silence-noticelog disable notice level logging to stderr 00:42:04.453 00:42:04.453 Trace options: 00:42:04.453 --num-trace-entries number of trace entries for each core, must be power of 2, 00:42:04.453 setting 0 to disable trace (default 32768) 00:42:04.453 Tracepoints vary in size and can use more than one trace entry. 
00:42:04.453 -e, --tpoint-group [:] 00:42:04.453 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:42:04.453 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:42:04.453 [2024-07-13 11:53:39.006690] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:42:04.453 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:42:04.453 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:42:04.453 a tracepoint group. First tpoint inside a group can be enabled by 00:42:04.453 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:42:04.453 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:42:04.453 in /include/spdk_internal/trace_defs.h 00:42:04.453 00:42:04.453 Other options: 00:42:04.453 -h, --help show this usage 00:42:04.453 -v, --version print SPDK version 00:42:04.453 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:42:04.453 --env-context Opaque context for use of the env implementation 00:42:04.453 00:42:04.453 Application specific: 00:42:04.453 [--------- DD Options ---------] 00:42:04.453 --if Input file. Must specify either --if or --ib. 00:42:04.453 --ib Input bdev. Must specifier either --if or --ib 00:42:04.453 --of Output file. Must specify either --of or --ob. 00:42:04.453 --ob Output bdev. Must specify either --of or --ob. 00:42:04.453 --iflag Input file flags. 00:42:04.453 --oflag Output file flags. 00:42:04.453 --bs I/O unit size (default: 4096) 00:42:04.453 --qd Queue depth (default: 2) 00:42:04.453 --count I/O unit count. The number of I/O units to copy. (default: all) 00:42:04.453 --skip Skip this many I/O units at start of input. (default: 0) 00:42:04.453 --seek Skip this many I/O units at start of output. (default: 0) 00:42:04.453 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:42:04.453 --sparse Enable hole skipping in input target 00:42:04.453 Available iflag and oflag values: 00:42:04.453 append - append mode 00:42:04.453 direct - use direct I/O for data 00:42:04.453 directory - fail unless a directory 00:42:04.453 dsync - use synchronized I/O for data 00:42:04.453 noatime - do not update access time 00:42:04.453 noctty - do not assign controlling terminal from file 00:42:04.453 nofollow - do not follow symlinks 00:42:04.453 nonblock - use non-blocking I/O 00:42:04.453 sync - use synchronized I/O for data and metadata 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:04.453 00:42:04.453 real 0m0.115s 00:42:04.453 user 0m0.063s 00:42:04.453 sys 0m0.051s 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:42:04.453 ************************************ 00:42:04.453 END TEST dd_invalid_arguments 00:42:04.453 ************************************ 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:04.453 ************************************ 00:42:04.453 START TEST dd_double_input 00:42:04.453 ************************************ 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.453 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.454 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.454 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.454 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:04.454 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:42:04.454 [2024-07-13 11:53:39.177939] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:42:04.713 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:42:04.713 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:04.713 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:04.713 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:04.713 00:42:04.713 real 0m0.120s 00:42:04.713 user 0m0.065s 00:42:04.713 sys 0m0.054s 00:42:04.713 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:04.713 11:53:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:42:04.713 ************************************ 00:42:04.714 END TEST dd_double_input 00:42:04.714 ************************************ 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:04.714 ************************************ 00:42:04.714 START TEST dd_double_output 00:42:04.714 ************************************ 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:42:04.714 [2024-07-13 11:53:39.346534] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:04.714 00:42:04.714 real 0m0.120s 00:42:04.714 user 0m0.058s 00:42:04.714 sys 0m0.059s 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:42:04.714 ************************************ 00:42:04.714 END TEST dd_double_output 00:42:04.714 ************************************ 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:04.714 ************************************ 00:42:04.714 START TEST dd_no_input 00:42:04.714 ************************************ 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.714 11:53:39 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:04.714 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:42:04.973 [2024-07-13 11:53:39.533846] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:04.973 00:42:04.973 real 0m0.123s 00:42:04.973 user 0m0.059s 00:42:04.973 sys 0m0.062s 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:42:04.973 ************************************ 00:42:04.973 END TEST dd_no_input 00:42:04.973 ************************************ 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:04.973 ************************************ 00:42:04.973 START TEST dd_no_output 00:42:04.973 ************************************ 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.973 11:53:39 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:04.973 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:42:04.973 [2024-07-13 11:53:39.702448] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:05.232 00:42:05.232 real 0m0.116s 00:42:05.232 user 0m0.055s 00:42:05.232 sys 0m0.059s 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:42:05.232 ************************************ 00:42:05.232 END TEST dd_no_output 00:42:05.232 ************************************ 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:05.232 ************************************ 00:42:05.232 START TEST dd_wrong_blocksize 00:42:05.232 ************************************ 00:42:05.232 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # 
local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:42:05.233 [2024-07-13 11:53:39.875140] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:05.233 00:42:05.233 real 0m0.116s 00:42:05.233 user 0m0.057s 00:42:05.233 sys 0m0.056s 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:42:05.233 ************************************ 00:42:05.233 END TEST dd_wrong_blocksize 00:42:05.233 ************************************ 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:05.233 ************************************ 00:42:05.233 START TEST dd_smaller_blocksize 00:42:05.233 ************************************ 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:42:05.233 
11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:05.233 11:53:39 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:42:05.492 [2024-07-13 11:53:40.039155] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
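Each negative case in this suite wraps the spdk_dd call in the harness's NOT helper, which inverts the exit status so an expected failure counts as a pass. A minimal sketch of that pattern is shown here; it is an illustration only, not the actual autotest_common.sh implementation, which additionally remaps large exit codes (the es=244 -> es=116 -> es=1 sequence visible in the log).

    NOT() {
        # succeed only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    # e.g. the oversized block size case above (paths shortened for readability):
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=dd.dump0 --of=dd.dump1 --bs=99999999999999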
00:42:05.492 [2024-07-13 11:53:40.039555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169954 ] 00:42:05.492 [2024-07-13 11:53:40.211255] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:05.751 [2024-07-13 11:53:40.473234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:06.319 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:42:06.578 [2024-07-13 11:53:41.106202] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:42:06.578 [2024-07-13 11:53:41.106565] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:07.146 [2024-07-13 11:53:41.752112] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:07.405 ************************************ 00:42:07.405 END TEST dd_smaller_blocksize 00:42:07.405 ************************************ 00:42:07.405 11:53:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:42:07.405 11:53:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:07.405 11:53:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:42:07.405 11:53:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:42:07.405 11:53:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:42:07.405 11:53:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:07.405 00:42:07.405 real 0m2.148s 00:42:07.405 user 0m1.484s 00:42:07.405 sys 0m0.557s 00:42:07.405 11:53:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:07.405 11:53:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:42:07.663 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:07.664 ************************************ 00:42:07.664 START TEST dd_invalid_count 00:42:07.664 ************************************ 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:42:07.664 11:53:42 
spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:42:07.664 [2024-07-13 11:53:42.226670] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:07.664 00:42:07.664 real 0m0.093s 00:42:07.664 user 0m0.043s 00:42:07.664 sys 0m0.048s 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:42:07.664 ************************************ 00:42:07.664 END TEST dd_invalid_count 00:42:07.664 ************************************ 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:07.664 ************************************ 00:42:07.664 START TEST dd_invalid_oflag 00:42:07.664 ************************************ 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- 
common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:07.664 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:42:07.664 [2024-07-13 11:53:42.389259] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:07.923 00:42:07.923 real 0m0.115s 00:42:07.923 user 0m0.057s 00:42:07.923 sys 0m0.055s 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:42:07.923 ************************************ 00:42:07.923 END TEST dd_invalid_oflag 00:42:07.923 ************************************ 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:07.923 ************************************ 00:42:07.923 START TEST dd_invalid_iflag 00:42:07.923 ************************************ 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:42:07.923 11:53:42 
spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:42:07.923 [2024-07-13 11:53:42.557724] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:07.923 00:42:07.923 real 0m0.120s 00:42:07.923 user 0m0.077s 00:42:07.923 sys 0m0.043s 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:42:07.923 ************************************ 00:42:07.923 END TEST dd_invalid_iflag 00:42:07.923 ************************************ 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:07.923 ************************************ 00:42:07.923 START TEST dd_unknown_flag 00:42:07.923 ************************************ 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 
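The dd_unknown_flag case that follows passes --oflag=-1, which is not among the iflag/oflag values listed in the usage text earlier (append, direct, directory, dsync, noatime, noctty, nofollow, nonblock, sync). For contrast, a well-formed invocation using one of those flags might look like the sketch below (paths shortened; not taken verbatim from the test):

    # a recognized output flag is accepted; "-1" is rejected with "Unknown file flag: -1"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=dd.dump0 --of=dd.dump1 --oflag=direct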
00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:07.923 11:53:42 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:42:08.181 [2024-07-13 11:53:42.723113] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:42:08.181 [2024-07-13 11:53:42.723514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170087 ] 00:42:08.181 [2024-07-13 11:53:42.891923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:08.438 [2024-07-13 11:53:43.076268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:08.696 [2024-07-13 11:53:43.357971] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:42:08.696 [2024-07-13 11:53:43.358384] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:08.696  Copying: 0/0 [B] (average 0 Bps)[2024-07-13 11:53:43.358646] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:42:09.262 [2024-07-13 11:53:43.986705] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:09.837 00:42:09.837 00:42:09.837 ************************************ 00:42:09.837 END TEST dd_unknown_flag 00:42:09.837 ************************************ 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:09.837 00:42:09.837 real 0m1.742s 00:42:09.837 user 0m1.325s 00:42:09.837 sys 0m0.282s 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:09.837 ************************************ 00:42:09.837 START TEST dd_invalid_json 00:42:09.837 ************************************ 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:42:09.837 11:53:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:42:09.837 [2024-07-13 11:53:44.522164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:09.837 [2024-07-13 11:53:44.523237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170128 ] 00:42:10.095 [2024-07-13 11:53:44.692291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:10.353 [2024-07-13 11:53:44.875731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:10.353 [2024-07-13 11:53:44.876068] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:42:10.353 [2024-07-13 11:53:44.876208] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:10.353 [2024-07-13 11:53:44.876329] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:10.353 [2024-07-13 11:53:44.876554] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:42:10.611 ************************************ 00:42:10.611 END TEST dd_invalid_json 00:42:10.611 ************************************ 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:42:10.611 00:42:10.611 real 0m0.766s 00:42:10.611 user 0m0.495s 00:42:10.611 sys 0m0.167s 
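Note: the two negative cases traced above (dd_unknown_flag and dd_invalid_json) only need spdk_dd to print its parse error and exit non-zero. A minimal standalone reproduction, assuming the build-tree paths shown in the trace, is the sketch below; it is not part of the harness itself.

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    IF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    OF=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    # dd_unknown_flag: "-1" is not a valid file flag, so parse_flags reports
    # "Unknown file flag: -1" and spdk_dd exits non-zero.
    if "$SPDK_DD" --if="$IF" --of="$OF" --oflag=-1; then
        echo "expected spdk_dd to reject --oflag=-1" >&2; exit 1
    fi

    # dd_invalid_json: an empty config on the --json descriptor makes
    # parse_json fail with "JSON data cannot be empty".
    if "$SPDK_DD" --if="$IF" --of="$OF" --json <(:); then
        echo "expected spdk_dd to reject empty JSON" >&2; exit 1
    fi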
00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:42:10.611 00:42:10.611 real 0m6.420s 00:42:10.611 user 0m4.229s 00:42:10.611 sys 0m1.765s 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:10.611 11:53:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:42:10.611 ************************************ 00:42:10.611 END TEST spdk_dd_negative 00:42:10.611 ************************************ 00:42:10.611 11:53:45 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:42:10.611 00:42:10.611 real 2m28.975s 00:42:10.611 user 1m54.273s 00:42:10.611 sys 0m24.708s 00:42:10.611 11:53:45 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:10.611 11:53:45 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:42:10.611 ************************************ 00:42:10.611 END TEST spdk_dd 00:42:10.612 ************************************ 00:42:10.612 11:53:45 -- common/autotest_common.sh@1142 -- # return 0 00:42:10.612 11:53:45 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:42:10.612 11:53:45 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:42:10.612 11:53:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:10.612 11:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:10.612 11:53:45 -- common/autotest_common.sh@10 -- # set +x 00:42:10.612 ************************************ 00:42:10.612 START TEST blockdev_nvme 00:42:10.612 ************************************ 00:42:10.612 11:53:45 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:42:10.870 * Looking for test storage... 
00:42:10.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:10.870 11:53:45 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:42:10.870 11:53:45 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=170218 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 170218 00:42:10.871 11:53:45 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:42:10.871 11:53:45 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 170218 ']' 00:42:10.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:10.871 11:53:45 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:10.871 11:53:45 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:10.871 11:53:45 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:10.871 11:53:45 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:10.871 11:53:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:10.871 [2024-07-13 11:53:45.502723] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:42:10.871 [2024-07-13 11:53:45.502925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170218 ] 00:42:11.130 [2024-07-13 11:53:45.663599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:11.130 [2024-07-13 11:53:45.845272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:12.065 11:53:46 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:12.065 11:53:46 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:42:12.065 11:53:46 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:42:12.065 11:53:46 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:42:12.065 11:53:46 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:42:12.065 11:53:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:42:12.065 11:53:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:42:12.065 11:53:46 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:42:12.065 11:53:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.065 11:53:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:12.065 11:53:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.065 11:53:46 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:42:12.065 11:53:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.065 11:53:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:12.065 11:53:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq 
-r '.[] | select(.claimed == false)' 00:42:12.066 11:53:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:42:12.066 11:53:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "74300321-e782-41eb-aba5-24b4f219abb5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "74300321-e782-41eb-aba5-24b4f219abb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:42:12.323 11:53:46 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:42:12.323 11:53:46 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:42:12.323 11:53:46 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:42:12.323 11:53:46 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 170218 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 170218 ']' 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 170218 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170218 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:12.323 killing process with pid 170218 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170218' 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 170218 00:42:12.323 11:53:46 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 170218 00:42:14.225 11:53:48 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:14.225 11:53:48 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 
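Note: setup_nvme_conf above feeds spdk_tgt a single bdev_nvme_attach_controller entry generated by gen_nvme.sh, and the bdev_get_bdevs dump is what the jq filter picks apart. A rough interactive equivalent against a running target is sketched below; the flag spelling follows the stock rpc.py conventions and is an assumption, not something captured in this run.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the emulated controller at 0000:00:10.0; its namespace shows up
    # as bdev "Nvme0n1".
    "$RPC" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0

    # Block until bdev examine has finished, then dump the bdev list that the
    # test filters with: jq -r '.[] | select(.claimed == false)'
    "$RPC" bdev_wait_for_examine
    "$RPC" bdev_get_bdevs | jq -r '.[] | .name'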
00:42:14.225 11:53:48 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:42:14.225 11:53:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:14.225 11:53:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:14.225 ************************************ 00:42:14.225 START TEST bdev_hello_world 00:42:14.225 ************************************ 00:42:14.225 11:53:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:42:14.225 [2024-07-13 11:53:48.876692] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:14.225 [2024-07-13 11:53:48.876976] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170311 ] 00:42:14.483 [2024-07-13 11:53:49.047894] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:14.742 [2024-07-13 11:53:49.240013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.000 [2024-07-13 11:53:49.654836] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:42:15.000 [2024-07-13 11:53:49.654959] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:42:15.000 [2024-07-13 11:53:49.655000] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:42:15.000 [2024-07-13 11:53:49.657726] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:42:15.000 [2024-07-13 11:53:49.658311] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:42:15.000 [2024-07-13 11:53:49.658374] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:42:15.000 [2024-07-13 11:53:49.658670] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
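Note: the hello_world pass above is the stock hello_bdev example pointed at Nvme0n1 from the generated config; it opens the bdev, writes "Hello World!", reads it back and stops the app. Minus the run_test bookkeeping, the invocation reduces to:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1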
00:42:15.000 00:42:15.000 [2024-07-13 11:53:49.658726] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:42:15.934 00:42:15.934 real 0m1.777s 00:42:15.934 user 0m1.361s 00:42:15.934 sys 0m0.308s 00:42:15.934 ************************************ 00:42:15.934 END TEST bdev_hello_world 00:42:15.934 ************************************ 00:42:15.934 11:53:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:15.934 11:53:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:42:15.934 11:53:50 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:42:15.934 11:53:50 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:42:15.935 11:53:50 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:15.935 11:53:50 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:15.935 11:53:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:15.935 ************************************ 00:42:15.935 START TEST bdev_bounds 00:42:15.935 ************************************ 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=170368 00:42:15.935 Process bdevio pid: 170368 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 170368' 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 170368 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 170368 ']' 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:15.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:15.935 11:53:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:16.193 [2024-07-13 11:53:50.718928] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
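Note: bdevio above is started in wait mode against the same bdev.json and is then driven over its RPC socket by tests.py perform_tests, which produces the CUnit pass/fail listing that follows. A minimal manual sketch of that two-step flow, assuming the paths from the trace (the harness additionally waits for the RPC socket before driving the tests):

    BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio

    # -w: start and wait to be driven over RPC; -s 0 mirrors PRE_RESERVED_MEM=0.
    "$BDEVIO" -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!

    # perform_tests triggers the "Suite: bdevio tests on: Nvme0n1" CUnit run below.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

    kill "$bdevio_pid"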
00:42:16.193 [2024-07-13 11:53:50.719143] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170368 ] 00:42:16.193 [2024-07-13 11:53:50.899577] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:16.451 [2024-07-13 11:53:51.061630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:16.451 [2024-07-13 11:53:51.061663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:16.451 [2024-07-13 11:53:51.061669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.041 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:17.041 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:42:17.041 11:53:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:42:17.041 I/O targets: 00:42:17.041 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:42:17.041 00:42:17.041 00:42:17.041 CUnit - A unit testing framework for C - Version 2.1-3 00:42:17.041 http://cunit.sourceforge.net/ 00:42:17.041 00:42:17.041 00:42:17.041 Suite: bdevio tests on: Nvme0n1 00:42:17.041 Test: blockdev write read block ...passed 00:42:17.041 Test: blockdev write zeroes read block ...passed 00:42:17.041 Test: blockdev write zeroes read no split ...passed 00:42:17.041 Test: blockdev write zeroes read split ...passed 00:42:17.041 Test: blockdev write zeroes read split partial ...passed 00:42:17.041 Test: blockdev reset ...[2024-07-13 11:53:51.776952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:42:17.041 [2024-07-13 11:53:51.780609] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:17.041 passed 00:42:17.041 Test: blockdev write read 8 blocks ...passed 00:42:17.041 Test: blockdev write read size > 128k ...passed 00:42:17.041 Test: blockdev write read invalid size ...passed 00:42:17.041 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:17.041 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:17.041 Test: blockdev write read max offset ...passed 00:42:17.041 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:17.041 Test: blockdev writev readv 8 blocks ...passed 00:42:17.041 Test: blockdev writev readv 30 x 1block ...passed 00:42:17.041 Test: blockdev writev readv block ...passed 00:42:17.041 Test: blockdev writev readv size > 128k ...passed 00:42:17.041 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:17.041 Test: blockdev comparev and writev ...[2024-07-13 11:53:51.791053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x9500d000 len:0x1000 00:42:17.041 [2024-07-13 11:53:51.791154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:17.041 passed 00:42:17.041 Test: blockdev nvme passthru rw ...passed 00:42:17.041 Test: blockdev nvme passthru vendor specific ...[2024-07-13 11:53:51.792052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:42:17.041 [2024-07-13 11:53:51.792114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:42:17.041 passed 00:42:17.299 Test: blockdev nvme admin passthru ...passed 00:42:17.299 Test: blockdev copy ...passed 00:42:17.299 00:42:17.299 Run Summary: Type Total Ran Passed Failed Inactive 00:42:17.299 suites 1 1 n/a 0 0 00:42:17.299 tests 23 23 23 0 0 00:42:17.299 asserts 152 152 152 0 n/a 00:42:17.299 00:42:17.299 Elapsed time = 0.179 seconds 00:42:17.299 0 00:42:17.299 11:53:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 170368 00:42:17.299 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 170368 ']' 00:42:17.299 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 170368 00:42:17.299 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:42:17.300 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:17.300 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170368 00:42:17.300 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:17.300 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:17.300 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 170368' 00:42:17.300 killing process with pid 170368 00:42:17.300 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 170368 00:42:17.300 11:53:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 170368 00:42:18.234 11:53:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:42:18.234 00:42:18.234 real 0m2.119s 00:42:18.234 user 0m4.930s 00:42:18.234 sys 0m0.376s 00:42:18.234 ************************************ 00:42:18.234 END TEST bdev_bounds 00:42:18.234 
************************************ 00:42:18.234 11:53:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:18.234 11:53:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:18.234 11:53:52 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:42:18.234 11:53:52 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:42:18.234 11:53:52 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:42:18.234 11:53:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:18.234 11:53:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:18.234 ************************************ 00:42:18.234 START TEST bdev_nbd 00:42:18.234 ************************************ 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=170430 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 170430 /var/tmp/spdk-nbd.sock 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 170430 ']' 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:18.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
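Note: the bdev_nbd test exercises the kernel NBD export path. bdev_svc is started with its RPC socket at /var/tmp/spdk-nbd.sock, Nvme0n1 is exported as /dev/nbd0, probed with a direct 4 KiB read, listed, and detached again. Condensed to the RPC calls that follow in the trace, as a sketch:

    RPC_NBD="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Export the bdev as a kernel NBD device; the RPC prints the device it picked.
    nbd=$($RPC_NBD nbd_start_disk Nvme0n1)

    # waitfornbd: the device is usable once it appears in /proc/partitions
    # and a direct 4 KiB read succeeds.
    grep -q -w "$(basename "$nbd")" /proc/partitions
    dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct

    $RPC_NBD nbd_get_disks        # JSON list pairing /dev/nbd0 with "Nvme0n1"
    $RPC_NBD nbd_stop_disk "$nbd" # detach again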
00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:18.234 11:53:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:18.234 [2024-07-13 11:53:52.886722] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:18.234 [2024-07-13 11:53:52.887549] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:18.490 [2024-07-13 11:53:53.062277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:18.491 [2024-07-13 11:53:53.223320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:42:19.071 11:53:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:42:19.375 
11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:19.375 1+0 records in 00:42:19.375 1+0 records out 00:42:19.375 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594625 s, 6.9 MB/s 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:42:19.375 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:42:19.640 { 00:42:19.640 "nbd_device": "/dev/nbd0", 00:42:19.640 "bdev_name": "Nvme0n1" 00:42:19.640 } 00:42:19.640 ]' 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:42:19.640 { 00:42:19.640 "nbd_device": "/dev/nbd0", 00:42:19.640 "bdev_name": "Nvme0n1" 00:42:19.640 } 00:42:19.640 ]' 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:19.640 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i++ )) 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:19.898 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:20.157 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:20.157 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:20.157 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:20.416 11:53:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:42:20.416 /dev/nbd0 00:42:20.416 11:53:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:20.675 1+0 records in 00:42:20.675 1+0 records out 00:42:20.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639385 s, 6.4 MB/s 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:20.675 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:20.933 { 00:42:20.933 "nbd_device": "/dev/nbd0", 00:42:20.933 "bdev_name": "Nvme0n1" 00:42:20.933 } 00:42:20.933 ]' 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:20.933 { 00:42:20.933 "nbd_device": "/dev/nbd0", 00:42:20.933 "bdev_name": "Nvme0n1" 00:42:20.933 } 00:42:20.933 ]' 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:42:20.933 256+0 records in 00:42:20.933 256+0 records out 00:42:20.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00606057 s, 173 MB/s 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:20.933 256+0 records in 00:42:20.933 256+0 records out 00:42:20.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0755351 s, 13.9 MB/s 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:20.933 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:21.192 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:21.192 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:21.192 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:21.192 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
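Note: the data-verification pass above is a plain dd/cmp round trip: 1 MiB of random data is pushed through the NBD export with O_DIRECT and compared byte-for-byte against the device. Stripped of the harness plumbing, and assuming /dev/nbd0 is still exported:

    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

    # 256 x 4 KiB of random data, written through the NBD device with O_DIRECT,
    # then compared against the first 1 MiB read back from it.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" /dev/nbd0
    rm "$tmp"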
00:42:21.192 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:21.192 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:21.192 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:42:21.450 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:42:21.450 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:21.450 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:21.450 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:21.450 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:21.450 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:21.450 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:21.450 11:53:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:42:21.709 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:42:21.967 malloc_lvol_verify 00:42:21.967 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:42:22.226 c1357e97-7794-4b8f-867e-d290b7a1297a 00:42:22.226 11:53:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:42:22.484 28c85e54-f969-4965-a894-005e6459282d 00:42:22.484 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol 
/dev/nbd0 00:42:22.743 /dev/nbd0 00:42:22.743 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:42:22.743 mke2fs 1.45.5 (07-Jan-2020) 00:42:22.743 Creating filesystem with 1024 4k blocks and 1024 inodes 00:42:22.743 00:42:22.743 Filesystem too small for a journal 00:42:22.743 00:42:22.743 Allocating group tables: 0/1 done 00:42:22.743 Writing inode tables: 0/1 done 00:42:22.743 Writing superblocks and filesystem accounting information: 0/1 done 00:42:22.743 00:42:22.743 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:42:22.743 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:42:22.743 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:22.743 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:42:22.743 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:22.743 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:22.743 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:22.743 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 170430 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 170430 ']' 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 170430 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 170430 00:42:23.002 killing process with pid 170430 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 170430' 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@967 -- # kill 170430 00:42:23.002 11:53:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # wait 170430 00:42:23.938 ************************************ 00:42:23.938 END TEST bdev_nbd 00:42:23.938 ************************************ 00:42:23.938 11:53:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:42:23.938 00:42:23.938 real 0m5.790s 00:42:23.938 user 0m8.365s 00:42:23.938 sys 0m1.060s 00:42:23.938 11:53:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:23.938 11:53:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:23.938 11:53:58 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:42:23.938 skipping fio tests on NVMe due to multi-ns failures. 00:42:23.938 11:53:58 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:42:23.938 11:53:58 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:42:23.938 11:53:58 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:42:23.938 11:53:58 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:23.938 11:53:58 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:23.938 11:53:58 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:42:23.938 11:53:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:23.938 11:53:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:23.938 ************************************ 00:42:23.938 START TEST bdev_verify 00:42:23.938 ************************************ 00:42:23.938 11:53:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:24.197 [2024-07-13 11:53:58.737641] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:24.197 [2024-07-13 11:53:58.738086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170627 ] 00:42:24.197 [2024-07-13 11:53:58.914560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:24.455 [2024-07-13 11:53:59.088312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:24.455 [2024-07-13 11:53:59.088310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:25.022 Running I/O for 5 seconds... 
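Note: bdev_verify runs the stock bdevperf example against the same bdev.json: queue depth 128, 4 KiB I/Os, the "verify" workload (reads are checked against previously written data) for 5 seconds on two cores. The invocation captured above reduces to the sketch below; the per-core Nvme0n1 rows and the IOPS/latency summary that follow come from this run.

    # Queue depth 128, 4 KiB I/Os, verify workload, 5 s, cores 0-1 (-m 0x3);
    # -C is kept exactly as captured.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # The big-I/O variant further down only changes the I/O size to -o 65536.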
00:42:30.288 00:42:30.288 Latency(us) 00:42:30.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:30.288 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:42:30.288 Verification LBA range: start 0x0 length 0xa0000 00:42:30.288 Nvme0n1 : 5.01 8675.77 33.89 0.00 0.00 14682.62 990.49 20852.36 00:42:30.288 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:42:30.288 Verification LBA range: start 0xa0000 length 0xa0000 00:42:30.288 Nvme0n1 : 5.01 9613.83 37.55 0.00 0.00 13246.07 249.48 21924.77 00:42:30.288 =================================================================================================================== 00:42:30.288 Total : 18289.60 71.44 0.00 0.00 13927.65 249.48 21924.77 00:42:30.854 ************************************ 00:42:30.854 END TEST bdev_verify 00:42:30.854 ************************************ 00:42:30.854 00:42:30.854 real 0m6.892s 00:42:30.854 user 0m12.632s 00:42:30.854 sys 0m0.275s 00:42:30.854 11:54:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:30.854 11:54:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:42:30.854 11:54:05 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:42:30.854 11:54:05 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:30.854 11:54:05 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:42:30.854 11:54:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:30.854 11:54:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:31.112 ************************************ 00:42:31.112 START TEST bdev_verify_big_io 00:42:31.112 ************************************ 00:42:31.112 11:54:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:42:31.112 [2024-07-13 11:54:05.674755] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:31.112 [2024-07-13 11:54:05.675161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170742 ] 00:42:31.112 [2024-07-13 11:54:05.849501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:31.370 [2024-07-13 11:54:06.029361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:31.370 [2024-07-13 11:54:06.029362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:31.936 Running I/O for 5 seconds... 
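The table just above is bdevperf output for the 4 KiB verify workload: one Job line per core in the 0x3 mask, with IOPS, throughput and average/min/max latency in microseconds, followed by a Total line; bdev_verify_big_io below repeats the same run with 64 KiB I/O (-o 65536). A minimal sketch of the same invocation outside the test harness, assuming the /home/vagrant/spdk_repo checkout this job uses (the -C flag is carried over verbatim from the trace):

    # Reproduce the verify run by hand against the bdev config generated
    # earlier in this job. -q 128: queue depth, -o 4096: I/O size in bytes,
    # -w verify: write-then-read-back-and-compare workload, -t 5: run time
    # in seconds, -m 0x3: cores 0 and 1 (-C as in the trace).
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3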
00:42:37.199 00:42:37.199 Latency(us) 00:42:37.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:37.199 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:42:37.199 Verification LBA range: start 0x0 length 0xa000 00:42:37.199 Nvme0n1 : 5.05 1128.41 70.53 0.00 0.00 111186.47 785.69 122969.37 00:42:37.199 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:42:37.199 Verification LBA range: start 0xa000 length 0xa000 00:42:37.199 Nvme0n1 : 5.03 1263.27 78.95 0.00 0.00 99495.52 737.28 176351.42 00:42:37.199 =================================================================================================================== 00:42:37.199 Total : 2391.68 149.48 0.00 0.00 105020.42 737.28 176351.42 00:42:38.133 ************************************ 00:42:38.133 END TEST bdev_verify_big_io 00:42:38.133 ************************************ 00:42:38.133 00:42:38.133 real 0m7.245s 00:42:38.133 user 0m13.383s 00:42:38.133 sys 0m0.236s 00:42:38.133 11:54:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:38.133 11:54:12 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:42:38.391 11:54:12 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:42:38.391 11:54:12 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:38.391 11:54:12 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:42:38.391 11:54:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:38.391 11:54:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:38.391 ************************************ 00:42:38.391 START TEST bdev_write_zeroes 00:42:38.391 ************************************ 00:42:38.391 11:54:12 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:38.391 [2024-07-13 11:54:12.974654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:38.391 [2024-07-13 11:54:12.975088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170864 ] 00:42:38.648 [2024-07-13 11:54:13.144042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:38.648 [2024-07-13 11:54:13.317693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:39.212 Running I/O for 1 seconds... 
00:42:40.144 00:42:40.144 Latency(us) 00:42:40.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:40.144 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:42:40.144 Nvme0n1 : 1.00 64278.41 251.09 0.00 0.00 1986.04 592.06 14000.87 00:42:40.144 =================================================================================================================== 00:42:40.144 Total : 64278.41 251.09 0.00 0.00 1986.04 592.06 14000.87 00:42:41.079 00:42:41.079 real 0m2.711s 00:42:41.079 user 0m2.419s 00:42:41.079 sys 0m0.192s 00:42:41.079 11:54:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:41.079 ************************************ 00:42:41.079 END TEST bdev_write_zeroes 00:42:41.079 ************************************ 00:42:41.079 11:54:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:42:41.079 11:54:15 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:42:41.079 11:54:15 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:41.079 11:54:15 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:42:41.079 11:54:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:41.079 11:54:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:41.079 ************************************ 00:42:41.079 START TEST bdev_json_nonenclosed 00:42:41.079 ************************************ 00:42:41.079 11:54:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:41.079 [2024-07-13 11:54:15.731894] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:41.079 [2024-07-13 11:54:15.732108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170922 ] 00:42:41.337 [2024-07-13 11:54:15.900702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:41.337 [2024-07-13 11:54:16.068040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:41.337 [2024-07-13 11:54:16.068166] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:42:41.337 [2024-07-13 11:54:16.068219] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:41.337 [2024-07-13 11:54:16.068244] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:41.903 00:42:41.903 real 0m0.721s 00:42:41.903 user 0m0.484s 00:42:41.903 sys 0m0.136s 00:42:41.903 11:54:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:42:41.903 ************************************ 00:42:41.903 END TEST bdev_json_nonenclosed 00:42:41.903 ************************************ 00:42:41.903 11:54:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:41.903 11:54:16 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:42:41.903 11:54:16 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:42:41.903 11:54:16 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:42:41.903 11:54:16 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:41.903 11:54:16 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:42:41.903 11:54:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:41.903 11:54:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:41.903 ************************************ 00:42:41.903 START TEST bdev_json_nonarray 00:42:41.903 ************************************ 00:42:41.903 11:54:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:41.903 [2024-07-13 11:54:16.512081] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:41.903 [2024-07-13 11:54:16.512301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid170954 ] 00:42:42.161 [2024-07-13 11:54:16.683679] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.161 [2024-07-13 11:54:16.865971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.161 [2024-07-13 11:54:16.866109] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
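bdev_json_nonenclosed (finished just above) and bdev_json_nonarray (whose "'subsystems' should be an array" error appears immediately above, with its teardown lines following) are deliberate negative tests: bdevperf is pointed at a malformed --json file, so the json_config ERROR, the spdk_app_stop WARNING and the non-zero exit status (234 in this trace) are the expected outcome, and the harness records the case as passed. A hand-run sketch of the same check, assuming the same repo paths:

    # Feed bdevperf a config that is not enclosed in {} and confirm it refuses
    # to start. 234 is the status observed in this trace; any non-zero exit
    # means the malformed config was rejected.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/bdevperf" --json "$SPDK/test/bdev/nonenclosed.json" \
        -q 128 -o 4096 -w write_zeroes -t 1
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "rejected as expected (exit $status)"
    else
        echo "unexpected: malformed config was accepted" >&2
    fi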
00:42:42.162 [2024-07-13 11:54:16.866159] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:42.162 [2024-07-13 11:54:16.866184] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:42.729 00:42:42.729 real 0m0.743s 00:42:42.729 user 0m0.490s 00:42:42.729 sys 0m0.152s 00:42:42.729 ************************************ 00:42:42.729 END TEST bdev_json_nonarray 00:42:42.729 ************************************ 00:42:42.729 11:54:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:42:42.729 11:54:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:42.729 11:54:17 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:42:42.729 11:54:17 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:42:42.729 11:54:17 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:42:42.729 00:42:42.729 real 0m31.894s 00:42:42.729 user 0m47.849s 00:42:42.729 sys 0m3.452s 00:42:42.729 11:54:17 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:42.729 ************************************ 00:42:42.729 11:54:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:42:42.729 END TEST blockdev_nvme 00:42:42.729 ************************************ 00:42:42.729 11:54:17 -- common/autotest_common.sh@1142 -- # return 0 00:42:42.729 11:54:17 -- spdk/autotest.sh@213 -- # uname -s 00:42:42.729 11:54:17 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:42:42.729 11:54:17 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:42:42.729 11:54:17 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:42.729 11:54:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:42.729 11:54:17 -- common/autotest_common.sh@10 -- # set +x 00:42:42.729 ************************************ 00:42:42.729 START TEST blockdev_nvme_gpt 00:42:42.729 ************************************ 00:42:42.729 11:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:42:42.729 * Looking for test storage... 
00:42:42.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=171041 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 171041 00:42:42.729 11:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:42:42.729 11:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@829 -- # '[' -z 171041 ']' 00:42:42.729 11:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:42.729 11:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:42.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:42.729 11:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
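From here the gpt suite drives everything through a freshly started spdk_tgt (pid 171041 in this run): the target is launched, waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers, and later steps go through rpc_cmd, a thin wrapper over scripts/rpc.py. A simplified launch-and-wait sketch; the real waitforlisten helper in test/common/autotest_common.sh is more careful, and polling rpc_get_methods here is just one plausible readiness check, not necessarily what it does internally:

    # Start the target and poll its RPC socket until it responds.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" &
    tgt_pid=$!
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "spdk_tgt ($tgt_pid) is listening on /var/tmp/spdk.sock"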
00:42:42.729 11:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:42.729 11:54:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:42.729 [2024-07-13 11:54:17.436538] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:42.729 [2024-07-13 11:54:17.436713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171041 ] 00:42:42.988 [2024-07-13 11:54:17.588738] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:43.247 [2024-07-13 11:54:17.749723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:43.814 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:43.814 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # return 0 00:42:43.814 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:42:43.814 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:42:43.814 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:42:44.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:42:44.072 Waiting for block devices as requested 00:42:44.072 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:42:44.072 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:42:44.072 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:42:44.072 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:42:44.072 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:42:44.072 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:42:44.072 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:42:44.072 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:44.072 11:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:42:44.072 BYT; 00:42:44.072 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:42:44.072 BYT; 00:42:44.072 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:42:44.072 11:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:42:45.005 11:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:42:45.005 11:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:42:45.005 11:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:42:45.005 11:54:19 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:42:45.005 11:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:42:45.005 11:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:42:45.939 The operation has completed successfully. 
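At this point the disk has been relabeled for the GPT tests: parted writes a fresh GPT with two half-size partitions, and sgdisk then stamps partition 1 with SPDK's partition-type GUID plus a fixed unique GUID (partition 2 gets the old-format SPDK GUID in the very next command). The same steps collected into one standalone sketch, using the exact GUIDs grep'd out of module/bdev/gpt/gpt.h in this trace; this is destructive to the target disk, so only point it at a scratch device:

    # Relabel a scratch NVMe disk the way the test does: GPT label, two equal
    # partitions, then SPDK partition-type GUIDs so the gpt vbdev module will
    # claim them as Nvme0n1p1/Nvme0n1p2.
    DEV=/dev/nvme0n1
    parted -s "$DEV" mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% \
        mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$DEV"
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$DEV"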
00:42:45.939 11:54:20 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:42:47.317 The operation has completed successfully. 00:42:47.317 11:54:21 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:42:47.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:42:47.576 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:42:48.515 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:42:48.515 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.515 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:48.773 [] 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:42:48.773 
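Both partitions now carry SPDK GUIDs, so the next stretch of the log wires the target up to the disk: gen_nvme.sh emits a bdev subsystem config containing a single bdev_nvme_attach_controller call for 0000:00:10.0, load_subsystem_config applies it, bdev_wait_for_examine gives the gpt module time to claim the partitions, and the bdev_get_bdevs dump that follows lists the resulting Nvme0n1p1/Nvme0n1p2 bdevs. A rough equivalent using scripts/rpc.py directly, assuming a target already listening on the default socket (names and PCI address taken from this trace):

    # Attach the controller by hand instead of via the generated config blob,
    # then wait for examine and list the bdevs.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    "$RPC" bdev_wait_for_examine
    "$RPC" bdev_get_bdevs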
11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:42:48.773 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:42:48.773 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:42:48.774 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:42:49.031 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:42:49.031 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:42:49.031 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:42:49.031 11:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 171041 00:42:49.031 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@948 -- # '[' -z 171041 ']' 00:42:49.031 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # kill -0 171041 00:42:49.031 11:54:23 blockdev_nvme_gpt -- 
common/autotest_common.sh@953 -- # uname 00:42:49.031 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:49.031 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 171041 00:42:49.031 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:49.031 killing process with pid 171041 00:42:49.031 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:49.031 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 171041' 00:42:49.031 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@967 -- # kill 171041 00:42:49.031 11:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # wait 171041 00:42:50.933 11:54:25 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:50.933 11:54:25 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:42:50.933 11:54:25 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:42:50.933 11:54:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:50.933 11:54:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:50.933 ************************************ 00:42:50.933 START TEST bdev_hello_world 00:42:50.933 ************************************ 00:42:50.933 11:54:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:42:50.933 [2024-07-13 11:54:25.399933] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:50.933 [2024-07-13 11:54:25.400348] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171586 ] 00:42:50.933 [2024-07-13 11:54:25.563599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:51.191 [2024-07-13 11:54:25.733640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:51.450 [2024-07-13 11:54:26.119310] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:42:51.450 [2024-07-13 11:54:26.119417] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:42:51.450 [2024-07-13 11:54:26.119466] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:42:51.450 [2024-07-13 11:54:26.123229] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:42:51.450 [2024-07-13 11:54:26.123838] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:42:51.450 [2024-07-13 11:54:26.123900] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:42:51.450 [2024-07-13 11:54:26.124182] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
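bdev_hello_world has just run the hello_bdev example against the first GPT partition: it opens Nvme0n1p1 through the same bdev.json, writes a buffer, reads it back, and the "Read string from bdev : Hello World!" notice above is the round-trip succeeding. As a standalone command (sketch; same paths and bdev name as in the trace):

    # Run the hello_bdev example against the first GPT partition.
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b Nvme0n1p1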
00:42:51.450 00:42:51.450 [2024-07-13 11:54:26.124260] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:42:52.384 00:42:52.384 real 0m1.733s 00:42:52.384 user 0m1.395s 00:42:52.384 sys 0m0.236s 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:52.384 ************************************ 00:42:52.384 END TEST bdev_hello_world 00:42:52.384 ************************************ 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:42:52.384 11:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:42:52.384 11:54:27 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:42:52.384 11:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:42:52.384 11:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:52.384 11:54:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:52.384 ************************************ 00:42:52.384 START TEST bdev_bounds 00:42:52.384 ************************************ 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=171624 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 171624' 00:42:52.384 Process bdevio pid: 171624 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 171624 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 171624 ']' 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:52.384 11:54:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:52.643 [2024-07-13 11:54:27.195724] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:42:52.643 [2024-07-13 11:54:27.195955] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171624 ] 00:42:52.643 [2024-07-13 11:54:27.377427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:42:52.929 [2024-07-13 11:54:27.538618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:52.929 [2024-07-13 11:54:27.538772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:52.929 [2024-07-13 11:54:27.538768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:42:53.549 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:53.549 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:42:53.549 11:54:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:42:53.549 I/O targets: 00:42:53.549 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:42:53.549 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:42:53.549 00:42:53.549 00:42:53.549 CUnit - A unit testing framework for C - Version 2.1-3 00:42:53.549 http://cunit.sourceforge.net/ 00:42:53.549 00:42:53.549 00:42:53.549 Suite: bdevio tests on: Nvme0n1p2 00:42:53.549 Test: blockdev write read block ...passed 00:42:53.550 Test: blockdev write zeroes read block ...passed 00:42:53.550 Test: blockdev write zeroes read no split ...passed 00:42:53.550 Test: blockdev write zeroes read split ...passed 00:42:53.550 Test: blockdev write zeroes read split partial ...passed 00:42:53.550 Test: blockdev reset ...[2024-07-13 11:54:28.191374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:42:53.550 [2024-07-13 11:54:28.194825] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:53.550 passed 00:42:53.550 Test: blockdev write read 8 blocks ...passed 00:42:53.550 Test: blockdev write read size > 128k ...passed 00:42:53.550 Test: blockdev write read invalid size ...passed 00:42:53.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:53.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:53.550 Test: blockdev write read max offset ...passed 00:42:53.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:53.550 Test: blockdev writev readv 8 blocks ...passed 00:42:53.550 Test: blockdev writev readv 30 x 1block ...passed 00:42:53.550 Test: blockdev writev readv block ...passed 00:42:53.550 Test: blockdev writev readv size > 128k ...passed 00:42:53.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:53.550 Test: blockdev comparev and writev ...[2024-07-13 11:54:28.204351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0xb7a0d000 len:0x1000 00:42:53.550 [2024-07-13 11:54:28.204812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:53.550 passed 00:42:53.550 Test: blockdev nvme passthru rw ...passed 00:42:53.550 Test: blockdev nvme passthru vendor specific ...passed 00:42:53.550 Test: blockdev nvme admin passthru ...passed 00:42:53.550 Test: blockdev copy ...passed 00:42:53.550 Suite: bdevio tests on: Nvme0n1p1 00:42:53.550 Test: blockdev write read block ...passed 00:42:53.550 Test: blockdev write zeroes read block ...passed 00:42:53.550 Test: blockdev write zeroes read no split ...passed 00:42:53.550 Test: blockdev write zeroes read split ...passed 00:42:53.550 Test: blockdev write zeroes read split partial ...passed 00:42:53.550 Test: blockdev reset ...[2024-07-13 11:54:28.249907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:42:53.550 [2024-07-13 11:54:28.253068] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:42:53.550 passed 00:42:53.550 Test: blockdev write read 8 blocks ...passed 00:42:53.550 Test: blockdev write read size > 128k ...passed 00:42:53.550 Test: blockdev write read invalid size ...passed 00:42:53.550 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:42:53.550 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:42:53.550 Test: blockdev write read max offset ...passed 00:42:53.550 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:42:53.550 Test: blockdev writev readv 8 blocks ...passed 00:42:53.550 Test: blockdev writev readv 30 x 1block ...passed 00:42:53.550 Test: blockdev writev readv block ...passed 00:42:53.550 Test: blockdev writev readv size > 128k ...passed 00:42:53.550 Test: blockdev writev readv size > 128k in two iovs ...passed 00:42:53.550 Test: blockdev comparev and writev ...[2024-07-13 11:54:28.261991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0xb7a09000 len:0x1000 00:42:53.550 [2024-07-13 11:54:28.262209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:42:53.550 passed 00:42:53.550 Test: blockdev nvme passthru rw ...passed 00:42:53.550 Test: blockdev nvme passthru vendor specific ...passed 00:42:53.550 Test: blockdev nvme admin passthru ...passed 00:42:53.550 Test: blockdev copy ...passed 00:42:53.550 00:42:53.550 Run Summary: Type Total Ran Passed Failed Inactive 00:42:53.550 suites 2 2 n/a 0 0 00:42:53.550 tests 46 46 46 0 0 00:42:53.550 asserts 284 284 284 0 n/a 00:42:53.550 00:42:53.550 Elapsed time = 0.322 seconds 00:42:53.550 0 00:42:53.550 11:54:28 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 171624 00:42:53.550 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 171624 ']' 00:42:53.550 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 171624 00:42:53.550 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:42:53.550 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:42:53.550 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 171624 00:42:53.821 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:42:53.821 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:42:53.821 killing process with pid 171624 00:42:53.821 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 171624' 00:42:53.821 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@967 -- # kill 171624 00:42:53.821 11:54:28 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # wait 171624 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:42:54.755 00:42:54.755 real 0m2.257s 00:42:54.755 user 0m5.197s 00:42:54.755 sys 0m0.393s 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:42:54.755 ************************************ 00:42:54.755 END TEST bdev_bounds 00:42:54.755 ************************************ 00:42:54.755 11:54:29 blockdev_nvme_gpt -- 
common/autotest_common.sh@1142 -- # return 0 00:42:54.755 11:54:29 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:42:54.755 11:54:29 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:42:54.755 11:54:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:42:54.755 11:54:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:42:54.755 ************************************ 00:42:54.755 START TEST bdev_nbd 00:42:54.755 ************************************ 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=171694 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 171694 /var/tmp/spdk-nbd.sock 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 171694 ']' 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:42:54.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
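This bdev_nbd test starts a bdev_svc app with its own RPC socket (/var/tmp/spdk-nbd.sock) and, in the lines that follow, exports the two GPT partitions as kernel block devices: nbd_start_disk maps Nvme0n1p1/Nvme0n1p2 to /dev/nbd0 and /dev/nbd1, dd sanity-reads one 4 KiB block from each, nbd_get_disks reports the mapping, and nbd_stop_disk tears it down again. The same round-trip as direct rpc.py calls, assuming a target already listening on that socket and the nbd kernel module loaded:

    # Export two bdevs over NBD, read a block from each, then detach them.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_start_disk Nvme0n1p1 /dev/nbd0
    $RPC nbd_start_disk Nvme0n1p2 /dev/nbd1
    $RPC nbd_get_disks
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
    dd if=/dev/nbd1 of=/dev/null bs=4096 count=1 iflag=direct
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1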
00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:42:54.755 11:54:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:42:54.755 [2024-07-13 11:54:29.504875] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:42:54.755 [2024-07-13 11:54:29.505085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:55.013 [2024-07-13 11:54:29.676838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:55.272 [2024-07-13 11:54:29.871473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:42:55.838 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@882 -- # (( i <= 20 )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:56.097 1+0 records in 00:42:56.097 1+0 records out 00:42:56.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039568 s, 10.4 MB/s 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:42:56.097 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:56.355 1+0 records in 00:42:56.355 1+0 records out 00:42:56.355 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000765538 s, 5.4 MB/s 00:42:56.355 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:56.355 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:42:56.355 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:56.355 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:42:56.355 11:54:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:42:56.355 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:42:56.355 11:54:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:42:56.355 11:54:30 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:42:56.355 { 00:42:56.355 "nbd_device": "/dev/nbd0", 00:42:56.355 "bdev_name": "Nvme0n1p1" 00:42:56.355 }, 00:42:56.355 { 00:42:56.355 "nbd_device": "/dev/nbd1", 00:42:56.355 "bdev_name": "Nvme0n1p2" 00:42:56.355 } 00:42:56.355 ]' 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:42:56.355 { 00:42:56.355 "nbd_device": "/dev/nbd0", 00:42:56.355 "bdev_name": "Nvme0n1p1" 00:42:56.355 }, 00:42:56.355 { 00:42:56.355 "nbd_device": "/dev/nbd1", 00:42:56.355 "bdev_name": "Nvme0n1p2" 00:42:56.355 } 00:42:56.355 ]' 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:56.355 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:56.614 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:56.614 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:56.614 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:56.614 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:56.614 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:56.614 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:56.614 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:56.871 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:57.129 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:57.387 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:57.387 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:57.387 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:57.387 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:57.388 11:54:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:42:57.646 /dev/nbd0 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:57.646 1+0 records in 00:42:57.646 1+0 records out 00:42:57.646 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000828708 s, 4.9 MB/s 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:57.646 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:42:57.905 /dev/nbd1 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:42:57.905 1+0 records in 00:42:57.905 1+0 records out 00:42:57.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555312 s, 7.4 MB/s 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:57.905 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:42:58.163 { 00:42:58.163 "nbd_device": "/dev/nbd0", 00:42:58.163 "bdev_name": "Nvme0n1p1" 00:42:58.163 }, 00:42:58.163 { 00:42:58.163 "nbd_device": "/dev/nbd1", 00:42:58.163 "bdev_name": "Nvme0n1p2" 00:42:58.163 } 00:42:58.163 ]' 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:42:58.163 { 00:42:58.163 "nbd_device": "/dev/nbd0", 00:42:58.163 "bdev_name": "Nvme0n1p1" 00:42:58.163 }, 00:42:58.163 { 00:42:58.163 "nbd_device": "/dev/nbd1", 00:42:58.163 "bdev_name": "Nvme0n1p2" 00:42:58.163 } 00:42:58.163 ]' 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:42:58.163 /dev/nbd1' 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:42:58.163 /dev/nbd1' 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:42:58.163 11:54:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:42:58.163 256+0 records in 00:42:58.163 256+0 records out 00:42:58.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00747618 s, 140 MB/s 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:42:58.163 256+0 records in 00:42:58.163 256+0 records out 00:42:58.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100913 s, 10.4 MB/s 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:42:58.163 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:42:58.421 256+0 records in 00:42:58.421 256+0 records out 00:42:58.421 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0802041 s, 13.1 MB/s 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:42:58.421 11:54:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:58.421 11:54:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:42:58.679 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:58.937 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:42:59.195 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:42:59.195 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:42:59.195 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:42:59.195 11:54:33 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:42:59.195 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:42:59.196 11:54:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:42:59.762 malloc_lvol_verify 00:42:59.762 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:42:59.762 bec0b49e-529f-4e20-b3d9-9a57950e7b39 00:42:59.762 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:00.021 e7f11ab1-b9ee-40fd-826b-4d3aae979bae 00:43:00.021 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:00.279 /dev/nbd0 00:43:00.279 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:43:00.279 mke2fs 1.45.5 (07-Jan-2020) 00:43:00.279 00:43:00.279 Filesystem too small for a journal 00:43:00.279 Creating filesystem with 1024 4k blocks and 1024 inodes 00:43:00.279 00:43:00.279 Allocating group tables: 0/1 done 00:43:00.279 Writing inode tables: 0/1 done 00:43:00.279 Writing superblocks and filesystem accounting information: 0/1 done 00:43:00.279 00:43:00.279 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:43:00.279 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:00.279 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:00.279 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:43:00.279 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:00.279 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:00.279 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:00.279 11:54:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:43:00.538 11:54:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 171694 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 171694 ']' 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 171694 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 171694 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 171694' 00:43:00.539 killing process with pid 171694 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@967 -- # kill 171694 00:43:00.539 11:54:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # wait 171694 00:43:01.477 11:54:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:43:01.477 00:43:01.477 real 0m6.795s 00:43:01.477 user 0m9.425s 00:43:01.477 sys 0m1.459s 00:43:01.477 11:54:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:01.737 11:54:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:01.737 ************************************ 00:43:01.737 END TEST bdev_nbd 00:43:01.737 ************************************ 00:43:01.737 11:54:36 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:43:01.737 11:54:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:43:01.737 11:54:36 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:43:01.737 11:54:36 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:43:01.737 skipping fio tests on NVMe 
due to multi-ns failures. 00:43:01.737 11:54:36 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:43:01.737 11:54:36 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:01.737 11:54:36 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:01.737 11:54:36 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:43:01.737 11:54:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:01.737 11:54:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:01.737 ************************************ 00:43:01.737 START TEST bdev_verify 00:43:01.737 ************************************ 00:43:01.737 11:54:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:01.737 [2024-07-13 11:54:36.353805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:43:01.737 [2024-07-13 11:54:36.354812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171963 ] 00:43:01.996 [2024-07-13 11:54:36.527946] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:01.996 [2024-07-13 11:54:36.721754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:01.996 [2024-07-13 11:54:36.721766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.564 Running I/O for 5 seconds... 
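For reference, the bdev_verify stage traced above reduces to a single bdevperf invocation against the generated bdev.json; the sketch below reproduces that command from the trace, with flag meanings annotated as assumptions rather than taken from the log. Paths reflect this CI workspace and would differ elsewhere. The Latency table that follows summarizes its result.

    # bdevperf read-back-and-compare run over the two GPT partitions described in bdev.json
    #   -q 128       queue depth per job
    #   -o 4096      I/O size in bytes
    #   -w verify    write a pattern, read it back and compare
    #   -t 5         run time in seconds
    #   -m 0x3       core mask: two reactors (cores 0 and 1), hence two jobs per bdev in the table
    #   -C           passed through unchanged from the test wrapper
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3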
00:43:07.835 00:43:07.835 Latency(us) 00:43:07.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:07.835 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:07.835 Verification LBA range: start 0x0 length 0x4ff80 00:43:07.835 Nvme0n1p1 : 5.03 4252.34 16.61 0.00 0.00 30032.99 2829.96 23950.43 00:43:07.835 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:07.835 Verification LBA range: start 0x4ff80 length 0x4ff80 00:43:07.835 Nvme0n1p1 : 5.02 4105.02 16.04 0.00 0.00 31099.92 4796.04 35031.97 00:43:07.835 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:07.835 Verification LBA range: start 0x0 length 0x4ff7f 00:43:07.835 Nvme0n1p2 : 5.03 4251.17 16.61 0.00 0.00 29995.22 2636.33 25022.84 00:43:07.835 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:07.835 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:43:07.835 Nvme0n1p2 : 5.02 4103.44 16.03 0.00 0.00 31049.21 4289.63 36461.85 00:43:07.835 =================================================================================================================== 00:43:07.835 Total : 16711.95 65.28 0.00 0.00 30534.64 2636.33 36461.85 00:43:08.770 ************************************ 00:43:08.770 END TEST bdev_verify 00:43:08.770 ************************************ 00:43:08.770 00:43:08.770 real 0m7.130s 00:43:08.770 user 0m13.017s 00:43:08.770 sys 0m0.302s 00:43:08.770 11:54:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:08.770 11:54:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:43:08.770 11:54:43 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:43:08.770 11:54:43 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:08.770 11:54:43 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:43:08.770 11:54:43 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:08.770 11:54:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:08.770 ************************************ 00:43:08.770 START TEST bdev_verify_big_io 00:43:08.770 ************************************ 00:43:08.770 11:54:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:09.029 [2024-07-13 11:54:43.543495] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:43:09.029 [2024-07-13 11:54:43.544050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172088 ] 00:43:09.029 [2024-07-13 11:54:43.719778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:09.287 [2024-07-13 11:54:43.910205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:09.287 [2024-07-13 11:54:43.910213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:09.853 Running I/O for 5 seconds... 
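A quick sanity check on the throughput columns in these tables: MiB/s is simply IOPS multiplied by the I/O size. The snippet below is a hypothetical helper, not part of the test suite; the numbers come from the 4 KiB verify table above, and the same check applies to the 64 KiB results that follow.

    # throughput = IOPS * io_size / 2^20
    iops=4252.34; io_size=4096   # Nvme0n1p1 row from the 4 KiB verify run
    awk -v iops="$iops" -v sz="$io_size" \
        'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
    # prints 16.61 MiB/s, matching the MiB/s column for that row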
00:43:15.122 00:43:15.122 Latency(us) 00:43:15.122 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:15.122 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:15.122 Verification LBA range: start 0x0 length 0x4ff8 00:43:15.122 Nvme0n1p1 : 5.12 671.88 41.99 0.00 0.00 187866.84 3961.95 183024.17 00:43:15.122 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:15.122 Verification LBA range: start 0x4ff8 length 0x4ff8 00:43:15.122 Nvme0n1p1 : 5.15 571.22 35.70 0.00 0.00 220374.29 15728.64 253564.74 00:43:15.122 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:15.122 Verification LBA range: start 0x0 length 0x4ff7 00:43:15.122 Nvme0n1p2 : 5.12 660.25 41.27 0.00 0.00 187797.18 4081.11 182070.92 00:43:15.122 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:15.122 Verification LBA range: start 0x4ff7 length 0x4ff7 00:43:15.122 Nvme0n1p2 : 5.16 578.48 36.16 0.00 0.00 212117.88 886.23 181117.67 00:43:15.122 =================================================================================================================== 00:43:15.122 Total : 2481.84 155.11 0.00 0.00 201033.21 886.23 253564.74 00:43:16.498 00:43:16.498 real 0m7.605s 00:43:16.498 user 0m13.991s 00:43:16.498 sys 0m0.293s 00:43:16.498 11:54:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:16.498 11:54:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:43:16.498 ************************************ 00:43:16.498 END TEST bdev_verify_big_io 00:43:16.498 ************************************ 00:43:16.498 11:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:43:16.498 11:54:51 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:16.498 11:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:16.498 11:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:16.498 11:54:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:16.498 ************************************ 00:43:16.498 START TEST bdev_write_zeroes 00:43:16.498 ************************************ 00:43:16.498 11:54:51 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:16.498 [2024-07-13 11:54:51.190075] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:43:16.498 [2024-07-13 11:54:51.190766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172209 ] 00:43:16.756 [2024-07-13 11:54:51.365055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:17.015 [2024-07-13 11:54:51.580289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:17.273 Running I/O for 1 seconds... 
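The bdev_write_zeroes pass, whose one-second results follow, reuses the same bdevperf harness; only the workload and duration change, and a single reactor on core 0 is enough. A minimal sketch, assuming the same workspace layout as above:

    # -w write_zeroes issues Write Zeroes commands instead of the read-back verification used earlier
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1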
00:43:18.645 00:43:18.645 Latency(us) 00:43:18.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:18.645 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:18.645 Nvme0n1p1 : 1.00 28043.49 109.54 0.00 0.00 4554.75 2115.03 14477.50 00:43:18.645 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:18.645 Nvme0n1p2 : 1.01 27980.88 109.30 0.00 0.00 4558.43 2368.23 14120.03 00:43:18.645 =================================================================================================================== 00:43:18.645 Total : 56024.37 218.85 0.00 0.00 4556.59 2115.03 14477.50 00:43:19.578 ************************************ 00:43:19.578 END TEST bdev_write_zeroes 00:43:19.578 ************************************ 00:43:19.578 00:43:19.578 real 0m2.906s 00:43:19.578 user 0m2.497s 00:43:19.578 sys 0m0.309s 00:43:19.578 11:54:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:19.578 11:54:54 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:43:19.578 11:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:43:19.578 11:54:54 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:19.578 11:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:19.578 11:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:19.578 11:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:19.578 ************************************ 00:43:19.578 START TEST bdev_json_nonenclosed 00:43:19.578 ************************************ 00:43:19.578 11:54:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:19.578 [2024-07-13 11:54:54.165974] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:43:19.578 [2024-07-13 11:54:54.166363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172259 ] 00:43:19.836 [2024-07-13 11:54:54.339461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:19.836 [2024-07-13 11:54:54.536781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:19.836 [2024-07-13 11:54:54.537267] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
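The "not enclosed in {}" error above is the expected outcome: bdev_json_nonenclosed is a negative test that feeds bdevperf a config whose top level is not a JSON object and requires the load to fail. The sketch below shows the shape of that check, not the actual autotest plumbing; 234 is the exit status recorded a little further down in this trace.

    # deliberately malformed config: the run must fail, so reaching the exit would be the bug
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1 '' && exit 1
    # the test wrapper captures es=234 and treats that non-zero status as a pass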
00:43:19.836 [2024-07-13 11:54:54.537439] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:19.836 [2024-07-13 11:54:54.537560] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:20.404 ************************************ 00:43:20.404 END TEST bdev_json_nonenclosed 00:43:20.404 ************************************ 00:43:20.404 00:43:20.404 real 0m0.791s 00:43:20.404 user 0m0.521s 00:43:20.404 sys 0m0.169s 00:43:20.404 11:54:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:43:20.404 11:54:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:20.404 11:54:54 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:43:20.404 11:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:43:20.404 11:54:54 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # true 00:43:20.404 11:54:54 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:20.404 11:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:43:20.404 11:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:20.404 11:54:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:20.404 ************************************ 00:43:20.404 START TEST bdev_json_nonarray 00:43:20.404 ************************************ 00:43:20.404 11:54:54 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:20.404 [2024-07-13 11:54:54.994656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:43:20.404 [2024-07-13 11:54:54.994979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172298 ] 00:43:20.404 [2024-07-13 11:54:55.148857] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:20.663 [2024-07-13 11:54:55.339744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:20.663 [2024-07-13 11:54:55.340170] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
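Taken together, the two parser errors above pin down the shape json_config expects: a top-level JSON object whose "subsystems" key holds an array. The sketch below is illustrative only; the file name is hypothetical, and a real config, such as the bdev.json used earlier, lists subsystem entries inside that array.

    # expected top-level shape (subsystem entries elided):
    #   { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }
    printf '{\n  "subsystems": []\n}\n' > /tmp/minimal_spdk_config.json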
00:43:20.663 [2024-07-13 11:54:55.340331] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:20.663 [2024-07-13 11:54:55.340388] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:21.231 ************************************ 00:43:21.231 END TEST bdev_json_nonarray 00:43:21.231 ************************************ 00:43:21.231 00:43:21.231 real 0m0.747s 00:43:21.231 user 0m0.537s 00:43:21.231 sys 0m0.108s 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:43:21.231 11:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 234 00:43:21.231 11:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # true 00:43:21.231 11:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:43:21.231 11:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:43:21.231 11:54:55 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:43:21.231 11:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:21.231 11:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:21.231 11:54:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:21.231 ************************************ 00:43:21.231 START TEST bdev_gpt_uuid 00:43:21.231 ************************************ 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1123 -- # bdev_gpt_uuid 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=172330 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 172330 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@829 -- # '[' -z 172330 ']' 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # local max_retries=100 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:21.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # xtrace_disable 00:43:21.231 11:54:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:21.231 [2024-07-13 11:54:55.826164] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:43:21.231 [2024-07-13 11:54:55.826408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172330 ] 00:43:21.490 [2024-07-13 11:54:55.995262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:21.490 [2024-07-13 11:54:56.193809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.426 11:54:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:43:22.426 11:54:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # return 0 00:43:22.426 11:54:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:22.426 11:54:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:22.426 11:54:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:22.426 Some configs were skipped because the RPC state that can call them passed over. 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:43:22.426 { 00:43:22.426 "name": "Nvme0n1p1", 00:43:22.426 "aliases": [ 00:43:22.426 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:43:22.426 ], 00:43:22.426 "product_name": "GPT Disk", 00:43:22.426 "block_size": 4096, 00:43:22.426 "num_blocks": 655104, 00:43:22.426 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:43:22.426 "assigned_rate_limits": { 00:43:22.426 "rw_ios_per_sec": 0, 00:43:22.426 "rw_mbytes_per_sec": 0, 00:43:22.426 "r_mbytes_per_sec": 0, 00:43:22.426 "w_mbytes_per_sec": 0 00:43:22.426 }, 00:43:22.426 "claimed": false, 00:43:22.426 "zoned": false, 00:43:22.426 "supported_io_types": { 00:43:22.426 "read": true, 00:43:22.426 "write": true, 00:43:22.426 "unmap": true, 00:43:22.426 "flush": true, 00:43:22.426 "reset": true, 00:43:22.426 "nvme_admin": false, 00:43:22.426 "nvme_io": false, 00:43:22.426 "nvme_io_md": false, 00:43:22.426 "write_zeroes": true, 00:43:22.426 "zcopy": false, 00:43:22.426 "get_zone_info": false, 00:43:22.426 "zone_management": false, 00:43:22.426 "zone_append": false, 00:43:22.426 "compare": true, 00:43:22.426 "compare_and_write": false, 00:43:22.426 "abort": true, 00:43:22.426 "seek_hole": false, 00:43:22.426 "seek_data": false, 00:43:22.426 "copy": true, 00:43:22.426 "nvme_iov_md": false 00:43:22.426 }, 00:43:22.426 "driver_specific": { 
00:43:22.426 "gpt": { 00:43:22.426 "base_bdev": "Nvme0n1", 00:43:22.426 "offset_blocks": 256, 00:43:22.426 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:43:22.426 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:43:22.426 "partition_name": "SPDK_TEST_first" 00:43:22.426 } 00:43:22.426 } 00:43:22.426 } 00:43:22.426 ]' 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:43:22.426 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:43:22.687 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:43:22.687 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:43:22.687 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:22.687 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:22.687 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:22.687 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:43:22.687 { 00:43:22.687 "name": "Nvme0n1p2", 00:43:22.687 "aliases": [ 00:43:22.687 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:43:22.687 ], 00:43:22.687 "product_name": "GPT Disk", 00:43:22.687 "block_size": 4096, 00:43:22.687 "num_blocks": 655103, 00:43:22.687 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:43:22.687 "assigned_rate_limits": { 00:43:22.687 "rw_ios_per_sec": 0, 00:43:22.687 "rw_mbytes_per_sec": 0, 00:43:22.687 "r_mbytes_per_sec": 0, 00:43:22.687 "w_mbytes_per_sec": 0 00:43:22.687 }, 00:43:22.687 "claimed": false, 00:43:22.687 "zoned": false, 00:43:22.688 "supported_io_types": { 00:43:22.688 "read": true, 00:43:22.688 "write": true, 00:43:22.688 "unmap": true, 00:43:22.688 "flush": true, 00:43:22.688 "reset": true, 00:43:22.688 "nvme_admin": false, 00:43:22.688 "nvme_io": false, 00:43:22.688 "nvme_io_md": false, 00:43:22.688 "write_zeroes": true, 00:43:22.688 "zcopy": false, 00:43:22.688 "get_zone_info": false, 00:43:22.688 "zone_management": false, 00:43:22.688 "zone_append": false, 00:43:22.688 "compare": true, 00:43:22.688 "compare_and_write": false, 00:43:22.688 "abort": true, 00:43:22.688 "seek_hole": false, 00:43:22.688 "seek_data": false, 00:43:22.688 "copy": true, 00:43:22.688 "nvme_iov_md": false 00:43:22.688 }, 00:43:22.688 "driver_specific": { 00:43:22.688 "gpt": { 00:43:22.688 "base_bdev": "Nvme0n1", 00:43:22.688 "offset_blocks": 655360, 00:43:22.688 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:43:22.688 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:43:22.688 "partition_name": "SPDK_TEST_second" 00:43:22.688 } 00:43:22.688 } 00:43:22.688 } 00:43:22.688 ]' 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 172330 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@948 -- # '[' -z 172330 ']' 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # kill -0 172330 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # uname 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172330 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:43:22.688 killing process with pid 172330 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172330' 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@967 -- # kill 172330 00:43:22.688 11:54:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # wait 172330 00:43:24.619 00:43:24.619 real 0m3.620s 00:43:24.619 user 0m3.773s 00:43:24.619 sys 0m0.542s 00:43:24.619 ************************************ 00:43:24.619 END TEST bdev_gpt_uuid 00:43:24.619 ************************************ 00:43:24.619 11:54:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:24.619 11:54:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:43:24.877 11:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1142 -- # return 0 00:43:24.877 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:43:24.877 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:43:24.877 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:43:24.877 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:43:24.877 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:24.877 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:43:24.877 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:43:24.877 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:43:24.877 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:43:25.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:25.135 Waiting for block devices as requested 00:43:25.135 
0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:43:25.135 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:43:25.135 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:43:25.393 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:43:25.393 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:43:25.393 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:43:25.393 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:43:25.393 11:54:59 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:43:25.393 00:43:25.393 real 0m42.609s 00:43:25.393 user 0m59.465s 00:43:25.393 sys 0m6.203s 00:43:25.393 ************************************ 00:43:25.393 END TEST blockdev_nvme_gpt 00:43:25.393 ************************************ 00:43:25.393 11:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:25.393 11:54:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:43:25.393 11:54:59 -- common/autotest_common.sh@1142 -- # return 0 00:43:25.393 11:54:59 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:43:25.393 11:54:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:25.393 11:54:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:25.393 11:54:59 -- common/autotest_common.sh@10 -- # set +x 00:43:25.393 ************************************ 00:43:25.393 START TEST nvme 00:43:25.393 ************************************ 00:43:25.393 11:54:59 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:43:25.393 * Looking for test storage... 00:43:25.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:43:25.393 11:55:00 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:43:25.651 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:25.910 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:43:27.286 11:55:01 nvme -- nvme/nvme.sh@79 -- # uname 00:43:27.286 11:55:01 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:43:27.286 11:55:01 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:43:27.286 11:55:01 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:43:27.286 11:55:01 nvme -- common/autotest_common.sh@1080 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:43:27.286 11:55:01 nvme -- common/autotest_common.sh@1066 -- # _randomize_va_space=2 00:43:27.286 11:55:01 nvme -- common/autotest_common.sh@1067 -- # echo 0 00:43:27.286 11:55:01 nvme -- common/autotest_common.sh@1069 -- # stubpid=172793 00:43:27.286 Waiting for stub to ready for secondary processes... 00:43:27.286 11:55:01 nvme -- common/autotest_common.sh@1070 -- # echo Waiting for stub to ready for secondary processes... 00:43:27.286 11:55:01 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:43:27.286 11:55:01 nvme -- common/autotest_common.sh@1068 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:43:27.286 11:55:01 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/172793 ]] 00:43:27.286 11:55:01 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:43:27.286 [2024-07-13 11:55:01.679709] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:43:27.286 [2024-07-13 11:55:01.679950] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:43:28.221 11:55:02 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:43:28.221 11:55:02 nvme -- common/autotest_common.sh@1073 -- # [[ -e /proc/172793 ]] 00:43:28.221 11:55:02 nvme -- common/autotest_common.sh@1074 -- # sleep 1s 00:43:28.221 [2024-07-13 11:55:02.899488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:28.479 [2024-07-13 11:55:03.105832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:28.479 [2024-07-13 11:55:03.105963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:43:28.479 [2024-07-13 11:55:03.105968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:28.479 [2024-07-13 11:55:03.114188] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:43:28.479 [2024-07-13 11:55:03.114311] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:43:28.479 [2024-07-13 11:55:03.122302] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:43:28.479 [2024-07-13 11:55:03.122588] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:43:29.045 11:55:03 nvme -- common/autotest_common.sh@1071 -- # '[' -e /var/run/spdk_stub0 ']' 00:43:29.045 11:55:03 nvme -- common/autotest_common.sh@1076 -- # echo done. 00:43:29.045 done. 00:43:29.045 11:55:03 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:43:29.045 11:55:03 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:43:29.045 11:55:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:29.045 11:55:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:29.045 ************************************ 00:43:29.045 START TEST nvme_reset 00:43:29.045 ************************************ 00:43:29.045 11:55:03 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:43:29.303 Initializing NVMe Controllers 00:43:29.303 Skipping QEMU NVMe SSD at 0000:00:10.0 00:43:29.303 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:43:29.303 00:43:29.303 real 0m0.316s 00:43:29.303 user 0m0.106s 00:43:29.303 sys 0m0.138s 00:43:29.303 11:55:03 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:29.303 ************************************ 00:43:29.303 END TEST nvme_reset 00:43:29.303 ************************************ 00:43:29.303 11:55:03 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:43:29.303 11:55:03 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:29.303 11:55:03 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:43:29.303 11:55:03 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:29.303 11:55:03 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:29.303 11:55:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:29.303 ************************************ 00:43:29.303 START TEST nvme_identify 00:43:29.303 ************************************ 00:43:29.303 
11:55:04 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:43:29.303 11:55:04 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:43:29.303 11:55:04 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:43:29.303 11:55:04 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:43:29.303 11:55:04 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:43:29.303 11:55:04 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:43:29.303 11:55:04 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:43:29.304 11:55:04 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:29.304 11:55:04 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:29.304 11:55:04 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:43:29.562 11:55:04 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:43:29.562 11:55:04 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:43:29.562 11:55:04 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:43:29.821 [2024-07-13 11:55:04.322909] nvme_ctrlr.c:3604:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 172825 terminated unexpected 00:43:29.821 ===================================================== 00:43:29.821 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:29.821 ===================================================== 00:43:29.821 Controller Capabilities/Features 00:43:29.821 ================================ 00:43:29.821 Vendor ID: 1b36 00:43:29.821 Subsystem Vendor ID: 1af4 00:43:29.821 Serial Number: 12340 00:43:29.821 Model Number: QEMU NVMe Ctrl 00:43:29.821 Firmware Version: 8.0.0 00:43:29.821 Recommended Arb Burst: 6 00:43:29.821 IEEE OUI Identifier: 00 54 52 00:43:29.821 Multi-path I/O 00:43:29.821 May have multiple subsystem ports: No 00:43:29.821 May have multiple controllers: No 00:43:29.821 Associated with SR-IOV VF: No 00:43:29.821 Max Data Transfer Size: 524288 00:43:29.821 Max Number of Namespaces: 256 00:43:29.821 Max Number of I/O Queues: 64 00:43:29.821 NVMe Specification Version (VS): 1.4 00:43:29.821 NVMe Specification Version (Identify): 1.4 00:43:29.821 Maximum Queue Entries: 2048 00:43:29.821 Contiguous Queues Required: Yes 00:43:29.821 Arbitration Mechanisms Supported 00:43:29.821 Weighted Round Robin: Not Supported 00:43:29.821 Vendor Specific: Not Supported 00:43:29.821 Reset Timeout: 7500 ms 00:43:29.821 Doorbell Stride: 4 bytes 00:43:29.821 NVM Subsystem Reset: Not Supported 00:43:29.821 Command Sets Supported 00:43:29.821 NVM Command Set: Supported 00:43:29.821 Boot Partition: Not Supported 00:43:29.821 Memory Page Size Minimum: 4096 bytes 00:43:29.821 Memory Page Size Maximum: 65536 bytes 00:43:29.821 Persistent Memory Region: Not Supported 00:43:29.821 Optional Asynchronous Events Supported 00:43:29.821 Namespace Attribute Notices: Supported 00:43:29.821 Firmware Activation Notices: Not Supported 00:43:29.821 ANA Change Notices: Not Supported 00:43:29.821 PLE Aggregate Log Change Notices: Not Supported 00:43:29.821 LBA Status Info Alert Notices: Not Supported 00:43:29.821 EGE Aggregate Log Change Notices: Not Supported 00:43:29.821 Normal NVM Subsystem Shutdown event: Not Supported 00:43:29.821 Zone Descriptor Change Notices: Not Supported 00:43:29.821 
Discovery Log Change Notices: Not Supported 00:43:29.821 Controller Attributes 00:43:29.821 128-bit Host Identifier: Not Supported 00:43:29.821 Non-Operational Permissive Mode: Not Supported 00:43:29.821 NVM Sets: Not Supported 00:43:29.821 Read Recovery Levels: Not Supported 00:43:29.821 Endurance Groups: Not Supported 00:43:29.821 Predictable Latency Mode: Not Supported 00:43:29.821 Traffic Based Keep ALive: Not Supported 00:43:29.821 Namespace Granularity: Not Supported 00:43:29.821 SQ Associations: Not Supported 00:43:29.821 UUID List: Not Supported 00:43:29.821 Multi-Domain Subsystem: Not Supported 00:43:29.821 Fixed Capacity Management: Not Supported 00:43:29.821 Variable Capacity Management: Not Supported 00:43:29.821 Delete Endurance Group: Not Supported 00:43:29.821 Delete NVM Set: Not Supported 00:43:29.821 Extended LBA Formats Supported: Supported 00:43:29.821 Flexible Data Placement Supported: Not Supported 00:43:29.821 00:43:29.821 Controller Memory Buffer Support 00:43:29.821 ================================ 00:43:29.821 Supported: No 00:43:29.821 00:43:29.821 Persistent Memory Region Support 00:43:29.821 ================================ 00:43:29.821 Supported: No 00:43:29.821 00:43:29.821 Admin Command Set Attributes 00:43:29.821 ============================ 00:43:29.821 Security Send/Receive: Not Supported 00:43:29.821 Format NVM: Supported 00:43:29.821 Firmware Activate/Download: Not Supported 00:43:29.821 Namespace Management: Supported 00:43:29.821 Device Self-Test: Not Supported 00:43:29.821 Directives: Supported 00:43:29.821 NVMe-MI: Not Supported 00:43:29.821 Virtualization Management: Not Supported 00:43:29.821 Doorbell Buffer Config: Supported 00:43:29.821 Get LBA Status Capability: Not Supported 00:43:29.821 Command & Feature Lockdown Capability: Not Supported 00:43:29.821 Abort Command Limit: 4 00:43:29.821 Async Event Request Limit: 4 00:43:29.821 Number of Firmware Slots: N/A 00:43:29.821 Firmware Slot 1 Read-Only: N/A 00:43:29.821 Firmware Activation Without Reset: N/A 00:43:29.821 Multiple Update Detection Support: N/A 00:43:29.821 Firmware Update Granularity: No Information Provided 00:43:29.821 Per-Namespace SMART Log: Yes 00:43:29.821 Asymmetric Namespace Access Log Page: Not Supported 00:43:29.821 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:43:29.821 Command Effects Log Page: Supported 00:43:29.821 Get Log Page Extended Data: Supported 00:43:29.821 Telemetry Log Pages: Not Supported 00:43:29.821 Persistent Event Log Pages: Not Supported 00:43:29.821 Supported Log Pages Log Page: May Support 00:43:29.821 Commands Supported & Effects Log Page: Not Supported 00:43:29.821 Feature Identifiers & Effects Log Page:May Support 00:43:29.821 NVMe-MI Commands & Effects Log Page: May Support 00:43:29.821 Data Area 4 for Telemetry Log: Not Supported 00:43:29.821 Error Log Page Entries Supported: 1 00:43:29.821 Keep Alive: Not Supported 00:43:29.821 00:43:29.821 NVM Command Set Attributes 00:43:29.821 ========================== 00:43:29.821 Submission Queue Entry Size 00:43:29.821 Max: 64 00:43:29.821 Min: 64 00:43:29.821 Completion Queue Entry Size 00:43:29.821 Max: 16 00:43:29.821 Min: 16 00:43:29.821 Number of Namespaces: 256 00:43:29.821 Compare Command: Supported 00:43:29.821 Write Uncorrectable Command: Not Supported 00:43:29.821 Dataset Management Command: Supported 00:43:29.821 Write Zeroes Command: Supported 00:43:29.821 Set Features Save Field: Supported 00:43:29.821 Reservations: Not Supported 00:43:29.821 Timestamp: Supported 00:43:29.821 Copy: Supported 
00:43:29.821 Volatile Write Cache: Present 00:43:29.821 Atomic Write Unit (Normal): 1 00:43:29.821 Atomic Write Unit (PFail): 1 00:43:29.821 Atomic Compare & Write Unit: 1 00:43:29.821 Fused Compare & Write: Not Supported 00:43:29.821 Scatter-Gather List 00:43:29.821 SGL Command Set: Supported 00:43:29.821 SGL Keyed: Not Supported 00:43:29.821 SGL Bit Bucket Descriptor: Not Supported 00:43:29.821 SGL Metadata Pointer: Not Supported 00:43:29.821 Oversized SGL: Not Supported 00:43:29.821 SGL Metadata Address: Not Supported 00:43:29.821 SGL Offset: Not Supported 00:43:29.821 Transport SGL Data Block: Not Supported 00:43:29.821 Replay Protected Memory Block: Not Supported 00:43:29.821 00:43:29.821 Firmware Slot Information 00:43:29.821 ========================= 00:43:29.821 Active slot: 1 00:43:29.821 Slot 1 Firmware Revision: 1.0 00:43:29.821 00:43:29.821 00:43:29.821 Commands Supported and Effects 00:43:29.821 ============================== 00:43:29.821 Admin Commands 00:43:29.821 -------------- 00:43:29.821 Delete I/O Submission Queue (00h): Supported 00:43:29.821 Create I/O Submission Queue (01h): Supported 00:43:29.821 Get Log Page (02h): Supported 00:43:29.821 Delete I/O Completion Queue (04h): Supported 00:43:29.821 Create I/O Completion Queue (05h): Supported 00:43:29.821 Identify (06h): Supported 00:43:29.821 Abort (08h): Supported 00:43:29.821 Set Features (09h): Supported 00:43:29.821 Get Features (0Ah): Supported 00:43:29.821 Asynchronous Event Request (0Ch): Supported 00:43:29.821 Namespace Attachment (15h): Supported NS-Inventory-Change 00:43:29.821 Directive Send (19h): Supported 00:43:29.821 Directive Receive (1Ah): Supported 00:43:29.821 Virtualization Management (1Ch): Supported 00:43:29.821 Doorbell Buffer Config (7Ch): Supported 00:43:29.821 Format NVM (80h): Supported LBA-Change 00:43:29.821 I/O Commands 00:43:29.821 ------------ 00:43:29.822 Flush (00h): Supported LBA-Change 00:43:29.822 Write (01h): Supported LBA-Change 00:43:29.822 Read (02h): Supported 00:43:29.822 Compare (05h): Supported 00:43:29.822 Write Zeroes (08h): Supported LBA-Change 00:43:29.822 Dataset Management (09h): Supported LBA-Change 00:43:29.822 Unknown (0Ch): Supported 00:43:29.822 Unknown (12h): Supported 00:43:29.822 Copy (19h): Supported LBA-Change 00:43:29.822 Unknown (1Dh): Supported LBA-Change 00:43:29.822 00:43:29.822 Error Log 00:43:29.822 ========= 00:43:29.822 00:43:29.822 Arbitration 00:43:29.822 =========== 00:43:29.822 Arbitration Burst: no limit 00:43:29.822 00:43:29.822 Power Management 00:43:29.822 ================ 00:43:29.822 Number of Power States: 1 00:43:29.822 Current Power State: Power State #0 00:43:29.822 Power State #0: 00:43:29.822 Max Power: 25.00 W 00:43:29.822 Non-Operational State: Operational 00:43:29.822 Entry Latency: 16 microseconds 00:43:29.822 Exit Latency: 4 microseconds 00:43:29.822 Relative Read Throughput: 0 00:43:29.822 Relative Read Latency: 0 00:43:29.822 Relative Write Throughput: 0 00:43:29.822 Relative Write Latency: 0 00:43:29.822 Idle Power: Not Reported 00:43:29.822 Active Power: Not Reported 00:43:29.822 Non-Operational Permissive Mode: Not Supported 00:43:29.822 00:43:29.822 Health Information 00:43:29.822 ================== 00:43:29.822 Critical Warnings: 00:43:29.822 Available Spare Space: OK 00:43:29.822 Temperature: OK 00:43:29.822 Device Reliability: OK 00:43:29.822 Read Only: No 00:43:29.822 Volatile Memory Backup: OK 00:43:29.822 Current Temperature: 323 Kelvin (50 Celsius) 00:43:29.822 Temperature Threshold: 343 Kelvin (70 Celsius) 
00:43:29.822 Available Spare: 0% 00:43:29.822 Available Spare Threshold: 0% 00:43:29.822 Life Percentage Used: 0% 00:43:29.822 Data Units Read: 5105 00:43:29.822 Data Units Written: 4753 00:43:29.822 Host Read Commands: 204896 00:43:29.822 Host Write Commands: 217764 00:43:29.822 Controller Busy Time: 0 minutes 00:43:29.822 Power Cycles: 0 00:43:29.822 Power On Hours: 0 hours 00:43:29.822 Unsafe Shutdowns: 0 00:43:29.822 Unrecoverable Media Errors: 0 00:43:29.822 Lifetime Error Log Entries: 0 00:43:29.822 Warning Temperature Time: 0 minutes 00:43:29.822 Critical Temperature Time: 0 minutes 00:43:29.822 00:43:29.822 Number of Queues 00:43:29.822 ================ 00:43:29.822 Number of I/O Submission Queues: 64 00:43:29.822 Number of I/O Completion Queues: 64 00:43:29.822 00:43:29.822 ZNS Specific Controller Data 00:43:29.822 ============================ 00:43:29.822 Zone Append Size Limit: 0 00:43:29.822 00:43:29.822 00:43:29.822 Active Namespaces 00:43:29.822 ================= 00:43:29.822 Namespace ID:1 00:43:29.822 Error Recovery Timeout: Unlimited 00:43:29.822 Command Set Identifier: NVM (00h) 00:43:29.822 Deallocate: Supported 00:43:29.822 Deallocated/Unwritten Error: Supported 00:43:29.822 Deallocated Read Value: All 0x00 00:43:29.822 Deallocate in Write Zeroes: Not Supported 00:43:29.822 Deallocated Guard Field: 0xFFFF 00:43:29.822 Flush: Supported 00:43:29.822 Reservation: Not Supported 00:43:29.822 Namespace Sharing Capabilities: Private 00:43:29.822 Size (in LBAs): 1310720 (5GiB) 00:43:29.822 Capacity (in LBAs): 1310720 (5GiB) 00:43:29.822 Utilization (in LBAs): 1310720 (5GiB) 00:43:29.822 Thin Provisioning: Not Supported 00:43:29.822 Per-NS Atomic Units: No 00:43:29.822 Maximum Single Source Range Length: 128 00:43:29.822 Maximum Copy Length: 128 00:43:29.822 Maximum Source Range Count: 128 00:43:29.822 NGUID/EUI64 Never Reused: No 00:43:29.822 Namespace Write Protected: No 00:43:29.822 Number of LBA Formats: 8 00:43:29.822 Current LBA Format: LBA Format #04 00:43:29.822 LBA Format #00: Data Size: 512 Metadata Size: 0 00:43:29.822 LBA Format #01: Data Size: 512 Metadata Size: 8 00:43:29.822 LBA Format #02: Data Size: 512 Metadata Size: 16 00:43:29.822 LBA Format #03: Data Size: 512 Metadata Size: 64 00:43:29.822 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:43:29.822 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:43:29.822 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:43:29.822 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:43:29.822 00:43:29.822 NVM Specific Namespace Data 00:43:29.822 =========================== 00:43:29.822 Logical Block Storage Tag Mask: 0 00:43:29.822 Protection Information Capabilities: 00:43:29.822 16b Guard Protection Information Storage Tag Support: No 00:43:29.822 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:43:29.822 Storage Tag Check Read Support: No 00:43:29.822 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:29.822 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:29.822 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:29.822 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:29.822 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:29.822 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:29.822 Extended LBA 
Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:29.822 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:29.822 11:55:04 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:43:29.822 11:55:04 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:43:30.081 ===================================================== 00:43:30.081 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:30.081 ===================================================== 00:43:30.081 Controller Capabilities/Features 00:43:30.081 ================================ 00:43:30.081 Vendor ID: 1b36 00:43:30.081 Subsystem Vendor ID: 1af4 00:43:30.081 Serial Number: 12340 00:43:30.081 Model Number: QEMU NVMe Ctrl 00:43:30.081 Firmware Version: 8.0.0 00:43:30.081 Recommended Arb Burst: 6 00:43:30.081 IEEE OUI Identifier: 00 54 52 00:43:30.081 Multi-path I/O 00:43:30.081 May have multiple subsystem ports: No 00:43:30.081 May have multiple controllers: No 00:43:30.081 Associated with SR-IOV VF: No 00:43:30.081 Max Data Transfer Size: 524288 00:43:30.081 Max Number of Namespaces: 256 00:43:30.081 Max Number of I/O Queues: 64 00:43:30.081 NVMe Specification Version (VS): 1.4 00:43:30.081 NVMe Specification Version (Identify): 1.4 00:43:30.081 Maximum Queue Entries: 2048 00:43:30.081 Contiguous Queues Required: Yes 00:43:30.081 Arbitration Mechanisms Supported 00:43:30.081 Weighted Round Robin: Not Supported 00:43:30.081 Vendor Specific: Not Supported 00:43:30.081 Reset Timeout: 7500 ms 00:43:30.081 Doorbell Stride: 4 bytes 00:43:30.081 NVM Subsystem Reset: Not Supported 00:43:30.081 Command Sets Supported 00:43:30.081 NVM Command Set: Supported 00:43:30.081 Boot Partition: Not Supported 00:43:30.081 Memory Page Size Minimum: 4096 bytes 00:43:30.081 Memory Page Size Maximum: 65536 bytes 00:43:30.081 Persistent Memory Region: Not Supported 00:43:30.081 Optional Asynchronous Events Supported 00:43:30.081 Namespace Attribute Notices: Supported 00:43:30.081 Firmware Activation Notices: Not Supported 00:43:30.081 ANA Change Notices: Not Supported 00:43:30.081 PLE Aggregate Log Change Notices: Not Supported 00:43:30.081 LBA Status Info Alert Notices: Not Supported 00:43:30.081 EGE Aggregate Log Change Notices: Not Supported 00:43:30.081 Normal NVM Subsystem Shutdown event: Not Supported 00:43:30.081 Zone Descriptor Change Notices: Not Supported 00:43:30.081 Discovery Log Change Notices: Not Supported 00:43:30.081 Controller Attributes 00:43:30.081 128-bit Host Identifier: Not Supported 00:43:30.081 Non-Operational Permissive Mode: Not Supported 00:43:30.081 NVM Sets: Not Supported 00:43:30.081 Read Recovery Levels: Not Supported 00:43:30.081 Endurance Groups: Not Supported 00:43:30.081 Predictable Latency Mode: Not Supported 00:43:30.081 Traffic Based Keep ALive: Not Supported 00:43:30.081 Namespace Granularity: Not Supported 00:43:30.081 SQ Associations: Not Supported 00:43:30.081 UUID List: Not Supported 00:43:30.081 Multi-Domain Subsystem: Not Supported 00:43:30.081 Fixed Capacity Management: Not Supported 00:43:30.081 Variable Capacity Management: Not Supported 00:43:30.081 Delete Endurance Group: Not Supported 00:43:30.081 Delete NVM Set: Not Supported 00:43:30.081 Extended LBA Formats Supported: Supported 00:43:30.081 Flexible Data Placement Supported: Not Supported 00:43:30.081 00:43:30.081 Controller Memory Buffer Support 00:43:30.081 
================================ 00:43:30.081 Supported: No 00:43:30.081 00:43:30.081 Persistent Memory Region Support 00:43:30.081 ================================ 00:43:30.081 Supported: No 00:43:30.081 00:43:30.081 Admin Command Set Attributes 00:43:30.081 ============================ 00:43:30.081 Security Send/Receive: Not Supported 00:43:30.081 Format NVM: Supported 00:43:30.081 Firmware Activate/Download: Not Supported 00:43:30.081 Namespace Management: Supported 00:43:30.081 Device Self-Test: Not Supported 00:43:30.081 Directives: Supported 00:43:30.081 NVMe-MI: Not Supported 00:43:30.081 Virtualization Management: Not Supported 00:43:30.081 Doorbell Buffer Config: Supported 00:43:30.081 Get LBA Status Capability: Not Supported 00:43:30.081 Command & Feature Lockdown Capability: Not Supported 00:43:30.081 Abort Command Limit: 4 00:43:30.081 Async Event Request Limit: 4 00:43:30.081 Number of Firmware Slots: N/A 00:43:30.081 Firmware Slot 1 Read-Only: N/A 00:43:30.081 Firmware Activation Without Reset: N/A 00:43:30.081 Multiple Update Detection Support: N/A 00:43:30.081 Firmware Update Granularity: No Information Provided 00:43:30.081 Per-Namespace SMART Log: Yes 00:43:30.081 Asymmetric Namespace Access Log Page: Not Supported 00:43:30.081 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:43:30.081 Command Effects Log Page: Supported 00:43:30.081 Get Log Page Extended Data: Supported 00:43:30.081 Telemetry Log Pages: Not Supported 00:43:30.081 Persistent Event Log Pages: Not Supported 00:43:30.081 Supported Log Pages Log Page: May Support 00:43:30.081 Commands Supported & Effects Log Page: Not Supported 00:43:30.081 Feature Identifiers & Effects Log Page:May Support 00:43:30.081 NVMe-MI Commands & Effects Log Page: May Support 00:43:30.081 Data Area 4 for Telemetry Log: Not Supported 00:43:30.081 Error Log Page Entries Supported: 1 00:43:30.081 Keep Alive: Not Supported 00:43:30.081 00:43:30.081 NVM Command Set Attributes 00:43:30.081 ========================== 00:43:30.081 Submission Queue Entry Size 00:43:30.082 Max: 64 00:43:30.082 Min: 64 00:43:30.082 Completion Queue Entry Size 00:43:30.082 Max: 16 00:43:30.082 Min: 16 00:43:30.082 Number of Namespaces: 256 00:43:30.082 Compare Command: Supported 00:43:30.082 Write Uncorrectable Command: Not Supported 00:43:30.082 Dataset Management Command: Supported 00:43:30.082 Write Zeroes Command: Supported 00:43:30.082 Set Features Save Field: Supported 00:43:30.082 Reservations: Not Supported 00:43:30.082 Timestamp: Supported 00:43:30.082 Copy: Supported 00:43:30.082 Volatile Write Cache: Present 00:43:30.082 Atomic Write Unit (Normal): 1 00:43:30.082 Atomic Write Unit (PFail): 1 00:43:30.082 Atomic Compare & Write Unit: 1 00:43:30.082 Fused Compare & Write: Not Supported 00:43:30.082 Scatter-Gather List 00:43:30.082 SGL Command Set: Supported 00:43:30.082 SGL Keyed: Not Supported 00:43:30.082 SGL Bit Bucket Descriptor: Not Supported 00:43:30.082 SGL Metadata Pointer: Not Supported 00:43:30.082 Oversized SGL: Not Supported 00:43:30.082 SGL Metadata Address: Not Supported 00:43:30.082 SGL Offset: Not Supported 00:43:30.082 Transport SGL Data Block: Not Supported 00:43:30.082 Replay Protected Memory Block: Not Supported 00:43:30.082 00:43:30.082 Firmware Slot Information 00:43:30.082 ========================= 00:43:30.082 Active slot: 1 00:43:30.082 Slot 1 Firmware Revision: 1.0 00:43:30.082 00:43:30.082 00:43:30.082 Commands Supported and Effects 00:43:30.082 ============================== 00:43:30.082 Admin Commands 00:43:30.082 -------------- 
00:43:30.082 Delete I/O Submission Queue (00h): Supported 00:43:30.082 Create I/O Submission Queue (01h): Supported 00:43:30.082 Get Log Page (02h): Supported 00:43:30.082 Delete I/O Completion Queue (04h): Supported 00:43:30.082 Create I/O Completion Queue (05h): Supported 00:43:30.082 Identify (06h): Supported 00:43:30.082 Abort (08h): Supported 00:43:30.082 Set Features (09h): Supported 00:43:30.082 Get Features (0Ah): Supported 00:43:30.082 Asynchronous Event Request (0Ch): Supported 00:43:30.082 Namespace Attachment (15h): Supported NS-Inventory-Change 00:43:30.082 Directive Send (19h): Supported 00:43:30.082 Directive Receive (1Ah): Supported 00:43:30.082 Virtualization Management (1Ch): Supported 00:43:30.082 Doorbell Buffer Config (7Ch): Supported 00:43:30.082 Format NVM (80h): Supported LBA-Change 00:43:30.082 I/O Commands 00:43:30.082 ------------ 00:43:30.082 Flush (00h): Supported LBA-Change 00:43:30.082 Write (01h): Supported LBA-Change 00:43:30.082 Read (02h): Supported 00:43:30.082 Compare (05h): Supported 00:43:30.082 Write Zeroes (08h): Supported LBA-Change 00:43:30.082 Dataset Management (09h): Supported LBA-Change 00:43:30.082 Unknown (0Ch): Supported 00:43:30.082 Unknown (12h): Supported 00:43:30.082 Copy (19h): Supported LBA-Change 00:43:30.082 Unknown (1Dh): Supported LBA-Change 00:43:30.082 00:43:30.082 Error Log 00:43:30.082 ========= 00:43:30.082 00:43:30.082 Arbitration 00:43:30.082 =========== 00:43:30.082 Arbitration Burst: no limit 00:43:30.082 00:43:30.082 Power Management 00:43:30.082 ================ 00:43:30.082 Number of Power States: 1 00:43:30.082 Current Power State: Power State #0 00:43:30.082 Power State #0: 00:43:30.082 Max Power: 25.00 W 00:43:30.082 Non-Operational State: Operational 00:43:30.082 Entry Latency: 16 microseconds 00:43:30.082 Exit Latency: 4 microseconds 00:43:30.082 Relative Read Throughput: 0 00:43:30.082 Relative Read Latency: 0 00:43:30.082 Relative Write Throughput: 0 00:43:30.082 Relative Write Latency: 0 00:43:30.082 Idle Power: Not Reported 00:43:30.082 Active Power: Not Reported 00:43:30.082 Non-Operational Permissive Mode: Not Supported 00:43:30.082 00:43:30.082 Health Information 00:43:30.082 ================== 00:43:30.082 Critical Warnings: 00:43:30.082 Available Spare Space: OK 00:43:30.082 Temperature: OK 00:43:30.082 Device Reliability: OK 00:43:30.082 Read Only: No 00:43:30.082 Volatile Memory Backup: OK 00:43:30.082 Current Temperature: 323 Kelvin (50 Celsius) 00:43:30.082 Temperature Threshold: 343 Kelvin (70 Celsius) 00:43:30.082 Available Spare: 0% 00:43:30.082 Available Spare Threshold: 0% 00:43:30.082 Life Percentage Used: 0% 00:43:30.082 Data Units Read: 5105 00:43:30.082 Data Units Written: 4753 00:43:30.082 Host Read Commands: 204896 00:43:30.082 Host Write Commands: 217764 00:43:30.082 Controller Busy Time: 0 minutes 00:43:30.082 Power Cycles: 0 00:43:30.082 Power On Hours: 0 hours 00:43:30.082 Unsafe Shutdowns: 0 00:43:30.082 Unrecoverable Media Errors: 0 00:43:30.082 Lifetime Error Log Entries: 0 00:43:30.082 Warning Temperature Time: 0 minutes 00:43:30.082 Critical Temperature Time: 0 minutes 00:43:30.082 00:43:30.082 Number of Queues 00:43:30.082 ================ 00:43:30.082 Number of I/O Submission Queues: 64 00:43:30.082 Number of I/O Completion Queues: 64 00:43:30.082 00:43:30.082 ZNS Specific Controller Data 00:43:30.082 ============================ 00:43:30.082 Zone Append Size Limit: 0 00:43:30.082 00:43:30.082 00:43:30.082 Active Namespaces 00:43:30.082 ================= 00:43:30.082 Namespace 
ID:1 00:43:30.082 Error Recovery Timeout: Unlimited 00:43:30.082 Command Set Identifier: NVM (00h) 00:43:30.082 Deallocate: Supported 00:43:30.082 Deallocated/Unwritten Error: Supported 00:43:30.082 Deallocated Read Value: All 0x00 00:43:30.082 Deallocate in Write Zeroes: Not Supported 00:43:30.082 Deallocated Guard Field: 0xFFFF 00:43:30.082 Flush: Supported 00:43:30.082 Reservation: Not Supported 00:43:30.082 Namespace Sharing Capabilities: Private 00:43:30.082 Size (in LBAs): 1310720 (5GiB) 00:43:30.082 Capacity (in LBAs): 1310720 (5GiB) 00:43:30.082 Utilization (in LBAs): 1310720 (5GiB) 00:43:30.082 Thin Provisioning: Not Supported 00:43:30.082 Per-NS Atomic Units: No 00:43:30.082 Maximum Single Source Range Length: 128 00:43:30.082 Maximum Copy Length: 128 00:43:30.082 Maximum Source Range Count: 128 00:43:30.082 NGUID/EUI64 Never Reused: No 00:43:30.082 Namespace Write Protected: No 00:43:30.082 Number of LBA Formats: 8 00:43:30.082 Current LBA Format: LBA Format #04 00:43:30.082 LBA Format #00: Data Size: 512 Metadata Size: 0 00:43:30.082 LBA Format #01: Data Size: 512 Metadata Size: 8 00:43:30.082 LBA Format #02: Data Size: 512 Metadata Size: 16 00:43:30.082 LBA Format #03: Data Size: 512 Metadata Size: 64 00:43:30.082 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:43:30.082 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:43:30.082 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:43:30.082 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:43:30.082 00:43:30.082 NVM Specific Namespace Data 00:43:30.082 =========================== 00:43:30.082 Logical Block Storage Tag Mask: 0 00:43:30.082 Protection Information Capabilities: 00:43:30.082 16b Guard Protection Information Storage Tag Support: No 00:43:30.082 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:43:30.082 Storage Tag Check Read Support: No 00:43:30.082 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:30.082 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:30.082 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:30.082 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:30.082 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:30.082 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:30.082 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:30.082 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:43:30.082 00:43:30.082 real 0m0.682s 00:43:30.082 user 0m0.295s 00:43:30.082 sys 0m0.305s 00:43:30.082 11:55:04 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:30.082 11:55:04 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:43:30.082 ************************************ 00:43:30.082 END TEST nvme_identify 00:43:30.082 ************************************ 00:43:30.082 11:55:04 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:30.083 11:55:04 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:43:30.083 11:55:04 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:30.083 11:55:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:30.083 11:55:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:30.083 
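The identify pass above is driven by a small amount of plumbing: get_nvme_bdfs (common/autotest_common.sh) asks scripts/gen_nvme.sh for a bdev JSON config and pulls each controller's PCI address out with jq, and nvme.sh then runs spdk_nvme_identify once per address. Below is a minimal sketch of that pattern, reusing the paths and the jq filter visible in the log; the empty-list check mirrors the (( 1 == 0 )) guard above, and the error message is an addition here.

    rootdir=/home/vagrant/spdk_repo/spdk
    # Discover NVMe controllers: gen_nvme.sh emits a bdev JSON config,
    # jq extracts each controller's PCIe address (traddr).
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }
    # Identify each controller over PCIe, as nvme.sh does for every discovered bdf.
    for bdf in "${bdfs[@]}"; do
        "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done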
************************************ 00:43:30.083 START TEST nvme_perf 00:43:30.083 ************************************ 00:43:30.083 11:55:04 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:43:30.083 11:55:04 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:43:31.461 Initializing NVMe Controllers 00:43:31.461 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:31.461 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:31.461 Initialization complete. Launching workers. 00:43:31.461 ======================================================== 00:43:31.461 Latency(us) 00:43:31.461 Device Information : IOPS MiB/s Average min max 00:43:31.461 PCIE (0000:00:10.0) NSID 1 from core 0: 75873.35 889.14 1685.53 855.40 7806.53 00:43:31.461 ======================================================== 00:43:31.461 Total : 75873.35 889.14 1685.53 855.40 7806.53 00:43:31.461 00:43:31.461 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:31.461 ================================================================================= 00:43:31.461 1.00000% : 1020.276us 00:43:31.461 10.00000% : 1243.695us 00:43:31.461 25.00000% : 1429.876us 00:43:31.461 50.00000% : 1675.636us 00:43:31.461 75.00000% : 1921.396us 00:43:31.461 90.00000% : 2129.920us 00:43:31.461 95.00000% : 2234.182us 00:43:31.461 98.00000% : 2368.233us 00:43:31.461 99.00000% : 2561.862us 00:43:31.461 99.50000% : 2874.647us 00:43:31.461 99.90000% : 4676.887us 00:43:31.461 99.99000% : 7506.851us 00:43:31.461 99.99900% : 7864.320us 00:43:31.461 99.99990% : 7864.320us 00:43:31.461 99.99999% : 7864.320us 00:43:31.461 00:43:31.461 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:31.461 ============================================================================== 00:43:31.461 Range in us Cumulative IO count 00:43:31.461 852.713 - 856.436: 0.0013% ( 1) 00:43:31.461 863.884 - 867.607: 0.0040% ( 2) 00:43:31.461 867.607 - 871.331: 0.0053% ( 1) 00:43:31.461 871.331 - 875.055: 0.0066% ( 1) 00:43:31.461 875.055 - 878.778: 0.0092% ( 2) 00:43:31.461 878.778 - 882.502: 0.0105% ( 1) 00:43:31.461 886.225 - 889.949: 0.0132% ( 2) 00:43:31.461 889.949 - 893.673: 0.0171% ( 3) 00:43:31.461 893.673 - 897.396: 0.0237% ( 5) 00:43:31.461 897.396 - 901.120: 0.0303% ( 5) 00:43:31.461 901.120 - 904.844: 0.0356% ( 4) 00:43:31.461 904.844 - 908.567: 0.0461% ( 8) 00:43:31.461 908.567 - 912.291: 0.0553% ( 7) 00:43:31.461 912.291 - 916.015: 0.0619% ( 5) 00:43:31.461 916.015 - 919.738: 0.0646% ( 2) 00:43:31.461 919.738 - 923.462: 0.0790% ( 11) 00:43:31.461 923.462 - 927.185: 0.0830% ( 3) 00:43:31.461 927.185 - 930.909: 0.0975% ( 11) 00:43:31.461 930.909 - 934.633: 0.1093% ( 9) 00:43:31.461 934.633 - 938.356: 0.1331% ( 18) 00:43:31.461 938.356 - 942.080: 0.1502% ( 13) 00:43:31.461 942.080 - 945.804: 0.1634% ( 10) 00:43:31.461 945.804 - 949.527: 0.1779% ( 11) 00:43:31.461 949.527 - 953.251: 0.2055% ( 21) 00:43:31.461 953.251 - 960.698: 0.2727% ( 51) 00:43:31.461 960.698 - 968.145: 0.3439% ( 54) 00:43:31.461 968.145 - 975.593: 0.4084% ( 49) 00:43:31.461 975.593 - 983.040: 0.4901% ( 62) 00:43:31.461 983.040 - 990.487: 0.5876% ( 74) 00:43:31.461 990.487 - 997.935: 0.6877% ( 76) 00:43:31.461 997.935 - 1005.382: 0.8050% ( 89) 00:43:31.461 1005.382 - 1012.829: 0.9262% ( 92) 00:43:31.461 1012.829 - 1020.276: 1.0540% ( 97) 00:43:31.461 1020.276 - 1027.724: 1.2213% ( 127) 00:43:31.461 1027.724 - 1035.171: 1.3675% ( 111) 00:43:31.461 1035.171 - 
1042.618: 1.5164% ( 113) 00:43:31.461 1042.618 - 1050.065: 1.6784% ( 123) 00:43:31.461 1050.065 - 1057.513: 1.8998% ( 168) 00:43:31.461 1057.513 - 1064.960: 2.1132% ( 162) 00:43:31.461 1064.960 - 1072.407: 2.3148% ( 153) 00:43:31.461 1072.407 - 1079.855: 2.5374% ( 169) 00:43:31.461 1079.855 - 1087.302: 2.7482% ( 160) 00:43:31.461 1087.302 - 1094.749: 2.9985% ( 190) 00:43:31.461 1094.749 - 1102.196: 3.2739% ( 209) 00:43:31.461 1102.196 - 1109.644: 3.5374% ( 200) 00:43:31.461 1109.644 - 1117.091: 3.8272% ( 220) 00:43:31.461 1117.091 - 1124.538: 4.0999% ( 207) 00:43:31.461 1124.538 - 1131.985: 4.3858% ( 217) 00:43:31.461 1131.985 - 1139.433: 4.7046% ( 242) 00:43:31.461 1139.433 - 1146.880: 5.0432% ( 257) 00:43:31.461 1146.880 - 1154.327: 5.3475% ( 231) 00:43:31.461 1154.327 - 1161.775: 5.6809% ( 253) 00:43:31.461 1161.775 - 1169.222: 6.0234% ( 260) 00:43:31.461 1169.222 - 1176.669: 6.4015% ( 287) 00:43:31.461 1176.669 - 1184.116: 6.7612% ( 273) 00:43:31.461 1184.116 - 1191.564: 7.1406% ( 288) 00:43:31.461 1191.564 - 1199.011: 7.5134% ( 283) 00:43:31.461 1199.011 - 1206.458: 7.9495% ( 331) 00:43:31.461 1206.458 - 1213.905: 8.3961% ( 339) 00:43:31.461 1213.905 - 1221.353: 8.8783% ( 366) 00:43:31.461 1221.353 - 1228.800: 9.3434% ( 353) 00:43:31.461 1228.800 - 1236.247: 9.7860% ( 336) 00:43:31.461 1236.247 - 1243.695: 10.2537% ( 355) 00:43:31.461 1243.695 - 1251.142: 10.7372% ( 367) 00:43:31.461 1251.142 - 1258.589: 11.2774% ( 410) 00:43:31.461 1258.589 - 1266.036: 11.7886% ( 388) 00:43:31.461 1266.036 - 1273.484: 12.2708% ( 366) 00:43:31.461 1273.484 - 1280.931: 12.8241% ( 420) 00:43:31.461 1280.931 - 1288.378: 13.3366% ( 389) 00:43:31.461 1288.378 - 1295.825: 13.9149% ( 439) 00:43:31.461 1295.825 - 1303.273: 14.4709% ( 422) 00:43:31.461 1303.273 - 1310.720: 15.0097% ( 409) 00:43:31.461 1310.720 - 1318.167: 15.5947% ( 444) 00:43:31.461 1318.167 - 1325.615: 16.1441% ( 417) 00:43:31.461 1325.615 - 1333.062: 16.7290% ( 444) 00:43:31.461 1333.062 - 1340.509: 17.3008% ( 434) 00:43:31.461 1340.509 - 1347.956: 17.9556% ( 497) 00:43:31.461 1347.956 - 1355.404: 18.5577% ( 457) 00:43:31.461 1355.404 - 1362.851: 19.1637% ( 460) 00:43:31.461 1362.851 - 1370.298: 19.7499% ( 445) 00:43:31.461 1370.298 - 1377.745: 20.4298% ( 516) 00:43:31.461 1377.745 - 1385.193: 21.0648% ( 482) 00:43:31.461 1385.193 - 1392.640: 21.6774% ( 465) 00:43:31.461 1392.640 - 1400.087: 22.3361% ( 500) 00:43:31.461 1400.087 - 1407.535: 23.0409% ( 535) 00:43:31.461 1407.535 - 1414.982: 23.7287% ( 522) 00:43:31.461 1414.982 - 1422.429: 24.4124% ( 519) 00:43:31.461 1422.429 - 1429.876: 25.1199% ( 537) 00:43:31.461 1429.876 - 1437.324: 25.7918% ( 510) 00:43:31.461 1437.324 - 1444.771: 26.5217% ( 554) 00:43:31.461 1444.771 - 1452.218: 27.2528% ( 555) 00:43:31.461 1452.218 - 1459.665: 27.9814% ( 553) 00:43:31.461 1459.665 - 1467.113: 28.6770% ( 528) 00:43:31.461 1467.113 - 1474.560: 29.3937% ( 544) 00:43:31.461 1474.560 - 1482.007: 30.1157% ( 548) 00:43:31.461 1482.007 - 1489.455: 30.8192% ( 534) 00:43:31.461 1489.455 - 1496.902: 31.5701% ( 570) 00:43:31.461 1496.902 - 1504.349: 32.3329% ( 579) 00:43:31.461 1504.349 - 1511.796: 33.0931% ( 577) 00:43:31.461 1511.796 - 1519.244: 33.8375% ( 565) 00:43:31.461 1519.244 - 1526.691: 34.6095% ( 586) 00:43:31.461 1526.691 - 1534.138: 35.3789% ( 584) 00:43:31.461 1534.138 - 1541.585: 36.1298% ( 570) 00:43:31.461 1541.585 - 1549.033: 36.9216% ( 601) 00:43:31.461 1549.033 - 1556.480: 37.7042% ( 594) 00:43:31.461 1556.480 - 1563.927: 38.5131% ( 614) 00:43:31.461 1563.927 - 1571.375: 39.2865% ( 587) 
00:43:31.461 1571.375 - 1578.822: 40.1007% ( 618) 00:43:31.461 1578.822 - 1586.269: 40.8793% ( 591) 00:43:31.461 1586.269 - 1593.716: 41.6566% ( 590) 00:43:31.461 1593.716 - 1601.164: 42.4945% ( 636) 00:43:31.461 1601.164 - 1608.611: 43.2388% ( 565) 00:43:31.461 1608.611 - 1616.058: 44.0438% ( 611) 00:43:31.461 1616.058 - 1623.505: 44.8856% ( 639) 00:43:31.461 1623.505 - 1630.953: 45.6656% ( 592) 00:43:31.461 1630.953 - 1638.400: 46.4877% ( 624) 00:43:31.461 1638.400 - 1645.847: 47.2847% ( 605) 00:43:31.461 1645.847 - 1653.295: 48.0739% ( 599) 00:43:31.461 1653.295 - 1660.742: 48.9026% ( 629) 00:43:31.461 1660.742 - 1668.189: 49.6970% ( 603) 00:43:31.461 1668.189 - 1675.636: 50.5059% ( 614) 00:43:31.461 1675.636 - 1683.084: 51.2858% ( 592) 00:43:31.461 1683.084 - 1690.531: 52.0776% ( 601) 00:43:31.461 1690.531 - 1697.978: 52.8681% ( 600) 00:43:31.461 1697.978 - 1705.425: 53.6441% ( 589) 00:43:31.461 1705.425 - 1712.873: 54.4398% ( 604) 00:43:31.461 1712.873 - 1720.320: 55.2527% ( 617) 00:43:31.461 1720.320 - 1727.767: 56.0260% ( 587) 00:43:31.461 1727.767 - 1735.215: 56.8046% ( 591) 00:43:31.461 1735.215 - 1742.662: 57.6122% ( 613) 00:43:31.461 1742.662 - 1750.109: 58.3988% ( 597) 00:43:31.461 1750.109 - 1757.556: 59.1616% ( 579) 00:43:31.461 1757.556 - 1765.004: 59.9441% ( 594) 00:43:31.461 1765.004 - 1772.451: 60.7452% ( 608) 00:43:31.461 1772.451 - 1779.898: 61.4908% ( 566) 00:43:31.461 1779.898 - 1787.345: 62.2418% ( 570) 00:43:31.461 1787.345 - 1794.793: 63.0138% ( 586) 00:43:31.462 1794.793 - 1802.240: 63.7661% ( 571) 00:43:31.462 1802.240 - 1809.687: 64.5315% ( 581) 00:43:31.462 1809.687 - 1817.135: 65.2364% ( 535) 00:43:31.462 1817.135 - 1824.582: 66.0044% ( 583) 00:43:31.462 1824.582 - 1832.029: 66.7527% ( 568) 00:43:31.462 1832.029 - 1839.476: 67.5011% ( 568) 00:43:31.462 1839.476 - 1846.924: 68.2322% ( 555) 00:43:31.462 1846.924 - 1854.371: 68.9634% ( 555) 00:43:31.462 1854.371 - 1861.818: 69.6893% ( 551) 00:43:31.462 1861.818 - 1869.265: 70.4008% ( 540) 00:43:31.462 1869.265 - 1876.713: 71.1214% ( 547) 00:43:31.462 1876.713 - 1884.160: 71.8328% ( 540) 00:43:31.462 1884.160 - 1891.607: 72.5061% ( 511) 00:43:31.462 1891.607 - 1899.055: 73.2280% ( 548) 00:43:31.462 1899.055 - 1906.502: 73.9250% ( 529) 00:43:31.462 1906.502 - 1921.396: 75.2714% ( 1022) 00:43:31.462 1921.396 - 1936.291: 76.6112% ( 1017) 00:43:31.462 1936.291 - 1951.185: 77.9208% ( 994) 00:43:31.462 1951.185 - 1966.080: 79.1895% ( 963) 00:43:31.462 1966.080 - 1980.975: 80.3884% ( 910) 00:43:31.462 1980.975 - 1995.869: 81.5952% ( 916) 00:43:31.462 1995.869 - 2010.764: 82.7374% ( 867) 00:43:31.462 2010.764 - 2025.658: 83.9178% ( 896) 00:43:31.462 2025.658 - 2040.553: 85.0074% ( 827) 00:43:31.462 2040.553 - 2055.447: 86.1101% ( 837) 00:43:31.462 2055.447 - 2070.342: 87.1140% ( 762) 00:43:31.462 2070.342 - 2085.236: 88.0955% ( 745) 00:43:31.462 2085.236 - 2100.131: 89.0243% ( 705) 00:43:31.462 2100.131 - 2115.025: 89.9373% ( 693) 00:43:31.462 2115.025 - 2129.920: 90.7633% ( 627) 00:43:31.462 2129.920 - 2144.815: 91.5933% ( 630) 00:43:31.462 2144.815 - 2159.709: 92.3206% ( 552) 00:43:31.462 2159.709 - 2174.604: 92.9938% ( 511) 00:43:31.462 2174.604 - 2189.498: 93.6314% ( 484) 00:43:31.462 2189.498 - 2204.393: 94.2111% ( 440) 00:43:31.462 2204.393 - 2219.287: 94.7737% ( 427) 00:43:31.462 2219.287 - 2234.182: 95.3217% ( 416) 00:43:31.462 2234.182 - 2249.076: 95.7762% ( 345) 00:43:31.462 2249.076 - 2263.971: 96.2005% ( 322) 00:43:31.462 2263.971 - 2278.865: 96.5759% ( 285) 00:43:31.462 2278.865 - 2293.760: 96.8934% ( 241) 
00:43:31.462 2293.760 - 2308.655: 97.1978% ( 231) 00:43:31.462 2308.655 - 2323.549: 97.4428% ( 186) 00:43:31.462 2323.549 - 2338.444: 97.6562% ( 162) 00:43:31.462 2338.444 - 2353.338: 97.8644% ( 158) 00:43:31.462 2353.338 - 2368.233: 98.0344% ( 129) 00:43:31.462 2368.233 - 2383.127: 98.1977% ( 124) 00:43:31.462 2383.127 - 2398.022: 98.3189% ( 92) 00:43:31.462 2398.022 - 2412.916: 98.4362% ( 89) 00:43:31.462 2412.916 - 2427.811: 98.5376% ( 77) 00:43:31.462 2427.811 - 2442.705: 98.6378% ( 76) 00:43:31.462 2442.705 - 2457.600: 98.7155% ( 59) 00:43:31.462 2457.600 - 2472.495: 98.7721% ( 43) 00:43:31.462 2472.495 - 2487.389: 98.8341% ( 47) 00:43:31.462 2487.389 - 2502.284: 98.8802% ( 35) 00:43:31.462 2502.284 - 2517.178: 98.9171% ( 28) 00:43:31.462 2517.178 - 2532.073: 98.9632% ( 35) 00:43:31.462 2532.073 - 2546.967: 98.9987% ( 27) 00:43:31.462 2546.967 - 2561.862: 99.0264% ( 21) 00:43:31.462 2561.862 - 2576.756: 99.0554% ( 22) 00:43:31.462 2576.756 - 2591.651: 99.0791% ( 18) 00:43:31.462 2591.651 - 2606.545: 99.1120% ( 25) 00:43:31.462 2606.545 - 2621.440: 99.1318% ( 15) 00:43:31.462 2621.440 - 2636.335: 99.1568% ( 19) 00:43:31.462 2636.335 - 2651.229: 99.1766% ( 15) 00:43:31.462 2651.229 - 2666.124: 99.2003% ( 18) 00:43:31.462 2666.124 - 2681.018: 99.2214% ( 16) 00:43:31.462 2681.018 - 2695.913: 99.2491% ( 21) 00:43:31.462 2695.913 - 2710.807: 99.2714% ( 17) 00:43:31.462 2710.807 - 2725.702: 99.2925% ( 16) 00:43:31.462 2725.702 - 2740.596: 99.3149% ( 17) 00:43:31.462 2740.596 - 2755.491: 99.3413% ( 20) 00:43:31.462 2755.491 - 2770.385: 99.3624% ( 16) 00:43:31.462 2770.385 - 2785.280: 99.3821% ( 15) 00:43:31.462 2785.280 - 2800.175: 99.4006% ( 14) 00:43:31.462 2800.175 - 2815.069: 99.4190% ( 14) 00:43:31.462 2815.069 - 2829.964: 99.4361% ( 13) 00:43:31.462 2829.964 - 2844.858: 99.4612% ( 19) 00:43:31.462 2844.858 - 2859.753: 99.4849% ( 18) 00:43:31.462 2859.753 - 2874.647: 99.5020% ( 13) 00:43:31.462 2874.647 - 2889.542: 99.5204% ( 14) 00:43:31.462 2889.542 - 2904.436: 99.5389% ( 14) 00:43:31.462 2904.436 - 2919.331: 99.5613% ( 17) 00:43:31.462 2919.331 - 2934.225: 99.5771% ( 12) 00:43:31.462 2934.225 - 2949.120: 99.5942% ( 13) 00:43:31.462 2949.120 - 2964.015: 99.6127% ( 14) 00:43:31.462 2964.015 - 2978.909: 99.6245% ( 9) 00:43:31.462 2978.909 - 2993.804: 99.6377% ( 10) 00:43:31.462 2993.804 - 3008.698: 99.6535% ( 12) 00:43:31.462 3008.698 - 3023.593: 99.6627% ( 7) 00:43:31.462 3023.593 - 3038.487: 99.6720% ( 7) 00:43:31.462 3038.487 - 3053.382: 99.6825% ( 8) 00:43:31.462 3053.382 - 3068.276: 99.6930% ( 8) 00:43:31.462 3068.276 - 3083.171: 99.7036% ( 8) 00:43:31.462 3083.171 - 3098.065: 99.7141% ( 8) 00:43:31.462 3098.065 - 3112.960: 99.7233% ( 7) 00:43:31.462 3112.960 - 3127.855: 99.7286% ( 4) 00:43:31.462 3127.855 - 3142.749: 99.7378% ( 7) 00:43:31.462 3142.749 - 3157.644: 99.7497% ( 9) 00:43:31.462 3157.644 - 3172.538: 99.7536% ( 3) 00:43:31.462 3172.538 - 3187.433: 99.7589% ( 4) 00:43:31.462 3187.433 - 3202.327: 99.7655% ( 5) 00:43:31.462 3202.327 - 3217.222: 99.7708% ( 4) 00:43:31.462 3217.222 - 3232.116: 99.7747% ( 3) 00:43:31.462 3232.116 - 3247.011: 99.7813% ( 5) 00:43:31.462 3247.011 - 3261.905: 99.7866% ( 4) 00:43:31.462 3261.905 - 3276.800: 99.7918% ( 4) 00:43:31.462 3276.800 - 3291.695: 99.7945% ( 2) 00:43:31.462 3291.695 - 3306.589: 99.7984% ( 3) 00:43:31.462 3306.589 - 3321.484: 99.7997% ( 1) 00:43:31.462 3321.484 - 3336.378: 99.8037% ( 3) 00:43:31.462 3336.378 - 3351.273: 99.8090% ( 4) 00:43:31.462 3351.273 - 3366.167: 99.8129% ( 3) 00:43:31.462 3366.167 - 3381.062: 99.8156% ( 
2) 00:43:31.462 3381.062 - 3395.956: 99.8182% ( 2) 00:43:31.462 3395.956 - 3410.851: 99.8208% ( 2) 00:43:31.462 3410.851 - 3425.745: 99.8235% ( 2) 00:43:31.462 3440.640 - 3455.535: 99.8274% ( 3) 00:43:31.462 3455.535 - 3470.429: 99.8287% ( 1) 00:43:31.462 3485.324 - 3500.218: 99.8314% ( 2) 00:43:31.462 3515.113 - 3530.007: 99.8340% ( 2) 00:43:31.462 3530.007 - 3544.902: 99.8353% ( 1) 00:43:31.462 3559.796 - 3574.691: 99.8380% ( 2) 00:43:31.462 3574.691 - 3589.585: 99.8393% ( 1) 00:43:31.462 3589.585 - 3604.480: 99.8406% ( 1) 00:43:31.462 3604.480 - 3619.375: 99.8419% ( 1) 00:43:31.462 3619.375 - 3634.269: 99.8432% ( 1) 00:43:31.462 3634.269 - 3649.164: 99.8445% ( 1) 00:43:31.462 3649.164 - 3664.058: 99.8459% ( 1) 00:43:31.462 3664.058 - 3678.953: 99.8472% ( 1) 00:43:31.462 3678.953 - 3693.847: 99.8485% ( 1) 00:43:31.462 3693.847 - 3708.742: 99.8498% ( 1) 00:43:31.462 3708.742 - 3723.636: 99.8511% ( 1) 00:43:31.462 3723.636 - 3738.531: 99.8524% ( 1) 00:43:31.462 3738.531 - 3753.425: 99.8551% ( 2) 00:43:31.462 3768.320 - 3783.215: 99.8564% ( 1) 00:43:31.462 3783.215 - 3798.109: 99.8590% ( 2) 00:43:31.462 3813.004 - 3842.793: 99.8630% ( 3) 00:43:31.462 3842.793 - 3872.582: 99.8656% ( 2) 00:43:31.462 3872.582 - 3902.371: 99.8683% ( 2) 00:43:31.462 3902.371 - 3932.160: 99.8696% ( 1) 00:43:31.462 3932.160 - 3961.949: 99.8735% ( 3) 00:43:31.462 3961.949 - 3991.738: 99.8762% ( 2) 00:43:31.462 3991.738 - 4021.527: 99.8775% ( 1) 00:43:31.462 4021.527 - 4051.316: 99.8788% ( 1) 00:43:31.462 4051.316 - 4081.105: 99.8801% ( 1) 00:43:31.462 4081.105 - 4110.895: 99.8814% ( 1) 00:43:31.462 4110.895 - 4140.684: 99.8827% ( 1) 00:43:31.462 4140.684 - 4170.473: 99.8841% ( 1) 00:43:31.462 4200.262 - 4230.051: 99.8854% ( 1) 00:43:31.462 4230.051 - 4259.840: 99.8867% ( 1) 00:43:31.462 4259.840 - 4289.629: 99.8880% ( 1) 00:43:31.462 4319.418 - 4349.207: 99.8893% ( 1) 00:43:31.462 4349.207 - 4378.996: 99.8907% ( 1) 00:43:31.462 4378.996 - 4408.785: 99.8920% ( 1) 00:43:31.462 4408.785 - 4438.575: 99.8933% ( 1) 00:43:31.462 4468.364 - 4498.153: 99.8946% ( 1) 00:43:31.462 4498.153 - 4527.942: 99.8959% ( 1) 00:43:31.462 4527.942 - 4557.731: 99.8972% ( 1) 00:43:31.462 4557.731 - 4587.520: 99.8986% ( 1) 00:43:31.462 4617.309 - 4647.098: 99.8999% ( 1) 00:43:31.462 4647.098 - 4676.887: 99.9012% ( 1) 00:43:31.462 4676.887 - 4706.676: 99.9025% ( 1) 00:43:31.462 4706.676 - 4736.465: 99.9038% ( 1) 00:43:31.462 4766.255 - 4796.044: 99.9051% ( 1) 00:43:31.462 4796.044 - 4825.833: 99.9065% ( 1) 00:43:31.462 4825.833 - 4855.622: 99.9078% ( 1) 00:43:31.462 4885.411 - 4915.200: 99.9091% ( 1) 00:43:31.462 4915.200 - 4944.989: 99.9104% ( 1) 00:43:31.462 4944.989 - 4974.778: 99.9117% ( 1) 00:43:31.462 5004.567 - 5034.356: 99.9130% ( 1) 00:43:31.462 5034.356 - 5064.145: 99.9144% ( 1) 00:43:31.462 5064.145 - 5093.935: 99.9157% ( 1) 00:43:31.462 5213.091 - 5242.880: 99.9170% ( 1) 00:43:31.462 5242.880 - 5272.669: 99.9183% ( 1) 00:43:31.462 5302.458 - 5332.247: 99.9196% ( 1) 00:43:31.462 5332.247 - 5362.036: 99.9210% ( 1) 00:43:31.462 5362.036 - 5391.825: 99.9223% ( 1) 00:43:31.462 5421.615 - 5451.404: 99.9236% ( 1) 00:43:31.462 5481.193 - 5510.982: 99.9249% ( 1) 00:43:31.462 5510.982 - 5540.771: 99.9262% ( 1) 00:43:31.462 5570.560 - 5600.349: 99.9275% ( 1) 00:43:31.462 5600.349 - 5630.138: 99.9289% ( 1) 00:43:31.462 5659.927 - 5689.716: 99.9302% ( 1) 00:43:31.462 5689.716 - 5719.505: 99.9315% ( 1) 00:43:31.462 5719.505 - 5749.295: 99.9328% ( 1) 00:43:31.462 5749.295 - 5779.084: 99.9341% ( 1) 00:43:31.462 5808.873 - 5838.662: 99.9354% ( 1) 
00:43:31.462 5838.662 - 5868.451: 99.9368% ( 1) 00:43:31.462 5868.451 - 5898.240: 99.9381% ( 1) 00:43:31.462 5928.029 - 5957.818: 99.9394% ( 1) 00:43:31.463 5957.818 - 5987.607: 99.9407% ( 1) 00:43:31.463 5987.607 - 6017.396: 99.9420% ( 1) 00:43:31.463 6047.185 - 6076.975: 99.9433% ( 1) 00:43:31.463 6076.975 - 6106.764: 99.9447% ( 1) 00:43:31.463 6106.764 - 6136.553: 99.9460% ( 1) 00:43:31.463 6166.342 - 6196.131: 99.9473% ( 1) 00:43:31.463 6196.131 - 6225.920: 99.9486% ( 1) 00:43:31.463 6225.920 - 6255.709: 99.9499% ( 1) 00:43:31.463 6285.498 - 6315.287: 99.9513% ( 1) 00:43:31.463 6315.287 - 6345.076: 99.9526% ( 1) 00:43:31.463 6345.076 - 6374.865: 99.9539% ( 1) 00:43:31.463 6404.655 - 6434.444: 99.9552% ( 1) 00:43:31.463 6434.444 - 6464.233: 99.9565% ( 1) 00:43:31.463 6494.022 - 6523.811: 99.9578% ( 1) 00:43:31.463 6523.811 - 6553.600: 99.9592% ( 1) 00:43:31.463 6553.600 - 6583.389: 99.9605% ( 1) 00:43:31.463 6583.389 - 6613.178: 99.9618% ( 1) 00:43:31.463 6642.967 - 6672.756: 99.9631% ( 1) 00:43:31.463 6672.756 - 6702.545: 99.9644% ( 1) 00:43:31.463 6702.545 - 6732.335: 99.9657% ( 1) 00:43:31.463 6762.124 - 6791.913: 99.9671% ( 1) 00:43:31.463 6791.913 - 6821.702: 99.9684% ( 1) 00:43:31.463 6821.702 - 6851.491: 99.9697% ( 1) 00:43:31.463 6851.491 - 6881.280: 99.9710% ( 1) 00:43:31.463 6911.069 - 6940.858: 99.9723% ( 1) 00:43:31.463 6940.858 - 6970.647: 99.9737% ( 1) 00:43:31.463 6970.647 - 7000.436: 99.9750% ( 1) 00:43:31.463 7000.436 - 7030.225: 99.9763% ( 1) 00:43:31.463 7060.015 - 7089.804: 99.9776% ( 1) 00:43:31.463 7119.593 - 7149.382: 99.9789% ( 1) 00:43:31.463 7149.382 - 7179.171: 99.9802% ( 1) 00:43:31.463 7208.960 - 7238.749: 99.9816% ( 1) 00:43:31.463 7238.749 - 7268.538: 99.9829% ( 1) 00:43:31.463 7268.538 - 7298.327: 99.9842% ( 1) 00:43:31.463 7298.327 - 7328.116: 99.9855% ( 1) 00:43:31.463 7357.905 - 7387.695: 99.9868% ( 1) 00:43:31.463 7387.695 - 7417.484: 99.9881% ( 1) 00:43:31.463 7417.484 - 7447.273: 99.9895% ( 1) 00:43:31.463 7477.062 - 7506.851: 99.9908% ( 1) 00:43:31.463 7506.851 - 7536.640: 99.9921% ( 1) 00:43:31.463 7566.429 - 7596.218: 99.9934% ( 1) 00:43:31.463 7596.218 - 7626.007: 99.9947% ( 1) 00:43:31.463 7626.007 - 7685.585: 99.9960% ( 1) 00:43:31.463 7685.585 - 7745.164: 99.9987% ( 2) 00:43:31.463 7804.742 - 7864.320: 100.0000% ( 1) 00:43:31.463 00:43:31.463 11:55:06 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:43:32.838 Initializing NVMe Controllers 00:43:32.838 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:32.838 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:32.838 Initialization complete. Launching workers. 
00:43:32.838 ======================================================== 00:43:32.838 Latency(us) 00:43:32.838 Device Information : IOPS MiB/s Average min max 00:43:32.838 PCIE (0000:00:10.0) NSID 1 from core 0: 81257.05 952.23 1574.98 525.43 10667.66 00:43:32.838 ======================================================== 00:43:32.838 Total : 81257.05 952.23 1574.98 525.43 10667.66 00:43:32.838 00:43:32.838 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:32.838 ================================================================================= 00:43:32.838 1.00000% : 1042.618us 00:43:32.838 10.00000% : 1243.695us 00:43:32.838 25.00000% : 1370.298us 00:43:32.838 50.00000% : 1526.691us 00:43:32.838 75.00000% : 1735.215us 00:43:32.838 90.00000% : 1966.080us 00:43:32.838 95.00000% : 2115.025us 00:43:32.838 98.00000% : 2308.655us 00:43:32.838 99.00000% : 2487.389us 00:43:32.838 99.50000% : 2859.753us 00:43:32.838 99.90000% : 4736.465us 00:43:32.838 99.99000% : 10009.135us 00:43:32.838 99.99900% : 10724.073us 00:43:32.838 99.99990% : 10724.073us 00:43:32.838 99.99999% : 10724.073us 00:43:32.838 00:43:32.838 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:43:32.838 ============================================================================== 00:43:32.838 Range in us Cumulative IO count 00:43:32.838 525.033 - 528.756: 0.0012% ( 1) 00:43:32.838 569.716 - 573.440: 0.0025% ( 1) 00:43:32.838 577.164 - 580.887: 0.0037% ( 1) 00:43:32.838 625.571 - 629.295: 0.0049% ( 1) 00:43:32.838 633.018 - 636.742: 0.0074% ( 2) 00:43:32.838 636.742 - 640.465: 0.0086% ( 1) 00:43:32.838 640.465 - 644.189: 0.0098% ( 1) 00:43:32.838 670.255 - 673.978: 0.0111% ( 1) 00:43:32.838 673.978 - 677.702: 0.0135% ( 2) 00:43:32.838 681.425 - 685.149: 0.0148% ( 1) 00:43:32.838 696.320 - 700.044: 0.0160% ( 1) 00:43:32.838 703.767 - 707.491: 0.0197% ( 3) 00:43:32.838 707.491 - 711.215: 0.0209% ( 1) 00:43:32.838 711.215 - 714.938: 0.0221% ( 1) 00:43:32.838 714.938 - 718.662: 0.0234% ( 1) 00:43:32.838 726.109 - 729.833: 0.0246% ( 1) 00:43:32.838 733.556 - 737.280: 0.0258% ( 1) 00:43:32.838 741.004 - 744.727: 0.0271% ( 1) 00:43:32.838 744.727 - 748.451: 0.0283% ( 1) 00:43:32.838 748.451 - 752.175: 0.0308% ( 2) 00:43:32.838 755.898 - 759.622: 0.0332% ( 2) 00:43:32.838 759.622 - 763.345: 0.0357% ( 2) 00:43:32.838 763.345 - 767.069: 0.0381% ( 2) 00:43:32.838 767.069 - 770.793: 0.0394% ( 1) 00:43:32.838 770.793 - 774.516: 0.0418% ( 2) 00:43:32.838 778.240 - 781.964: 0.0443% ( 2) 00:43:32.838 785.687 - 789.411: 0.0467% ( 2) 00:43:32.838 793.135 - 796.858: 0.0492% ( 2) 00:43:32.838 800.582 - 804.305: 0.0541% ( 4) 00:43:32.838 804.305 - 808.029: 0.0554% ( 1) 00:43:32.838 808.029 - 811.753: 0.0578% ( 2) 00:43:32.838 811.753 - 815.476: 0.0590% ( 1) 00:43:32.838 815.476 - 819.200: 0.0627% ( 3) 00:43:32.838 819.200 - 822.924: 0.0664% ( 3) 00:43:32.838 822.924 - 826.647: 0.0689% ( 2) 00:43:32.838 826.647 - 830.371: 0.0726% ( 3) 00:43:32.838 830.371 - 834.095: 0.0787% ( 5) 00:43:32.838 834.095 - 837.818: 0.0812% ( 2) 00:43:32.838 837.818 - 841.542: 0.0824% ( 1) 00:43:32.838 841.542 - 845.265: 0.0836% ( 1) 00:43:32.838 845.265 - 848.989: 0.0898% ( 5) 00:43:32.838 848.989 - 852.713: 0.0972% ( 6) 00:43:32.838 852.713 - 856.436: 0.1033% ( 5) 00:43:32.838 856.436 - 860.160: 0.1046% ( 1) 00:43:32.838 860.160 - 863.884: 0.1095% ( 4) 00:43:32.838 863.884 - 867.607: 0.1132% ( 3) 00:43:32.838 867.607 - 871.331: 0.1205% ( 6) 00:43:32.838 871.331 - 875.055: 0.1230% ( 2) 00:43:32.838 875.055 - 878.778: 0.1341% ( 9) 00:43:32.838 
878.778 - 882.502: 0.1402% ( 5) 00:43:32.838 882.502 - 886.225: 0.1476% ( 6) 00:43:32.838 886.225 - 889.949: 0.1538% ( 5) 00:43:32.838 889.949 - 893.673: 0.1574% ( 3) 00:43:32.838 893.673 - 897.396: 0.1673% ( 8) 00:43:32.838 897.396 - 901.120: 0.1759% ( 7) 00:43:32.838 901.120 - 904.844: 0.1857% ( 8) 00:43:32.838 904.844 - 908.567: 0.1907% ( 4) 00:43:32.838 908.567 - 912.291: 0.1993% ( 7) 00:43:32.838 912.291 - 916.015: 0.2079% ( 7) 00:43:32.838 916.015 - 919.738: 0.2214% ( 11) 00:43:32.838 919.738 - 923.462: 0.2362% ( 12) 00:43:32.838 923.462 - 927.185: 0.2472% ( 9) 00:43:32.838 927.185 - 930.909: 0.2522% ( 4) 00:43:32.838 930.909 - 934.633: 0.2718% ( 16) 00:43:32.838 934.633 - 938.356: 0.2817% ( 8) 00:43:32.838 938.356 - 942.080: 0.2964% ( 12) 00:43:32.838 942.080 - 945.804: 0.3112% ( 12) 00:43:32.838 945.804 - 949.527: 0.3260% ( 12) 00:43:32.838 949.527 - 953.251: 0.3395% ( 11) 00:43:32.838 953.251 - 960.698: 0.3862% ( 38) 00:43:32.838 960.698 - 968.145: 0.4182% ( 26) 00:43:32.838 968.145 - 975.593: 0.4625% ( 36) 00:43:32.838 975.593 - 983.040: 0.4871% ( 20) 00:43:32.838 983.040 - 990.487: 0.5412% ( 44) 00:43:32.838 990.487 - 997.935: 0.5966% ( 45) 00:43:32.838 997.935 - 1005.382: 0.6618% ( 53) 00:43:32.838 1005.382 - 1012.829: 0.7196% ( 47) 00:43:32.838 1012.829 - 1020.276: 0.7823% ( 51) 00:43:32.838 1020.276 - 1027.724: 0.8721% ( 73) 00:43:32.838 1027.724 - 1035.171: 0.9521% ( 65) 00:43:32.838 1035.171 - 1042.618: 1.0505% ( 80) 00:43:32.838 1042.618 - 1050.065: 1.1624% ( 91) 00:43:32.838 1050.065 - 1057.513: 1.2805% ( 96) 00:43:32.838 1057.513 - 1064.960: 1.3973% ( 95) 00:43:32.838 1064.960 - 1072.407: 1.5339% ( 111) 00:43:32.838 1072.407 - 1079.855: 1.6962% ( 132) 00:43:32.838 1079.855 - 1087.302: 1.8414% ( 118) 00:43:32.838 1087.302 - 1094.749: 2.0345% ( 157) 00:43:32.838 1094.749 - 1102.196: 2.2239% ( 154) 00:43:32.838 1102.196 - 1109.644: 2.4441% ( 179) 00:43:32.838 1109.644 - 1117.091: 2.6852% ( 196) 00:43:32.838 1117.091 - 1124.538: 2.9472% ( 213) 00:43:32.838 1124.538 - 1131.985: 3.2289% ( 229) 00:43:32.838 1131.985 - 1139.433: 3.4995% ( 220) 00:43:32.838 1139.433 - 1146.880: 3.7935% ( 239) 00:43:32.838 1146.880 - 1154.327: 4.1391% ( 281) 00:43:32.838 1154.327 - 1161.775: 4.4774% ( 275) 00:43:32.838 1161.775 - 1169.222: 4.8796% ( 327) 00:43:32.838 1169.222 - 1176.669: 5.3298% ( 366) 00:43:32.838 1176.669 - 1184.116: 5.7910% ( 375) 00:43:32.838 1184.116 - 1191.564: 6.3015% ( 415) 00:43:32.838 1191.564 - 1199.011: 6.8021% ( 407) 00:43:32.839 1199.011 - 1206.458: 7.3458% ( 442) 00:43:32.839 1206.458 - 1213.905: 7.8956% ( 447) 00:43:32.839 1213.905 - 1221.353: 8.4762% ( 472) 00:43:32.839 1221.353 - 1228.800: 9.0568% ( 472) 00:43:32.839 1228.800 - 1236.247: 9.6644% ( 494) 00:43:32.839 1236.247 - 1243.695: 10.3914% ( 591) 00:43:32.839 1243.695 - 1251.142: 11.0692% ( 551) 00:43:32.839 1251.142 - 1258.589: 11.8232% ( 613) 00:43:32.839 1258.589 - 1266.036: 12.5304% ( 575) 00:43:32.839 1266.036 - 1273.484: 13.3017% ( 627) 00:43:32.839 1273.484 - 1280.931: 14.1110% ( 658) 00:43:32.839 1280.931 - 1288.378: 14.9782% ( 705) 00:43:32.839 1288.378 - 1295.825: 15.7814% ( 653) 00:43:32.839 1295.825 - 1303.273: 16.6499% ( 706) 00:43:32.839 1303.273 - 1310.720: 17.6007% ( 773) 00:43:32.839 1310.720 - 1318.167: 18.5146% ( 743) 00:43:32.839 1318.167 - 1325.615: 19.5909% ( 875) 00:43:32.839 1325.615 - 1333.062: 20.5417% ( 773) 00:43:32.839 1333.062 - 1340.509: 21.5700% ( 836) 00:43:32.839 1340.509 - 1347.956: 22.6291% ( 861) 00:43:32.839 1347.956 - 1355.404: 23.7177% ( 885) 00:43:32.839 1355.404 - 
00:43:32.839 [nvme_perf latency histogram, range in us vs. cumulative IO count: buckets from 1362.851 us (24.8456% cumulative) up to 10664.495 - 10724.073 us (100.0000%); per-bucket percentages and IO counts omitted here]
00:43:32.840 00:43:32.840 11:55:07 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:43:32.840 00:43:32.840 real 0m2.658s 00:43:32.840 user
0m2.268s 00:43:32.840 sys 0m0.239s 00:43:32.840 11:55:07 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:32.840 ************************************ 00:43:32.840 END TEST nvme_perf 00:43:32.840 11:55:07 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:43:32.840 ************************************ 00:43:32.840 11:55:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:32.840 11:55:07 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:43:32.840 11:55:07 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:43:32.840 11:55:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:32.840 11:55:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:32.840 ************************************ 00:43:32.840 START TEST nvme_hello_world 00:43:32.840 ************************************ 00:43:32.840 11:55:07 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:43:33.097 Initializing NVMe Controllers 00:43:33.097 Attached to 0000:00:10.0 00:43:33.097 Namespace ID: 1 size: 5GB 00:43:33.097 Initialization complete. 00:43:33.097 INFO: using host memory buffer for IO 00:43:33.097 Hello world! 00:43:33.097 00:43:33.097 real 0m0.301s 00:43:33.098 user 0m0.079s 00:43:33.098 sys 0m0.127s 00:43:33.098 11:55:07 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:33.098 11:55:07 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:33.098 ************************************ 00:43:33.098 END TEST nvme_hello_world 00:43:33.098 ************************************ 00:43:33.098 11:55:07 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:33.098 11:55:07 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:43:33.098 11:55:07 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:33.098 11:55:07 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:33.098 11:55:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:33.098 ************************************ 00:43:33.098 START TEST nvme_sgl 00:43:33.098 ************************************ 00:43:33.098 11:55:07 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:43:33.354 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:43:33.354 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:43:33.354 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:43:33.354 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:43:33.354 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:43:33.354 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:43:33.354 NVMe Readv/Writev Request test 00:43:33.354 Attached to 0000:00:10.0 00:43:33.354 0000:00:10.0: build_io_request_2 test passed 00:43:33.354 0000:00:10.0: build_io_request_4 test passed 00:43:33.354 0000:00:10.0: build_io_request_5 test passed 00:43:33.354 0000:00:10.0: build_io_request_6 test passed 00:43:33.354 0000:00:10.0: build_io_request_7 test passed 00:43:33.354 0000:00:10.0: build_io_request_10 test passed 00:43:33.354 Cleaning up... 
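Every test in this section follows the same harness pattern visible in the trace: a run_test call prints the START TEST / END TEST banners, executes the test binary or function, reports real/user/sys timings, and returns its exit status. The following is a minimal sketch of such a wrapper in plain bash; the function name my_run_test and its exact output are illustrative assumptions, not the actual run_test implementation from SPDK's autotest_common.sh.

#!/usr/bin/env bash
# my_run_test: hypothetical, simplified stand-in for the harness's run_test.
my_run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local start=$SECONDS rc=0
    time "$@" || rc=$?        # run the test command; 'time' prints real/user/sys as in this log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    echo "elapsed ~$((SECONDS - start))s, rc=$rc"
    return $rc
}
# Usage mirroring an invocation recorded above:
# my_run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0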
00:43:33.354 00:43:33.354 real 0m0.323s 00:43:33.354 user 0m0.123s 00:43:33.354 sys 0m0.133s 00:43:33.354 11:55:08 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:33.354 ************************************ 00:43:33.354 11:55:08 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:43:33.354 END TEST nvme_sgl 00:43:33.354 ************************************ 00:43:33.611 11:55:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:33.611 11:55:08 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:43:33.611 11:55:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:33.611 11:55:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:33.611 11:55:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:33.611 ************************************ 00:43:33.611 START TEST nvme_e2edp 00:43:33.611 ************************************ 00:43:33.611 11:55:08 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:43:33.869 NVMe Write/Read with End-to-End data protection test 00:43:33.869 Attached to 0000:00:10.0 00:43:33.869 Cleaning up... 00:43:33.869 00:43:33.869 real 0m0.252s 00:43:33.869 user 0m0.088s 00:43:33.869 sys 0m0.092s 00:43:33.869 11:55:08 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:33.869 ************************************ 00:43:33.869 END TEST nvme_e2edp 00:43:33.869 ************************************ 00:43:33.869 11:55:08 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:43:33.869 11:55:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:33.869 11:55:08 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:43:33.869 11:55:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:33.869 11:55:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:33.869 11:55:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:33.869 ************************************ 00:43:33.869 START TEST nvme_reserve 00:43:33.869 ************************************ 00:43:33.869 11:55:08 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:43:34.128 ===================================================== 00:43:34.128 NVMe Controller at PCI bus 0, device 16, function 0 00:43:34.128 ===================================================== 00:43:34.128 Reservations: Not Supported 00:43:34.128 Reservation test passed 00:43:34.128 00:43:34.128 real 0m0.303s 00:43:34.128 user 0m0.082s 00:43:34.128 sys 0m0.139s 00:43:34.128 11:55:08 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:34.128 ************************************ 00:43:34.128 END TEST nvme_reserve 00:43:34.128 ************************************ 00:43:34.128 11:55:08 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:43:34.128 11:55:08 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:34.128 11:55:08 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:43:34.128 11:55:08 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:34.128 11:55:08 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:34.128 11:55:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:34.128 ************************************ 00:43:34.128 START TEST 
nvme_err_injection 00:43:34.128 ************************************ 00:43:34.128 11:55:08 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:43:34.387 NVMe Error Injection test 00:43:34.387 Attached to 0000:00:10.0 00:43:34.387 0000:00:10.0: get features failed as expected 00:43:34.387 0000:00:10.0: get features successfully as expected 00:43:34.387 0000:00:10.0: read failed as expected 00:43:34.387 0000:00:10.0: read successfully as expected 00:43:34.387 Cleaning up... 00:43:34.387 00:43:34.387 real 0m0.262s 00:43:34.387 user 0m0.086s 00:43:34.387 sys 0m0.099s 00:43:34.387 11:55:09 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:34.387 ************************************ 00:43:34.387 END TEST nvme_err_injection 00:43:34.387 ************************************ 00:43:34.387 11:55:09 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:43:34.387 11:55:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:34.387 11:55:09 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:43:34.387 11:55:09 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:43:34.387 11:55:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:34.387 11:55:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:34.387 ************************************ 00:43:34.387 START TEST nvme_overhead 00:43:34.387 ************************************ 00:43:34.387 11:55:09 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:43:35.766 Initializing NVMe Controllers 00:43:35.766 Attached to 0000:00:10.0 00:43:35.766 Initialization complete. Launching workers. 
00:43:35.766 submit (in ns) avg, min, max = 13805.8, 11110.0, 67375.5
00:43:35.766 complete (in ns) avg, min, max = 9378.0, 7391.8, 76750.9
00:43:35.766 Submit histogram (range in us vs. cumulative count): buckets from 11.055 - 11.113 us (0.0122%) up to 67.025 - 67.491 us (100.0000%); per-bucket detail omitted here
00:43:35.767 Complete histogram (range in us vs. cumulative count): buckets from 7.389 - 7.418 us (0.1470%) up to 76.335 - 76.800 us (100.0000%); per-bucket detail omitted here
00:43:35.768 00:43:35.768 00:43:35.768 real 0m1.273s 00:43:35.768 user 0m1.112s 00:43:35.768 sys 0m0.077s 00:43:35.768 11:55:10 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:35.768 11:55:10 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:43:35.768 ************************************ 00:43:35.768 END TEST nvme_overhead 00:43:35.768 ************************************ 00:43:35.768 11:55:10 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:35.768 11:55:10 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:43:35.768 11:55:10 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:43:35.768 11:55:10 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:35.768 11:55:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:35.768 ************************************ 00:43:35.768 START TEST nvme_arbitration 00:43:35.768 ************************************ 00:43:35.768 11:55:10 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:43:39.088 Initializing NVMe Controllers 00:43:39.088 Attached to 0000:00:10.0 00:43:39.088 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:43:39.088 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:43:39.088 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:43:39.088 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:43:39.088 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:43:39.088 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:43:39.088 Initialization complete. Launching workers. 
00:43:39.088 Starting thread on core 1 with urgent priority queue 00:43:39.088 Starting thread on core 2 with urgent priority queue 00:43:39.088 Starting thread on core 3 with urgent priority queue 00:43:39.088 Starting thread on core 0 with urgent priority queue 00:43:39.088 QEMU NVMe Ctrl (12340 ) core 0: 1920.00 IO/s 52.08 secs/100000 ios 00:43:39.088 QEMU NVMe Ctrl (12340 ) core 1: 938.67 IO/s 106.53 secs/100000 ios 00:43:39.088 QEMU NVMe Ctrl (12340 ) core 2: 362.67 IO/s 275.74 secs/100000 ios 00:43:39.088 QEMU NVMe Ctrl (12340 ) core 3: 1173.33 IO/s 85.23 secs/100000 ios 00:43:39.088 ======================================================== 00:43:39.088 00:43:39.088 00:43:39.088 real 0m3.376s 00:43:39.088 user 0m9.241s 00:43:39.088 sys 0m0.144s 00:43:39.088 11:55:13 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:39.088 ************************************ 00:43:39.088 END TEST nvme_arbitration 00:43:39.088 ************************************ 00:43:39.088 11:55:13 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:43:39.345 11:55:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:39.345 11:55:13 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:43:39.345 11:55:13 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:43:39.345 11:55:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:39.345 11:55:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:39.345 ************************************ 00:43:39.345 START TEST nvme_single_aen 00:43:39.345 ************************************ 00:43:39.346 11:55:13 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:43:39.605 Asynchronous Event Request test 00:43:39.605 Attached to 0000:00:10.0 00:43:39.605 Reset controller to setup AER completions for this process 00:43:39.605 Registering asynchronous event callbacks... 00:43:39.605 Getting orig temperature thresholds of all controllers 00:43:39.605 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:43:39.605 Setting all controllers temperature threshold low to trigger AER 00:43:39.605 Waiting for all controllers temperature threshold to be set lower 00:43:39.605 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:43:39.605 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:43:39.605 Waiting for all controllers to trigger AER and reset threshold 00:43:39.605 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:43:39.605 Cleaning up... 
00:43:39.605 00:43:39.605 real 0m0.312s 00:43:39.605 user 0m0.111s 00:43:39.605 sys 0m0.141s 00:43:39.605 11:55:14 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:39.605 ************************************ 00:43:39.605 END TEST nvme_single_aen 00:43:39.605 ************************************ 00:43:39.605 11:55:14 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:43:39.605 11:55:14 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:39.605 11:55:14 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:43:39.605 11:55:14 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:39.605 11:55:14 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:39.605 11:55:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:39.605 ************************************ 00:43:39.605 START TEST nvme_doorbell_aers 00:43:39.605 ************************************ 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:43:39.605 11:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:43:39.606 11:55:14 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:43:39.606 11:55:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:43:39.606 11:55:14 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:43:39.863 [2024-07-13 11:55:14.587731] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173238) is not found. Dropping the request. 00:43:49.829 Executing: test_write_invalid_db 00:43:49.829 Waiting for AER completion... 00:43:49.829 Failure: test_write_invalid_db 00:43:49.829 00:43:49.829 Executing: test_invalid_db_write_overflow_sq 00:43:49.829 Waiting for AER completion... 00:43:49.829 Failure: test_invalid_db_write_overflow_sq 00:43:49.829 00:43:49.829 Executing: test_invalid_db_write_overflow_cq 00:43:49.829 Waiting for AER completion... 
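The doorbell_aers setup traced above builds its device list with get_nvme_bdfs, i.e. scripts/gen_nvme.sh piped through jq -r '.config[].params.traddr' to pull each controller's PCI address. A standalone sketch of that enumeration, assuming the repository path shown in this log, might look like this (the script itself is illustrative, not a file in the SPDK tree):

#!/usr/bin/env bash
# Enumerate NVMe PCI addresses the same way get_nvme_bdfs does in the trace:
# gen_nvme.sh emits a bdev_nvme JSON config, jq extracts every traddr.
rootdir=/home/vagrant/spdk_repo/spdk                 # path as used in this run
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
if (( ${#bdfs[@]} == 0 )); then
    echo "no NVMe controllers found" >&2
    exit 1
fi
printf '%s\n' "${bdfs[@]}"                           # prints 0000:00:10.0 on this CI VM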
00:43:49.829 Failure: test_invalid_db_write_overflow_cq 00:43:49.829 00:43:49.829 00:43:49.829 real 0m10.114s 00:43:49.829 user 0m8.626s 00:43:49.829 sys 0m1.432s 00:43:49.829 11:55:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:49.829 11:55:24 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:43:49.829 ************************************ 00:43:49.829 END TEST nvme_doorbell_aers 00:43:49.829 ************************************ 00:43:49.829 11:55:24 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:49.829 11:55:24 nvme -- nvme/nvme.sh@97 -- # uname 00:43:49.829 11:55:24 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:43:49.829 11:55:24 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:43:49.829 11:55:24 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:43:49.829 11:55:24 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:49.829 11:55:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:49.829 ************************************ 00:43:49.829 START TEST nvme_multi_aen 00:43:49.829 ************************************ 00:43:49.829 11:55:24 nvme.nvme_multi_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:43:50.088 [2024-07-13 11:55:24.649985] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173238) is not found. Dropping the request. 00:43:50.088 [2024-07-13 11:55:24.650141] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173238) is not found. Dropping the request. 00:43:50.088 [2024-07-13 11:55:24.650178] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173238) is not found. Dropping the request. 00:43:50.088 Child process pid: 173438 00:43:50.346 [Child] Asynchronous Event Request test 00:43:50.346 [Child] Attached to 0000:00:10.0 00:43:50.346 [Child] Registering asynchronous event callbacks... 00:43:50.346 [Child] Getting orig temperature thresholds of all controllers 00:43:50.346 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:43:50.346 [Child] Waiting for all controllers to trigger AER and reset threshold 00:43:50.346 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:43:50.346 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:43:50.346 [Child] Cleaning up... 00:43:50.346 Asynchronous Event Request test 00:43:50.346 Attached to 0000:00:10.0 00:43:50.346 Reset controller to setup AER completions for this process 00:43:50.346 Registering asynchronous event callbacks... 00:43:50.346 Getting orig temperature thresholds of all controllers 00:43:50.346 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:43:50.346 Setting all controllers temperature threshold low to trigger AER 00:43:50.346 Waiting for all controllers temperature threshold to be set lower 00:43:50.346 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:43:50.347 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:43:50.347 Waiting for all controllers to trigger AER and reset threshold 00:43:50.347 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:43:50.347 Cleaning up... 
00:43:50.347 00:43:50.347 real 0m0.645s 00:43:50.347 user 0m0.214s 00:43:50.347 sys 0m0.246s 00:43:50.347 11:55:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:50.347 11:55:25 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:43:50.347 ************************************ 00:43:50.347 END TEST nvme_multi_aen 00:43:50.347 ************************************ 00:43:50.347 11:55:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:50.347 11:55:25 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:43:50.347 11:55:25 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:43:50.347 11:55:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:50.347 11:55:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:50.347 ************************************ 00:43:50.347 START TEST nvme_startup 00:43:50.347 ************************************ 00:43:50.347 11:55:25 nvme.nvme_startup -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:43:50.914 Initializing NVMe Controllers 00:43:50.914 Attached to 0000:00:10.0 00:43:50.914 Initialization complete. 00:43:50.914 Time used:191763.578 (us). 00:43:50.914 00:43:50.914 real 0m0.290s 00:43:50.914 user 0m0.108s 00:43:50.914 sys 0m0.117s 00:43:50.914 11:55:25 nvme.nvme_startup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:43:50.914 11:55:25 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:43:50.914 ************************************ 00:43:50.914 END TEST nvme_startup 00:43:50.914 ************************************ 00:43:50.914 11:55:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:43:50.914 11:55:25 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:43:50.914 11:55:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:43:50.914 11:55:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:43:50.914 11:55:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:43:50.914 ************************************ 00:43:50.914 START TEST nvme_multi_secondary 00:43:50.914 ************************************ 00:43:50.914 11:55:25 nvme.nvme_multi_secondary -- common/autotest_common.sh@1123 -- # nvme_multi_secondary 00:43:50.914 11:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=173497 00:43:50.914 11:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:43:50.914 11:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=173498 00:43:50.914 11:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:43:50.914 11:55:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:43:54.200 Initializing NVMe Controllers 00:43:54.200 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:54.200 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:43:54.200 Initialization complete. Launching workers. 
00:43:54.200 ======================================================== 00:43:54.200 Latency(us) 00:43:54.200 Device Information : IOPS MiB/s Average min max 00:43:54.200 PCIE (0000:00:10.0) NSID 1 from core 2: 14064.66 54.94 1137.27 148.37 17102.51 00:43:54.200 ======================================================== 00:43:54.200 Total : 14064.66 54.94 1137.27 148.37 17102.51 00:43:54.200 00:43:54.200 11:55:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 173497 00:43:54.459 Initializing NVMe Controllers 00:43:54.459 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:54.459 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:43:54.459 Initialization complete. Launching workers. 00:43:54.459 ======================================================== 00:43:54.459 Latency(us) 00:43:54.459 Device Information : IOPS MiB/s Average min max 00:43:54.459 PCIE (0000:00:10.0) NSID 1 from core 1: 32493.00 126.93 492.08 108.26 1736.56 00:43:54.459 ======================================================== 00:43:54.459 Total : 32493.00 126.93 492.08 108.26 1736.56 00:43:54.459 00:43:56.363 Initializing NVMe Controllers 00:43:56.363 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:56.363 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:56.363 Initialization complete. Launching workers. 00:43:56.363 ======================================================== 00:43:56.364 Latency(us) 00:43:56.364 Device Information : IOPS MiB/s Average min max 00:43:56.364 PCIE (0000:00:10.0) NSID 1 from core 0: 39403.60 153.92 405.73 112.09 2285.51 00:43:56.364 ======================================================== 00:43:56.364 Total : 39403.60 153.92 405.73 112.09 2285.51 00:43:56.364 00:43:56.364 11:55:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 173498 00:43:56.364 11:55:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=173570 00:43:56.364 11:55:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:43:56.364 11:55:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=173571 00:43:56.364 11:55:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:43:56.364 11:55:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:43:59.671 Initializing NVMe Controllers 00:43:59.671 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:59.671 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:43:59.671 Initialization complete. Launching workers. 00:43:59.671 ======================================================== 00:43:59.671 Latency(us) 00:43:59.671 Device Information : IOPS MiB/s Average min max 00:43:59.671 PCIE (0000:00:10.0) NSID 1 from core 1: 33379.35 130.39 479.00 107.27 2410.70 00:43:59.671 ======================================================== 00:43:59.671 Total : 33379.35 130.39 479.00 107.27 2410.70 00:43:59.671 00:43:59.671 Initializing NVMe Controllers 00:43:59.671 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:43:59.671 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:43:59.671 Initialization complete. Launching workers. 
00:43:59.671 ======================================================== 00:43:59.671 Latency(us) 00:43:59.671 Device Information : IOPS MiB/s Average min max 00:43:59.671 PCIE (0000:00:10.0) NSID 1 from core 0: 33528.63 130.97 476.89 133.72 3342.49 00:43:59.671 ======================================================== 00:43:59.671 Total : 33528.63 130.97 476.89 133.72 3342.49 00:43:59.671 00:44:01.615 Initializing NVMe Controllers 00:44:01.615 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:44:01.615 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:44:01.615 Initialization complete. Launching workers. 00:44:01.615 ======================================================== 00:44:01.615 Latency(us) 00:44:01.615 Device Information : IOPS MiB/s Average min max 00:44:01.615 PCIE (0000:00:10.0) NSID 1 from core 2: 17699.99 69.14 903.30 135.06 20659.56 00:44:01.615 ======================================================== 00:44:01.615 Total : 17699.99 69.14 903.30 135.06 20659.56 00:44:01.615 00:44:01.615 11:55:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 173570 00:44:01.615 11:55:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 173571 00:44:01.615 00:44:01.615 real 0m10.918s 00:44:01.615 user 0m18.605s 00:44:01.615 sys 0m0.830s 00:44:01.615 ************************************ 00:44:01.615 11:55:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:01.615 11:55:36 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:44:01.615 END TEST nvme_multi_secondary 00:44:01.615 ************************************ 00:44:01.887 11:55:36 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:01.887 11:55:36 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:44:01.887 11:55:36 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:44:01.887 11:55:36 nvme -- common/autotest_common.sh@1087 -- # [[ -e /proc/172793 ]] 00:44:01.887 11:55:36 nvme -- common/autotest_common.sh@1088 -- # kill 172793 00:44:01.887 11:55:36 nvme -- common/autotest_common.sh@1089 -- # wait 172793 00:44:01.887 [2024-07-13 11:55:36.385111] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173437) is not found. Dropping the request. 00:44:01.887 [2024-07-13 11:55:36.385339] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173437) is not found. Dropping the request. 00:44:01.887 [2024-07-13 11:55:36.385397] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173437) is not found. Dropping the request. 00:44:01.887 [2024-07-13 11:55:36.385485] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 173437) is not found. Dropping the request. 
00:44:01.887 11:55:36 nvme -- common/autotest_common.sh@1091 -- # rm -f /var/run/spdk_stub0 00:44:01.887 11:55:36 nvme -- common/autotest_common.sh@1095 -- # echo 2 00:44:01.887 11:55:36 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:44:01.887 11:55:36 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:01.887 11:55:36 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:01.887 11:55:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:02.146 ************************************ 00:44:02.146 START TEST bdev_nvme_reset_stuck_adm_cmd 00:44:02.146 ************************************ 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:44:02.146 * Looking for test storage... 00:44:02.146 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=173731 00:44:02.146 11:55:36 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 173731 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 173731 ']' 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:02.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:02.146 11:55:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:02.146 [2024-07-13 11:55:36.853539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:44:02.146 [2024-07-13 11:55:36.853710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173731 ] 00:44:02.405 [2024-07-13 11:55:37.045994] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:02.663 [2024-07-13 11:55:37.298490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:02.663 [2024-07-13 11:55:37.298638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:44:02.663 [2024-07-13 11:55:37.298736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:02.663 [2024-07-13 11:55:37.298736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:03.598 nvme0n1 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_byGh0.txt 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:03.598 true 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720871738 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=173766 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:44:03.598 11:55:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:05.496 [2024-07-13 11:55:40.146884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:44:05.496 [2024-07-13 11:55:40.147836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:44:05.496 [2024-07-13 11:55:40.148001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:44:05.496 [2024-07-13 11:55:40.148143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:44:05.496 [2024-07-13 11:55:40.150362] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
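The trace above is the heart of the stuck-admin-command test: an error injection arms the next admin opcode 10 (0x0a, Get Features) to be held rather than submitted, bdev_nvme_send_cmd then issues that command and blocks, and bdev_nvme_reset_controller completes it manually with the injected status. A condensed sketch of the RPC sequence, built only from commands visible in the trace; $GET_FEATURES_B64 stands in for the base64 payload shown above and $tmp_file for /tmp/err_inj_byGh0.txt.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # hold the next admin opcode 10 (Get Features) for up to 15 s and complete it
  # with SCT=0 / SC=1 (Invalid Opcode) instead of submitting it to the device
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$GET_FEATURES_B64" > "$tmp_file" &
  sleep 2
  $RPC bdev_nvme_reset_controller nvme0   # the reset completes the pending admin command
  wait                                    # send_cmd returns with the injected completion

The jq/base64_decode_bits steps later in the trace then decode SC and SCT back out of the returned .cpl blob and compare them against the injected 0x1/0x0.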
00:44:05.496 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 173766 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 173766 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 173766 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_byGh0.txt 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:44:05.496 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_byGh0.txt 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 173731 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 173731 ']' 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 173731 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 173731 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:05.756 killing process with pid 173731 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 173731' 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 173731 00:44:05.756 11:55:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 173731 00:44:07.661 11:55:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:44:07.661 11:55:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:44:07.661 00:44:07.661 real 0m5.618s 00:44:07.661 user 0m19.530s 00:44:07.661 sys 0m0.644s 00:44:07.661 11:55:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:07.661 11:55:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:44:07.661 ************************************ 00:44:07.661 END TEST bdev_nvme_reset_stuck_adm_cmd 00:44:07.661 ************************************ 00:44:07.661 11:55:42 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:07.661 11:55:42 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:44:07.661 11:55:42 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:44:07.661 11:55:42 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:07.661 11:55:42 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:07.661 11:55:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:07.661 ************************************ 00:44:07.661 START TEST nvme_fio 00:44:07.661 ************************************ 00:44:07.661 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:44:07.661 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:44:07.661 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:44:07.661 11:55:42 nvme.nvme_fio -- 
nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:44:07.661 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:44:07.661 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:44:07.661 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:44:07.661 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:07.661 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:07.661 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:44:07.661 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:44:07.661 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:44:07.661 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:44:07.661 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:44:07.661 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:44:07.661 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:44:07.919 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:44:07.919 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:44:08.177 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:44:08.177 11:55:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:44:08.177 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:44:08.435 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:44:08.435 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:44:08.435 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:44:08.435 11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:44:08.435 
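The LD_PRELOAD string assembled above is what lets a stock fio binary drive the device through SPDK: the ASAN runtime is listed ahead of the ASAN-instrumented spdk_nvme ioengine so its symbols resolve when fio (which is not instrumented itself) loads the plugin, and the PCIe address is passed as the fio "filename" with ':' rewritten to '.' because fio treats ':' as a separator. Laid out as a single command, the invocation the trace runs next is equivalent to the sketch below (bs=4096 was chosen earlier because the identify output showed no extended-LBA format).

  # ioengine=spdk comes from example_config.fio; the "filename" selects the controller by BDF
  LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096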
11:55:42 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:44:08.435 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:44:08.435 fio-3.35 00:44:08.435 Starting 1 thread 00:44:11.720 00:44:11.720 test: (groupid=0, jobs=1): err= 0: pid=173933: Sat Jul 13 11:55:45 2024 00:44:11.720 read: IOPS=15.0k, BW=58.5MiB/s (61.3MB/s)(117MiB/2001msec) 00:44:11.720 slat (nsec): min=4032, max=85425, avg=6652.95, stdev=4968.66 00:44:11.720 clat (usec): min=363, max=9625, avg=4250.65, stdev=430.30 00:44:11.720 lat (usec): min=368, max=9711, avg=4257.31, stdev=430.75 00:44:11.720 clat percentiles (usec): 00:44:11.720 | 1.00th=[ 3294], 5.00th=[ 3490], 10.00th=[ 3687], 20.00th=[ 3982], 00:44:11.720 | 30.00th=[ 4113], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:44:11.720 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4817], 00:44:11.720 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7177], 99.95th=[ 8586], 00:44:11.720 | 99.99th=[ 9634] 00:44:11.720 bw ( KiB/s): min=57824, max=58552, per=97.17%, avg=58184.00, stdev=364.07, samples=3 00:44:11.720 iops : min=14456, max=14638, avg=14546.00, stdev=91.02, samples=3 00:44:11.720 write: IOPS=15.0k, BW=58.5MiB/s (61.3MB/s)(117MiB/2001msec); 0 zone resets 00:44:11.720 slat (nsec): min=4116, max=65701, avg=7009.17, stdev=5115.17 00:44:11.720 clat (usec): min=208, max=9531, avg=4266.33, stdev=434.69 00:44:11.720 lat (usec): min=228, max=9563, avg=4273.34, stdev=435.05 00:44:11.720 clat percentiles (usec): 00:44:11.720 | 1.00th=[ 3326], 5.00th=[ 3523], 10.00th=[ 3720], 20.00th=[ 4015], 00:44:11.720 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:44:11.720 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4817], 00:44:11.720 | 99.00th=[ 5080], 99.50th=[ 5800], 99.90th=[ 7439], 99.95th=[ 8717], 00:44:11.720 | 99.99th=[ 9503] 00:44:11.720 bw ( KiB/s): min=57296, max=58872, per=96.91%, avg=58050.67, stdev=790.11, samples=3 00:44:11.720 iops : min=14324, max=14718, avg=14512.67, stdev=197.53, samples=3 00:44:11.720 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:44:11.720 lat (msec) : 2=0.06%, 4=19.81%, 10=80.09% 00:44:11.720 cpu : usr=100.00%, sys=0.00%, ctx=2, majf=0, minf=37 00:44:11.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:44:11.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:11.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:44:11.720 issued rwts: total=29954,29965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:11.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:44:11.720 00:44:11.720 Run status group 0 (all jobs): 00:44:11.720 READ: bw=58.5MiB/s (61.3MB/s), 58.5MiB/s-58.5MiB/s (61.3MB/s-61.3MB/s), io=117MiB (123MB), run=2001-2001msec 00:44:11.720 WRITE: bw=58.5MiB/s (61.3MB/s), 58.5MiB/s-58.5MiB/s (61.3MB/s-61.3MB/s), io=117MiB (123MB), run=2001-2001msec 00:44:11.720 ----------------------------------------------------- 00:44:11.720 Suppressions used: 00:44:11.720 count bytes template 00:44:11.720 1 32 /usr/src/fio/parse.c 00:44:11.720 ----------------------------------------------------- 00:44:11.720 00:44:11.720 11:55:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:44:11.720 11:55:46 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:44:11.720 00:44:11.720 real 0m4.043s 00:44:11.720 
user 0m3.336s 00:44:11.720 sys 0m0.386s 00:44:11.720 11:55:46 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:11.720 ************************************ 00:44:11.720 END TEST nvme_fio 00:44:11.720 ************************************ 00:44:11.720 11:55:46 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:44:11.720 11:55:46 nvme -- common/autotest_common.sh@1142 -- # return 0 00:44:11.720 00:44:11.720 real 0m46.469s 00:44:11.720 user 2m4.585s 00:44:11.720 sys 0m8.045s 00:44:11.720 11:55:46 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:11.720 11:55:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:44:11.720 ************************************ 00:44:11.720 END TEST nvme 00:44:11.720 ************************************ 00:44:11.720 11:55:46 -- common/autotest_common.sh@1142 -- # return 0 00:44:11.720 11:55:46 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:44:11.720 11:55:46 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:44:11.720 11:55:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:11.720 11:55:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:11.720 11:55:46 -- common/autotest_common.sh@10 -- # set +x 00:44:11.720 ************************************ 00:44:11.720 START TEST nvme_scc 00:44:11.720 ************************************ 00:44:11.720 11:55:46 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:44:11.979 * Looking for test storage... 00:44:11.980 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:11.980 11:55:46 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:11.980 11:55:46 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:11.980 11:55:46 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:11.980 11:55:46 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:11.980 11:55:46 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:11.980 11:55:46 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:11.980 11:55:46 nvme_scc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:11.980 11:55:46 nvme_scc -- paths/export.sh@5 -- # export PATH 00:44:11.980 11:55:46 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:44:11.980 11:55:46 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:44:11.980 11:55:46 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:11.980 11:55:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:44:11.980 11:55:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:44:11.980 11:55:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:44:11.980 11:55:46 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:12.239 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:12.239 Waiting for block devices as requested 00:44:12.239 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:12.239 11:55:46 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:44:12.239 11:55:46 nvme_scc -- scripts/common.sh@15 -- # local i 00:44:12.239 11:55:46 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:44:12.239 11:55:46 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:44:12.239 11:55:46 nvme_scc -- scripts/common.sh@24 -- # return 0 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@18 -- # shift 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.239 11:55:46 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.239 11:55:46 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:44:12.500 11:55:46 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:44:12.500 11:55:46 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.500 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:44:12.501 11:55:46 
nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.501 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:44:12.502 
11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:46 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 
00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.502 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:44:12.503 11:55:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0n1[ncap]="0x140000"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:44:12.503 
11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.503 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:44:12.504 11:55:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:44:12.504 
11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:44:12.504 11:55:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:44:12.504 11:55:47 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@192 -- # local ctrl 
feature=scc 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0 00:44:12.505 11:55:47 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:44:12.505 11:55:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:44:12.505 11:55:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:44:12.505 11:55:47 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:12.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:13.022 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:13.957 11:55:48 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:44:13.957 11:55:48 nvme_scc -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:44:13.957 11:55:48 nvme_scc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:13.957 11:55:48 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:44:13.957 ************************************ 00:44:13.957 START TEST nvme_simple_copy 00:44:13.957 ************************************ 00:44:13.958 11:55:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:44:14.216 Initializing NVMe Controllers 00:44:14.216 Attaching to 0000:00:10.0 00:44:14.216 Controller supports SCC. Attached to 0000:00:10.0 00:44:14.216 Namespace ID: 1 size: 5GB 00:44:14.216 Initialization complete. 
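The long trace above is nvme/functions.sh walking every "name : value" pair emitted by nvme-cli's id-ctrl/id-ns output, storing each register in a bash associative array (nvme0[...], nvme0n1[...]), and then selecting this controller for the SCC test because ONCS (0x15d) has bit 8 set. A minimal standalone sketch of the same idea, assuming the usual "name : value" layout of nvme id-ctrl output; this is not the SPDK helper itself:

  declare -A ctrl
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue                 # keep only "name : value" lines
      reg=$(tr -d '[:space:]' <<<"$reg")        # field names are space-padded
      val=$(awk '{print $1}' <<<"$val")         # first token is enough for numeric registers
      ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0 2>/dev/null)

  oncs=${ctrl[oncs]:-0}                         # 0x15d in the run above
  if (( oncs & 1 << 8 )); then                  # ONCS bit 8 = Copy command supported
      echo "/dev/nvme0 supports the Simple Copy command"
  fi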
00:44:14.216 00:44:14.216 Controller QEMU NVMe Ctrl (12340 ) 00:44:14.216 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:44:14.216 Namespace Block Size:4096 00:44:14.216 Writing LBAs 0 to 63 with Random Data 00:44:14.216 Copied LBAs from 0 - 63 to the Destination LBA 256 00:44:14.216 LBAs matching Written Data: 64 00:44:14.216 00:44:14.216 real 0m0.329s 00:44:14.216 user 0m0.137s 00:44:14.216 sys 0m0.095s 00:44:14.216 11:55:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:14.216 11:55:48 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:44:14.216 ************************************ 00:44:14.216 END TEST nvme_simple_copy 00:44:14.216 ************************************ 00:44:14.475 11:55:48 nvme_scc -- common/autotest_common.sh@1142 -- # return 0 00:44:14.475 ************************************ 00:44:14.475 END TEST nvme_scc 00:44:14.475 ************************************ 00:44:14.475 00:44:14.475 real 0m2.537s 00:44:14.475 user 0m0.737s 00:44:14.475 sys 0m1.635s 00:44:14.475 11:55:48 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:14.475 11:55:48 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:44:14.475 11:55:49 -- common/autotest_common.sh@1142 -- # return 0 00:44:14.475 11:55:49 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:44:14.475 11:55:49 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:44:14.475 11:55:49 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:44:14.475 11:55:49 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:44:14.475 11:55:49 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:44:14.475 11:55:49 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:44:14.475 11:55:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:14.475 11:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:14.475 11:55:49 -- common/autotest_common.sh@10 -- # set +x 00:44:14.475 ************************************ 00:44:14.475 START TEST nvme_rpc 00:44:14.475 ************************************ 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:44:14.475 * Looking for test storage... 
00:44:14.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:14.475 11:55:49 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:14.475 11:55:49 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:44:14.475 11:55:49 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:44:14.475 11:55:49 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=174412 00:44:14.475 11:55:49 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:44:14.475 11:55:49 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:44:14.475 11:55:49 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 174412 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 174412 ']' 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:14.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:14.475 11:55:49 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:14.734 [2024-07-13 11:55:49.277197] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
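For reference, the get_first_nvme_bdf trace just above reduces to one pipeline: scripts/gen_nvme.sh prints a JSON bdev config and jq extracts every PCIe transport address from it. A condensed restatement of what is traced, with the paths taken from this log rather than a new helper:

  rootdir=/home/vagrant/spdk_repo/spdk                        # as used in this run
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  printf 'first NVMe bdf: %s\n' "${bdfs[0]}"                  # 0000:00:10.0 here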
00:44:14.734 [2024-07-13 11:55:49.277406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174412 ] 00:44:14.734 [2024-07-13 11:55:49.456692] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:14.992 [2024-07-13 11:55:49.632990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:14.993 [2024-07-13 11:55:49.632989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:15.559 11:55:50 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:15.559 11:55:50 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:44:15.559 11:55:50 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:44:16.127 Nvme0n1 00:44:16.127 11:55:50 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:44:16.127 11:55:50 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:44:16.127 request: 00:44:16.127 { 00:44:16.127 "bdev_name": "Nvme0n1", 00:44:16.127 "filename": "non_existing_file", 00:44:16.127 "method": "bdev_nvme_apply_firmware", 00:44:16.127 "req_id": 1 00:44:16.127 } 00:44:16.127 Got JSON-RPC error response 00:44:16.127 response: 00:44:16.127 { 00:44:16.127 "code": -32603, 00:44:16.127 "message": "open file failed." 00:44:16.127 } 00:44:16.127 11:55:50 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:44:16.127 11:55:50 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:44:16.127 11:55:50 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:44:16.386 11:55:50 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:44:16.386 11:55:50 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 174412 00:44:16.386 11:55:50 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 174412 ']' 00:44:16.386 11:55:50 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 174412 00:44:16.386 11:55:50 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:44:16.386 11:55:50 nvme_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:16.386 11:55:50 nvme_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174412 00:44:16.386 11:55:51 nvme_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:16.386 11:55:51 nvme_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:16.386 11:55:51 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 174412' 00:44:16.386 killing process with pid 174412 00:44:16.386 11:55:51 nvme_rpc -- common/autotest_common.sh@967 -- # kill 174412 00:44:16.386 11:55:51 nvme_rpc -- common/autotest_common.sh@972 -- # wait 174412 00:44:18.288 00:44:18.288 real 0m3.661s 00:44:18.288 user 0m6.955s 00:44:18.288 sys 0m0.547s 00:44:18.288 11:55:52 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:18.288 ************************************ 00:44:18.288 END TEST nvme_rpc 00:44:18.288 ************************************ 00:44:18.288 11:55:52 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:44:18.288 11:55:52 -- common/autotest_common.sh@1142 -- # return 0 00:44:18.288 11:55:52 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts 
/home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:44:18.288 11:55:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:18.288 11:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:18.288 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:44:18.288 ************************************ 00:44:18.288 START TEST nvme_rpc_timeouts 00:44:18.288 ************************************ 00:44:18.288 11:55:52 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:44:18.288 * Looking for test storage... 00:44:18.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:18.288 11:55:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:44:18.288 11:55:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_174511 00:44:18.288 11:55:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_174511 00:44:18.288 11:55:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=174540 00:44:18.288 11:55:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:44:18.288 11:55:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 174540 00:44:18.288 11:55:52 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 174540 ']' 00:44:18.288 11:55:52 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:18.288 11:55:52 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:18.288 11:55:52 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:18.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:18.288 11:55:52 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:18.288 11:55:52 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:44:18.288 11:55:52 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:44:18.288 [2024-07-13 11:55:52.903944] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
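The nvme_rpc_timeouts test arms a cleanup trap before doing anything else (the @26 trap line above), so an interrupted run still kills the target and removes its temp files, and it disarms the trap only after the checks pass. A small sketch of that idiom, with hypothetical variable names mirroring the trace:

  spdk_tgt_pid=""
  tmp_default=$(mktemp); tmp_modified=$(mktemp)
  trap '[[ -z $spdk_tgt_pid ]] || kill -9 "$spdk_tgt_pid"; rm -f "$tmp_default" "$tmp_modified"; exit 1' \
      SIGINT SIGTERM EXIT
  # ... start the target, record its pid in spdk_tgt_pid, run the checks ...
  trap - SIGINT SIGTERM EXIT          # reached only on success: disarm the cleanup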
00:44:18.288 [2024-07-13 11:55:52.904314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174540 ] 00:44:18.546 [2024-07-13 11:55:53.063553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:18.546 [2024-07-13 11:55:53.249515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:18.546 [2024-07-13 11:55:53.249513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:19.481 11:55:53 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:19.481 11:55:53 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:44:19.481 11:55:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:44:19.481 Checking default timeout settings: 00:44:19.481 11:55:53 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:44:19.739 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:44:19.739 Making settings changes with rpc: 00:44:19.739 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:44:19.996 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:44:19.996 Check default vs. modified settings: 00:44:19.996 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_174511 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_174511 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:44:20.254 Setting action_on_timeout is changed as expected. 
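The "changed as expected" messages come from a simple before/after comparison: both snapshots are written by rpc.py save_config, and each setting is grepped out, reduced to its value with awk, stripped of punctuation with sed, and required to differ once bdev_nvme_set_options has run. A condensed sketch of that loop, with the file names taken from this run:

  tmp_default=/tmp/settings_default_174511      # save_config output before the RPC
  tmp_modified=/tmp/settings_modified_174511    # save_config output after the RPC
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" "$tmp_default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" "$tmp_modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ $before != "$after" ]] || { echo "ERROR: $setting did not change" >&2; exit 1; }
      echo "Setting $setting is changed as expected."
  done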
00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_174511 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_174511 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:44:20.254 Setting timeout_us is changed as expected. 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_174511 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_174511 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:44:20.254 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:44:20.255 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:44:20.255 Setting timeout_admin_us is changed as expected. 
00:44:20.255 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:44:20.255 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_174511 /tmp/settings_modified_174511 00:44:20.255 11:55:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 174540 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 174540 ']' 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 174540 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 174540 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 174540' 00:44:20.255 killing process with pid 174540 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 174540 00:44:20.255 11:55:54 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 174540 00:44:22.154 RPC TIMEOUT SETTING TEST PASSED. 00:44:22.154 11:55:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:44:22.412 00:44:22.412 real 0m4.149s 00:44:22.412 user 0m7.956s 00:44:22.412 sys 0m0.638s 00:44:22.412 ************************************ 00:44:22.412 END TEST nvme_rpc_timeouts 00:44:22.412 11:55:56 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:44:22.412 11:55:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:44:22.412 ************************************ 00:44:22.412 11:55:56 -- common/autotest_common.sh@1142 -- # return 0 00:44:22.412 11:55:56 -- spdk/autotest.sh@243 -- # uname -s 00:44:22.412 11:55:56 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:44:22.412 11:55:56 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:44:22.412 11:55:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:44:22.412 11:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:44:22.412 11:55:56 -- common/autotest_common.sh@10 -- # set +x 00:44:22.412 ************************************ 00:44:22.412 START TEST sw_hotplug 00:44:22.412 ************************************ 00:44:22.412 11:55:56 sw_hotplug -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:44:22.412 * Looking for test storage... 
00:44:22.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:44:22.412 11:55:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:22.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:22.670 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:44:24.044 11:55:58 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:44:24.044 11:55:58 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:44:24.044 11:55:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:44:24.044 11:55:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@230 -- # local class 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:44:24.044 11:55:58 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@15 -- # local i 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@325 
-- # (( 1 )) 00:44:24.045 11:55:58 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:44:24.045 11:55:58 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1 00:44:24.045 11:55:58 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:44:24.045 11:55:58 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:44:24.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:24.303 Waiting for block devices as requested 00:44:24.561 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:44:24.561 11:55:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0 00:44:24.561 11:55:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:44:24.819 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:44:24.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:44:25.077 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:44:26.014 11:56:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=175118 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:44:26.014 11:56:00 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:44:26.014 11:56:00 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:44:26.014 11:56:00 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:44:26.014 11:56:00 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:44:26.014 11:56:00 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 false 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:44:26.014 11:56:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:44:26.272 Initializing NVMe Controllers 00:44:26.272 Attaching to 0000:00:10.0 00:44:26.272 Attached to 0000:00:10.0 00:44:26.272 Initialization complete. Starting I/O... 
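The scripts/common.sh trace at the start of this test is the NVMe discovery step: controllers are located by PCI class code, class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVM Express), i.e. class string 0108 with prog-if 02 in lspci's machine-readable output. A compact equivalent of that scan, using the same lspci/awk filter as traced, wrapped for readability:

  # NVMe = PCI class 01, subclass 08, prog-if 02
  mapfile -t nvmes < <(lspci -mm -n -D | grep -i -- -p02 |
      awk -v cc='"0108"' '{ if (cc ~ $2) print $1 }' | tr -d '"')
  (( ${#nvmes[@]} )) && printf 'NVMe controller: %s\n' "${nvmes[@]}"   # 0000:00:10.0 here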
00:44:26.272 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:44:26.272 00:44:27.207 QEMU NVMe Ctrl (12340 ): 2404 I/Os completed (+2404) 00:44:27.207 00:44:28.621 QEMU NVMe Ctrl (12340 ): 5364 I/Os completed (+2960) 00:44:28.621 00:44:29.555 QEMU NVMe Ctrl (12340 ): 9080 I/Os completed (+3716) 00:44:29.555 00:44:30.516 QEMU NVMe Ctrl (12340 ): 12736 I/Os completed (+3656) 00:44:30.516 00:44:31.452 QEMU NVMe Ctrl (12340 ): 16372 I/Os completed (+3636) 00:44:31.452 00:44:32.020 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:44:32.020 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:44:32.020 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:44:32.020 [2024-07-13 11:56:06.714839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:44:32.020 Controller removed: QEMU NVMe Ctrl (12340 ) 00:44:32.020 [2024-07-13 11:56:06.716415] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:32.020 [2024-07-13 11:56:06.716481] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:32.020 [2024-07-13 11:56:06.716513] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:32.020 [2024-07-13 11:56:06.716538] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:32.020 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:44:32.020 [2024-07-13 11:56:06.721915] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:32.020 [2024-07-13 11:56:06.721967] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:32.020 [2024-07-13 11:56:06.721991] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:32.020 [2024-07-13 11:56:06.722024] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:32.020 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:44:32.020 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:44:32.020 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:44:32.020 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:44:32.020 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:44:32.278 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:44:32.278 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:44:32.279 11:56:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:44:32.279 Attaching to 0000:00:10.0 00:44:32.279 Attached to 0000:00:10.0 00:44:32.279 QEMU NVMe Ctrl (12340 ): 121 I/Os completed (+121) 00:44:32.279 00:44:33.214 QEMU NVMe Ctrl (12340 ): 3345 I/Os completed (+3224) 00:44:33.214 00:44:34.592 QEMU NVMe Ctrl (12340 ): 6429 I/Os completed (+3084) 00:44:34.592 00:44:35.528 QEMU NVMe Ctrl (12340 ): 9849 I/Os completed (+3420) 00:44:35.528 00:44:36.464 QEMU NVMe Ctrl (12340 ): 13343 I/Os completed (+3494) 00:44:36.464 00:44:37.400 QEMU NVMe Ctrl (12340 ): 16692 I/Os completed (+3349) 00:44:37.400 00:44:38.337 11:56:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:44:38.337 11:56:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:44:38.337 11:56:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:44:38.337 11:56:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:44:38.337 [2024-07-13 11:56:12.898071] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:44:38.337 Controller removed: QEMU NVMe Ctrl (12340 ) 00:44:38.337 [2024-07-13 11:56:12.899361] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:38.337 [2024-07-13 11:56:12.899433] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:38.337 [2024-07-13 11:56:12.899459] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:38.337 [2024-07-13 11:56:12.899482] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:38.337 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:44:38.337 [2024-07-13 11:56:12.904307] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:38.337 [2024-07-13 11:56:12.904352] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:38.337 [2024-07-13 11:56:12.904373] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:38.337 [2024-07-13 11:56:12.904390] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:38.337 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/device 00:44:38.337 EAL: Scan for (pci) bus failed. 00:44:38.337 11:56:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:44:38.337 11:56:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:44:38.337 00:44:38.337 11:56:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:44:38.337 11:56:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:44:38.337 11:56:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:44:38.337 11:56:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:44:38.337 11:56:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:44:38.337 11:56:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:44:38.337 Attaching to 0000:00:10.0 00:44:38.337 Attached to 0000:00:10.0 00:44:39.271 QEMU NVMe Ctrl (12340 ): 3105 I/Os completed (+3105) 00:44:39.271 00:44:40.206 QEMU NVMe Ctrl (12340 ): 6735 I/Os completed (+3630) 00:44:40.206 00:44:41.582 QEMU NVMe Ctrl (12340 ): 10319 I/Os completed (+3584) 00:44:41.582 00:44:42.517 QEMU NVMe Ctrl (12340 ): 13927 I/Os completed (+3608) 00:44:42.517 00:44:43.453 QEMU NVMe Ctrl (12340 ): 17543 I/Os completed (+3616) 00:44:43.453 00:44:44.389 QEMU NVMe Ctrl (12340 ): 21199 I/Os completed (+3656) 00:44:44.389 00:44:44.389 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:44:44.389 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:44:44.389 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:44:44.389 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:44:44.389 [2024-07-13 11:56:19.079268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
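The 24.54 s figure reported at the end of this pass comes from the timing wrapper set up when the helper was launched: bash's time keyword honors TIMEFORMAT, and %2R makes the wall-clock seconds the only thing it prints. The wrapper body below is an assumption (the suite's timing_cmd also preserves the wrapped command's output through an extra fd, which this sketch drops), and a plain sleep stands in for remove_attach_helper so the sketch runs on its own:

#!/usr/bin/env bash
timing_cmd() {
    local TIMEFORMAT=%2R
    # `time` writes the %2R-formatted wall clock to the group's stderr;
    # discard the wrapped command's own output so only the measurement
    # reaches stdout and can be captured by the caller.
    { time "$@" > /dev/null 2>&1; } 2>&1
}

# In the suite this wraps `remove_attach_helper 3 6 false`; sleep is a stand-in.
helper_time=$(timing_cmd sleep 1.2)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 1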
00:44:44.389 Controller removed: QEMU NVMe Ctrl (12340 ) 00:44:44.389 [2024-07-13 11:56:19.080749] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:44.389 [2024-07-13 11:56:19.080920] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:44.389 [2024-07-13 11:56:19.081050] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:44.389 [2024-07-13 11:56:19.081101] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:44.389 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:44:44.389 [2024-07-13 11:56:19.086046] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:44.389 [2024-07-13 11:56:19.086200] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:44.389 [2024-07-13 11:56:19.086252] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:44.389 [2024-07-13 11:56:19.086361] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:44:44.389 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:44:44.389 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:44:44.389 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:44:44.389 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:44:44.389 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:44:44.647 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:44:44.647 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:44:44.647 11:56:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:44:44.647 Attaching to 0000:00:10.0 00:44:44.647 Attached to 0000:00:10.0 00:44:44.647 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:44:44.647 [2024-07-13 11:56:19.263415] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:44:51.206 11:56:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:44:51.207 11:56:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:44:51.207 11:56:25 sw_hotplug -- common/autotest_common.sh@715 -- # time=24.54 00:44:51.207 11:56:25 sw_hotplug -- common/autotest_common.sh@716 -- # echo 24.54 00:44:51.207 11:56:25 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:44:51.207 11:56:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.54 00:44:51.207 11:56:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.54 1 00:44:51.207 remove_attach_helper took 24.54s to complete (handling 1 nvme drive(s)) 11:56:25 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:44:57.769 11:56:31 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 175118 00:44:57.769 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (175118) - No such process 00:44:57.769 11:56:31 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 175118 00:44:57.769 11:56:31 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:44:57.769 11:56:31 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:44:57.769 11:56:31 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:44:57.769 11:56:31 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=175494 00:44:57.769 11:56:31 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 
'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:44:57.769 11:56:31 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:44:57.769 11:56:31 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 175494 00:44:57.769 11:56:31 sw_hotplug -- common/autotest_common.sh@829 -- # '[' -z 175494 ']' 00:44:57.769 11:56:31 sw_hotplug -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:57.769 11:56:31 sw_hotplug -- common/autotest_common.sh@834 -- # local max_retries=100 00:44:57.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:57.769 11:56:31 sw_hotplug -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:57.769 11:56:31 sw_hotplug -- common/autotest_common.sh@838 -- # xtrace_disable 00:44:57.769 11:56:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:44:57.769 [2024-07-13 11:56:31.358594] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:44:57.769 [2024-07-13 11:56:31.358861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175494 ] 00:44:57.769 [2024-07-13 11:56:31.534215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:57.769 [2024-07-13 11:56:31.763479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@862 -- # return 0 00:44:57.769 11:56:32 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:44:57.769 11:56:32 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:44:57.769 11:56:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:44:57.769 11:56:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:44:57.769 11:56:32 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:44:57.769 11:56:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:44:57.769 11:56:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:44:57.769 11:56:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:44:57.769 11:56:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:44:57.769 11:56:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:04.327 11:56:38 
sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:04.327 11:56:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:04.327 11:56:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:04.327 11:56:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:04.327 11:56:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:04.586 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:04.586 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:04.586 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:04.586 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:04.586 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:04.586 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:04.586 11:56:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:04.586 11:56:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:04.586 11:56:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:04.586 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:04.586 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:05.154 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:05.154 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:05.154 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:05.154 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:05.154 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:05.154 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:05.154 11:56:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.154 11:56:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:05.154 11:56:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:05.154 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:05.154 11:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:05.721 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:05.722 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:05.722 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:05.722 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:05.722 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:05.722 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:05.722 11:56:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:05.722 11:56:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:05.722 11:56:40 sw_hotplug -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:45:05.722 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:05.722 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:06.288 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:06.288 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:06.288 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:06.288 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:06.288 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:06.288 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:06.288 11:56:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.288 11:56:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:06.288 11:56:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.288 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:06.288 11:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:06.855 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:06.855 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:06.855 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:06.855 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:06.855 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:06.855 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:06.855 11:56:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:06.855 11:56:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:06.855 11:56:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:06.855 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:06.855 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:07.423 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:07.423 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:07.423 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:07.423 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:07.423 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:07.423 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:07.423 11:56:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.423 11:56:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:07.423 11:56:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.423 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:07.423 11:56:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:07.990 11:56:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:07.990 11:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:07.990 11:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:07.990 11:56:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:07.990 11:56:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:07.990 11:56:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:45:07.990 11:56:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:07.990 11:56:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:07.990 11:56:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:07.990 11:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:07.990 11:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:08.583 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:08.583 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:08.583 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:08.583 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:08.583 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:08.583 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:08.583 11:56:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:08.583 11:56:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:08.583 11:56:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:08.583 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:08.583 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:09.173 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:09.173 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:09.173 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:09.173 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:09.173 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:09.173 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:09.173 11:56:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.173 11:56:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:09.173 11:56:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.173 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:09.173 11:56:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:09.431 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:09.431 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:09.431 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:09.431 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:09.431 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:09.431 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:09.431 11:56:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:09.431 11:56:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:09.431 11:56:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:09.690 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:09.690 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:09.690 [2024-07-13 11:56:44.297160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
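The target-based pass now polling above was set up by starting spdk_tgt, waiting for its RPC socket, and enabling the bdev_nvme hotplug monitor so a removed controller drops out of bdev_get_bdevs. The sketch below mirrors the traced commands but swaps the suite's rpc_cmd/waitforlisten/killprocess helpers for rpc.py, a socket-existence poll, and plain kill, and assumes the default /var/tmp/spdk.sock socket:

#!/usr/bin/env bash
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
# Same cleanup as the traced trap: stop the target and rescan the PCI bus.
trap 'kill "$spdk_tgt_pid"; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT

# Approximate waitforlisten: wait until the RPC socket exists.
while [[ ! -S /var/tmp/spdk.sock ]]; do
    sleep 0.1
done

rpc.py bdev_nvme_set_hotplug -e    # enable the bdev_nvme hotplug monitor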
00:45:09.690 [2024-07-13 11:56:44.298756] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:09.690 [2024-07-13 11:56:44.298833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:45:09.690 [2024-07-13 11:56:44.298888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:09.690 [2024-07-13 11:56:44.298927] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:09.690 [2024-07-13 11:56:44.298948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:45:09.690 [2024-07-13 11:56:44.298969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:09.690 [2024-07-13 11:56:44.299034] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:09.690 [2024-07-13 11:56:44.299075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:45:09.690 [2024-07-13 11:56:44.299093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:09.690 [2024-07-13 11:56:44.299115] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:09.690 [2024-07-13 11:56:44.299136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:45:09.690 [2024-07-13 11:56:44.299157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:10.259 11:56:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:10.259 11:56:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:10.259 11:56:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:10.259 11:56:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:16.815 11:56:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:45:16.815 11:56:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:45:16.815 11:56:50 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:45:16.815 11:56:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:16.815 11:56:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:16.815 11:56:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:16.815 11:56:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:16.815 11:56:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:16.815 11:56:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:16.815 11:56:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:16.815 11:56:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:16.815 11:56:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:16.815 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:17.073 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:17.073 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:17.073 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:17.073 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:17.073 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:17.073 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:17.073 11:56:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:17.073 11:56:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:17.073 11:56:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:17.073 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:17.073 11:56:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:17.639 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:17.639 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:17.639 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:17.639 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:17.639 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:17.639 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:17.639 11:56:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:17.639 11:56:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:17.639 11:56:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:45:17.639 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:17.639 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:18.206 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:18.207 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:18.207 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:18.207 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:18.207 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:18.207 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:18.207 11:56:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.207 11:56:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:18.207 11:56:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.207 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:18.207 11:56:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:18.771 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:18.771 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:18.771 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:18.771 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:18.771 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:18.771 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:18.771 11:56:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:18.771 11:56:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:18.771 11:56:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:18.771 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:18.771 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:19.338 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:19.338 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:19.338 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:19.338 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:19.338 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:19.338 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:19.338 11:56:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.338 11:56:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:19.338 11:56:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.338 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:19.338 11:56:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:19.904 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:19.904 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:19.904 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:19.904 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:19.904 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:19.904 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
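The loop being traced here (bdev_bdfs, jq over bdev_get_bdevs, sort -u, printf, sleep 0.5) waits for the removed controller's PCI address to disappear from the target's bdev list. A self-contained sketch of that wait, with rpc.py standing in for the suite's rpc_cmd wrapper and a pipe in place of its process substitution:

#!/usr/bin/env bash
# List the PCI addresses backing the target's NVMe bdevs.
bdev_bdfs() {
    rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# Poll every 0.5 s until no NVMe bdev is left, printing the traced message.
wait_for_gone() {
    local bdf=$1
    local bdfs
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "$bdf"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
}

wait_for_gone 0000:00:10.0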
00:45:19.904 11:56:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:19.904 11:56:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:19.904 11:56:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:19.904 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:19.904 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:20.470 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:20.470 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:20.470 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:20.470 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:20.470 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:20.470 11:56:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:20.470 11:56:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:20.470 11:56:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:20.470 11:56:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:20.470 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:20.470 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:21.037 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:21.037 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:21.037 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:21.037 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:21.037 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:21.037 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:21.037 11:56:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.037 11:56:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:21.037 11:56:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.037 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:21.037 11:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:21.605 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:21.605 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:21.605 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:21.605 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:21.605 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:21.605 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:21.605 11:56:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:21.605 11:56:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:21.605 11:56:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:21.605 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:21.605 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:22.173 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:22.173 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:22.173 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:22.173 11:56:56 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:45:22.173 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:22.173 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:22.173 11:56:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.173 11:56:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:22.173 11:56:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.173 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:22.173 11:56:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:22.740 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:22.740 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:22.740 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:22.740 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:22.740 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:22.740 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:22.740 11:56:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:22.740 11:56:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:22.740 11:56:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:22.740 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:22.740 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:22.999 [2024-07-13 11:56:57.697303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:45:22.999 [2024-07-13 11:56:57.698803] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:22.999 [2024-07-13 11:56:57.698878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:45:22.999 [2024-07-13 11:56:57.698908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:22.999 [2024-07-13 11:56:57.698945] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:22.999 [2024-07-13 11:56:57.698966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:45:22.999 [2024-07-13 11:56:57.699009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:22.999 [2024-07-13 11:56:57.699029] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:22.999 [2024-07-13 11:56:57.699061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:45:22.999 [2024-07-13 11:56:57.699088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:22.999 [2024-07-13 11:56:57.699123] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:22.999 [2024-07-13 11:56:57.699147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:45:22.999 [2024-07-13 11:56:57.699187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:23.258 11:56:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:23.258 11:56:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:23.258 11:56:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:23.258 11:56:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:29.821 11:57:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:45:29.822 11:57:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:45:29.822 11:57:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:45:29.822 11:57:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:29.822 11:57:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:29.822 11:57:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:29.822 11:57:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:29.822 11:57:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:29.822 11:57:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:29.822 11:57:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:29.822 11:57:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:29.822 11:57:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:29.822 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:30.080 
11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:30.080 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:30.080 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:30.080 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:30.080 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:30.080 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:30.080 11:57:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:30.080 11:57:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:30.080 11:57:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:30.080 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:30.080 11:57:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:30.647 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:30.648 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:30.648 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:30.648 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:30.648 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:30.648 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:30.648 11:57:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:30.648 11:57:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:30.648 11:57:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:30.648 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:30.648 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:31.215 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:31.215 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:31.215 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:31.215 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:31.215 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:31.215 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:31.215 11:57:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.215 11:57:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:31.215 11:57:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.215 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:31.215 11:57:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:31.783 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:31.783 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:31.783 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:31.783 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:31.783 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:31.783 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:31.783 11:57:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:31.783 11:57:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 
00:45:31.783 11:57:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:31.783 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:31.783 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:32.351 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:32.351 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:32.351 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:32.351 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:32.351 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:32.351 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:32.351 11:57:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.351 11:57:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:32.351 11:57:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.351 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:32.351 11:57:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:32.918 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:32.918 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:32.918 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:32.918 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:32.918 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:32.918 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:32.918 11:57:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:32.918 11:57:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:32.918 11:57:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:32.918 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:32.918 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:33.485 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:33.485 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:33.485 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:33.485 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:33.485 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:33.485 11:57:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:33.485 11:57:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:33.485 11:57:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:33.485 11:57:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:33.485 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:33.485 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:34.052 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:34.052 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:34.052 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:34.052 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:34.052 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
00:45:34.052 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:34.052 11:57:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:34.052 11:57:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:34.052 11:57:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:34.052 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:34.052 11:57:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:34.619 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:34.619 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:34.619 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:34.619 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:34.620 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:34.620 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:34.620 11:57:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:34.620 11:57:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:34.620 11:57:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:34.620 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:34.620 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:34.913 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:34.913 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:34.913 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:35.172 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:35.172 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:35.172 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:35.172 11:57:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.172 11:57:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:35.172 11:57:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.172 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:35.172 11:57:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:35.739 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:35.739 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:35.739 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:35.740 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:35.740 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:35.740 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:35.740 11:57:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:35.740 11:57:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:35.740 11:57:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:35.740 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:35.740 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:36.306 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:36.306 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:36.306 11:57:10 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:36.306 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:36.306 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:36.306 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:36.306 11:57:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.306 11:57:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:36.306 11:57:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.306 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:36.306 11:57:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:36.872 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:36.873 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:36.873 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:36.873 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:36.873 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:36.873 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:36.873 11:57:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:36.873 11:57:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:36.873 11:57:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:36.873 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:36.873 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:37.437 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:37.437 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:37.437 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:37.437 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:37.437 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:37.437 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:37.437 11:57:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:37.437 11:57:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:37.437 11:57:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:37.437 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:37.438 11:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:38.004 11:57:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:38.004 11:57:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:38.004 11:57:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:38.004 11:57:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:38.004 11:57:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:38.004 11:57:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:38.004 11:57:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.004 11:57:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:38.004 11:57:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.004 11:57:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:38.004 11:57:12 sw_hotplug -- nvme/sw_hotplug.sh@50 
-- # sleep 0.5 00:45:38.570 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:38.570 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:38.570 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:38.570 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:38.570 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:38.570 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:38.570 11:57:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:38.570 11:57:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:38.570 11:57:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:38.570 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:38.570 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:39.136 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:39.136 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:39.136 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:39.136 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:39.136 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:39.136 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:39.136 11:57:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.136 11:57:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:39.136 11:57:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.136 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:39.136 11:57:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:39.702 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:39.702 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:39.702 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:39.702 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:39.702 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:39.702 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:39.702 11:57:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:39.702 11:57:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:39.702 11:57:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:39.702 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:39.702 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:40.268 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:40.268 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:40.268 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:40.268 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:40.268 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:40.268 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:40.268 11:57:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:40.268 11:57:14 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:45:40.268 11:57:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:40.268 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:40.268 11:57:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:40.887 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:40.887 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:40.887 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:40.887 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:40.887 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:40.887 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:40.887 11:57:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:40.887 11:57:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:40.887 11:57:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:40.887 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:40.887 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:41.154 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:41.154 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:41.154 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:41.154 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:41.154 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:41.154 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:41.154 11:57:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.154 11:57:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:41.154 11:57:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.413 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:41.413 11:57:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:41.671 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:41.671 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:41.671 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:41.671 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:41.671 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:41.671 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:41.671 11:57:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:41.671 11:57:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:41.930 11:57:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:41.930 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:41.930 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:42.497 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:42.497 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:42.497 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:42.497 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:42.497 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:42.497 11:57:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:42.497 11:57:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:42.497 11:57:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:42.497 11:57:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:42.497 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:42.497 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:43.064 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:43.064 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:43.064 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:43.064 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:43.064 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:43.064 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:43.064 11:57:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:43.064 11:57:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:43.064 11:57:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:43.064 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:43.064 11:57:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:43.631 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:43.631 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:43.631 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:43.631 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:43.631 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:43.631 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:43.631 11:57:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:43.631 11:57:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:43.631 11:57:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:43.631 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:43.631 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:44.197 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:44.197 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:44.197 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:44.197 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:44.197 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:44.197 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:44.197 11:57:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.197 11:57:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:44.197 11:57:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.197 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:44.197 11:57:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:44.764 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:44.764 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@50 
-- # bdfs=($(bdev_bdfs)) 00:45:44.764 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:44.764 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:44.764 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:44.764 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:44.764 11:57:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:44.764 11:57:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:44.764 11:57:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:44.764 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:44.764 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:45.332 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:45.332 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:45.332 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:45.332 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:45.332 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:45.332 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:45.332 11:57:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.332 11:57:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:45.332 11:57:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:45.332 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:45.332 11:57:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:45.590 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:45.590 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:45.590 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:45.848 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:45.848 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:45.848 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:45.848 11:57:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:45.848 11:57:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:45.848 11:57:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:45.848 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:45.848 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:46.416 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:46.416 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:46.416 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:46.416 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:46.416 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:46.416 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:46.416 11:57:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:46.416 11:57:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:46.416 11:57:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:46.416 11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:46.416 
11:57:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:46.982 11:57:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:46.982 11:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:46.982 11:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:46.982 11:57:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:46.982 11:57:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:46.982 11:57:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:46.982 11:57:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:46.982 11:57:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:46.982 11:57:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:46.982 11:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:46.982 11:57:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:47.551 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:47.551 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:47.551 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:47.551 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:47.551 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:47.551 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:47.551 11:57:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:47.551 11:57:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:47.551 11:57:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:47.551 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:47.551 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:48.121 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:48.121 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:48.121 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:48.121 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:48.121 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:48.121 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:48.121 11:57:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:48.121 11:57:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:48.121 11:57:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:48.121 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:45:48.121 11:57:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:45:48.121 [2024-07-13 11:57:22.697702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
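The block of near-identical entries above is the test's removal wait loop: sw_hotplug.sh polls the SPDK RPC bdev list roughly twice a second until the unplugged device's PCI address (0000:00:10.0) no longer shows up, at which point the controller is reported "in failed state" and the trackers are aborted as logged below. The following is a minimal sketch of that loop, reconstructed only from the xtrace commands visible in this log (it is not the verbatim sw_hotplug.sh source; rpc_cmd is the test framework's RPC wrapper that appears in the trace):

    # List the PCI addresses (BDFs) backing the currently attached NVMe bdevs.
    # The trace shows jq reading the RPC output via /dev/fd/63, i.e. process
    # substitution around "rpc_cmd bdev_get_bdevs".
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # Poll until the removed device's BDF disappears from the bdev list.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        # Report which device(s) the test is still waiting on, then re-check.
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

Once the loop exits, the trace shows the test rebinding the device (the echo of uio_pci_generic and 0000:00:10.0), sleeping, and then timing the whole cycle via remove_attach_helper.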
00:45:48.121 [2024-07-13 11:57:22.699193] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:48.121 [2024-07-13 11:57:22.699249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:45:48.121 [2024-07-13 11:57:22.699281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:48.121 [2024-07-13 11:57:22.699315] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:48.121 [2024-07-13 11:57:22.699357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:45:48.121 [2024-07-13 11:57:22.699382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:48.121 [2024-07-13 11:57:22.699402] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:48.121 [2024-07-13 11:57:22.699428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:45:48.121 [2024-07-13 11:57:22.699447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:48.121 [2024-07-13 11:57:22.699471] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:45:48.121 [2024-07-13 11:57:22.699499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:45:48.121 [2024-07-13 11:57:22.699522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:48.686 11:57:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:48.686 11:57:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:48.686 11:57:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:45:48.686 11:57:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:45:55.247 11:57:29 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@715 -- # time=56.92 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@716 -- # echo 56.92 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=56.92 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 56.92 1 00:45:55.247 remove_attach_helper took 56.92s to complete (handling 1 nvme drive(s)) 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@705 -- # local cmd_es=0 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@707 -- # [[ -t 0 ]] 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@707 -- # exec 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@709 -- # local time=0 TIMEFORMAT=%2R 00:45:55.247 11:57:29 sw_hotplug -- common/autotest_common.sh@715 -- # remove_attach_helper 3 6 true 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:45:55.247 11:57:29 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:01.803 11:57:35 sw_hotplug 
-- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:01.803 11:57:35 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:01.803 11:57:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:01.803 11:57:35 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:01.803 11:57:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:01.803 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:01.803 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:01.803 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:01.803 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:01.803 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:01.803 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:01.803 11:57:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:01.803 11:57:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:01.803 11:57:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:01.803 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:01.803 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:02.062 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:02.062 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:02.062 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:02.062 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:02.062 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:02.062 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:02.062 11:57:36 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:02.062 11:57:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:02.062 11:57:36 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:02.062 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:02.062 11:57:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:02.629 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:02.629 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:02.629 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:02.629 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:02.629 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:02.629 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:02.629 11:57:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:02.629 11:57:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:02.629 11:57:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:02.629 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:02.629 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 
0.5 00:46:03.195 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:03.195 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:03.195 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:03.195 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:03.195 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:03.195 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:03.195 11:57:37 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.195 11:57:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:03.195 11:57:37 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.195 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:03.195 11:57:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:03.762 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:03.762 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:03.762 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:03.762 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:03.762 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:03.763 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:03.763 11:57:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:03.763 11:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:03.763 11:57:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:03.763 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:03.763 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:04.331 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:04.331 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:04.331 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:04.331 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:04.331 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:04.331 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:04.331 11:57:38 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:04.331 11:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:04.331 11:57:38 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:04.331 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:04.331 11:57:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:04.900 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:04.900 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:04.900 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:04.900 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:04.900 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:04.900 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:04.900 11:57:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:04.900 11:57:39 sw_hotplug -- common/autotest_common.sh@10 -- 
# set +x 00:46:04.900 11:57:39 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:04.900 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:04.900 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:05.468 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:05.468 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:05.468 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:05.468 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:05.468 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:05.468 11:57:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:05.468 11:57:39 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:05.468 11:57:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:05.468 11:57:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:05.468 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:05.468 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:06.035 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:06.035 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:06.035 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:06.035 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:06.035 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:06.035 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:06.035 11:57:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:06.035 11:57:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:06.036 11:57:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:06.036 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:06.036 11:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:06.603 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:06.603 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:06.603 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:06.603 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:06.603 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:06.603 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:06.603 11:57:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:06.603 11:57:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:06.603 11:57:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:06.603 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:06.603 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:07.170 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:07.170 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:07.170 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:07.170 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:07.170 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
00:46:07.170 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:07.170 11:57:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:07.170 11:57:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:07.170 11:57:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:07.170 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:07.170 11:57:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:07.737 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:07.737 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:07.737 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:07.737 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:07.737 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:07.737 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:07.737 11:57:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:07.737 11:57:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:07.737 11:57:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:07.737 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:07.737 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:08.302 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:08.302 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:08.302 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:08.302 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:08.302 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:08.302 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:08.302 11:57:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:08.302 11:57:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:08.302 11:57:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:08.302 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:08.302 11:57:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:08.865 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:08.866 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:08.866 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:08.866 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:08.866 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:08.866 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:08.866 11:57:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:08.866 11:57:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:08.866 11:57:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:08.866 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:08.866 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:09.431 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:09.431 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:09.431 11:57:43 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:09.431 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:09.431 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:09.431 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:09.431 11:57:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:09.431 11:57:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:09.431 11:57:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:09.431 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:09.431 11:57:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:09.996 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:09.996 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:09.996 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:09.996 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:09.996 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:09.996 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:09.996 11:57:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:09.996 11:57:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:09.996 11:57:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:09.996 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:09.996 11:57:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:10.562 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:10.562 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:10.562 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:10.562 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:10.562 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:10.562 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:10.562 11:57:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:10.562 11:57:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:10.562 11:57:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:10.562 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:10.562 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:11.128 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:11.128 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:11.128 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:11.128 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:11.128 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:11.128 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:11.128 11:57:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:11.128 11:57:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:11.128 11:57:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:11.128 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:11.128 11:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 
-- # sleep 0.5 00:46:11.694 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:11.694 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:11.694 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:11.694 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:11.694 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:11.694 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:11.694 11:57:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:11.694 11:57:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:11.694 11:57:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:11.694 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:11.694 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:12.261 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:12.261 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:12.261 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:12.261 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:12.261 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:12.261 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:12.261 11:57:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:12.261 11:57:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:12.261 11:57:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:12.261 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:12.261 11:57:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:12.829 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:12.829 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:12.829 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:12.829 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:12.829 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:12.829 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:12.829 11:57:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:12.829 11:57:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:12.829 11:57:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:12.829 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:12.829 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:13.396 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:13.396 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:13.396 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:13.396 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:13.396 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:13.396 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:13.396 11:57:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:13.396 11:57:47 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:46:13.396 11:57:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:13.397 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:13.397 11:57:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:14.046 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:14.046 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:14.046 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:14.046 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:14.046 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:14.046 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:14.046 11:57:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:14.046 11:57:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:14.046 11:57:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:14.046 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:14.046 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:14.315 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:14.315 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:14.315 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:14.315 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:14.315 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:14.315 11:57:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:14.315 11:57:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:14.315 11:57:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:14.315 11:57:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:14.315 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:14.315 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:14.882 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:14.882 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:14.882 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:14.882 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:14.882 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:14.882 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:14.882 11:57:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:14.882 11:57:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:14.882 11:57:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:14.882 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:14.882 11:57:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:15.449 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:15.449 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:15.449 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:15.449 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:15.449 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r 
'.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:15.449 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:15.449 11:57:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:15.449 11:57:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:15.449 11:57:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:15.449 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:15.449 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:16.016 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:16.016 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:16.016 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:16.016 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:16.016 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:16.016 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:16.016 11:57:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:16.016 11:57:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:16.016 11:57:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:16.016 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:16.016 11:57:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:16.583 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:16.583 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:16.583 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:16.583 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:16.583 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:16.583 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:16.583 11:57:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:16.583 11:57:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:16.583 11:57:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:16.583 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:16.583 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:17.151 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:17.151 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:17.151 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:17.151 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:17.151 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:17.151 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:17.151 11:57:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:17.151 11:57:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:17.151 11:57:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:17.151 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:17.151 11:57:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:17.719 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:17.719 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 
-- # bdfs=($(bdev_bdfs)) 00:46:17.719 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:17.719 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:17.719 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:17.719 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:17.719 11:57:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:17.719 11:57:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:17.719 11:57:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:17.719 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:17.719 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:18.284 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:18.284 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:18.284 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:18.284 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:18.284 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:18.284 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:18.284 11:57:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:18.284 11:57:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:18.284 11:57:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:18.284 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:18.284 11:57:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:18.850 11:57:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:18.850 11:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:18.850 11:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:18.850 11:57:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:18.850 11:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:18.850 11:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:18.850 11:57:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:18.850 11:57:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:18.850 11:57:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:18.850 11:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:18.850 11:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:19.417 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:19.417 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:19.417 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:19.417 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:19.417 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:19.417 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:19.417 11:57:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:19.417 11:57:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:19.417 11:57:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:19.417 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:19.417 
11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:19.984 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:19.984 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:19.984 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:19.984 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:19.984 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:19.984 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:19.984 11:57:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:19.984 11:57:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:19.984 11:57:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:19.984 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:19.984 11:57:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:20.548 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:20.548 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:20.548 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:20.548 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:20.548 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:20.548 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:20.548 11:57:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:20.548 11:57:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:20.548 11:57:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:20.548 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:20.548 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:21.115 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:21.115 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:21.115 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:21.115 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:21.115 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:21.115 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:21.115 11:57:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:21.115 11:57:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:21.115 11:57:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:21.115 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:21.115 11:57:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:21.681 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:21.681 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:21.681 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:21.681 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:21.681 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:21.681 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:21.681 11:57:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 
00:46:21.682 11:57:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:21.682 11:57:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:21.682 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:21.682 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:22.248 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:22.248 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:22.248 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:22.248 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:22.248 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:22.248 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:22.248 11:57:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:22.248 11:57:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:22.248 11:57:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:22.248 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:22.248 11:57:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:22.814 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:22.814 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:22.814 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:22.814 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:22.814 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:22.814 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:22.814 11:57:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:22.814 11:57:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:22.814 11:57:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:22.814 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:22.814 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:23.381 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:23.381 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:23.381 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:23.381 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:23.381 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:23.381 11:57:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:23.381 11:57:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:23.381 11:57:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:23.381 11:57:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:23.381 11:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:23.381 11:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:23.948 11:57:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:23.948 11:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:23.948 11:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:23.948 11:57:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:23.948 11:57:58 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:23.948 11:57:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:23.948 11:57:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:23.948 11:57:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:23.948 11:57:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:23.948 11:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:23.948 11:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:24.206 [2024-07-13 11:57:58.941352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:46:24.206 [2024-07-13 11:57:58.942685] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:24.206 [2024-07-13 11:57:58.942844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:46:24.206 [2024-07-13 11:57:58.942968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:24.206 [2024-07-13 11:57:58.943113] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:24.206 [2024-07-13 11:57:58.943163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:46:24.206 [2024-07-13 11:57:58.943268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:24.206 [2024-07-13 11:57:58.943327] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:24.206 [2024-07-13 11:57:58.943436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:46:24.206 [2024-07-13 11:57:58.943489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:24.206 [2024-07-13 11:57:58.943592] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:46:24.206 [2024-07-13 11:57:58.943638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:46:24.206 [2024-07-13 11:57:58.943671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:24.465 11:57:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:24.465 11:57:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:24.465 11:57:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:46:24.465 11:57:59 sw_hotplug -- 
nvme/sw_hotplug.sh@56 -- # echo 1 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:46:24.465 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:46:24.723 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:46:24.723 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:46:24.723 11:57:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:31.283 11:58:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.283 11:58:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:31.283 11:58:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:31.283 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:31.283 11:58:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.284 11:58:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:31.284 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:31.284 11:58:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.284 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:31.284 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:31.284 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:31.284 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:31.284 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:31.284 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:31.284 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:31.284 11:58:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:31.284 11:58:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.284 11:58:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:31.284 11:58:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:31.542 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:31.542 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:31.801 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to 
be gone\n' 0000:00:10.0 00:46:31.801 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:31.801 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:31.801 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:31.801 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:31.801 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:31.801 11:58:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:31.801 11:58:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:32.060 11:58:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:32.060 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:32.060 11:58:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:32.628 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:32.628 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:32.628 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:32.628 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:32.628 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:32.628 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:32.628 11:58:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:32.628 11:58:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:32.628 11:58:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:32.628 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:32.628 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:33.195 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:33.195 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:33.195 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:33.195 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:33.195 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:33.195 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:33.195 11:58:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.195 11:58:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:33.195 11:58:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:33.195 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:33.195 11:58:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:33.762 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:33.762 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:33.762 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:33.762 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:33.762 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:33.762 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:33.762 11:58:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:33.762 11:58:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:33.762 11:58:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:46:33.762 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:33.762 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:34.332 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:34.332 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:34.332 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:34.332 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:34.332 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:34.332 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:34.332 11:58:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:34.332 11:58:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:34.332 11:58:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:34.332 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:34.332 11:58:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:34.899 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:34.899 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:34.899 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:34.899 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:34.899 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:34.899 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:34.899 11:58:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:34.899 11:58:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:34.899 11:58:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:34.899 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:34.899 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:35.466 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:35.466 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:35.466 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:35.466 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:35.466 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:35.466 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:35.466 11:58:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:35.466 11:58:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:35.466 11:58:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:35.466 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:35.466 11:58:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:35.726 11:58:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:35.726 11:58:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:35.726 11:58:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:35.726 11:58:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:35.726 11:58:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:35.726 11:58:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:46:35.984 11:58:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:35.985 11:58:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:35.985 11:58:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:35.985 11:58:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:35.985 11:58:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:36.552 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:36.552 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:36.552 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:36.552 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:36.552 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:36.552 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:36.552 11:58:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:36.552 11:58:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:36.552 11:58:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:36.552 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:36.552 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:37.120 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:37.120 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:37.120 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:37.120 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:37.120 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:37.120 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:37.120 11:58:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:37.120 11:58:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:37.120 11:58:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:37.121 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:37.121 11:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:37.695 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:37.695 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:37.695 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:37.695 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:37.695 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:37.695 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:37.695 11:58:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:37.695 11:58:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:37.695 11:58:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:37.695 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:37.695 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:38.261 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:38.261 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:38.261 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:38.261 11:58:12 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:46:38.261 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:38.261 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:38.261 11:58:12 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:38.261 11:58:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:38.261 11:58:12 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:38.261 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:38.261 11:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:38.825 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:38.825 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:38.825 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:38.825 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:38.825 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:38.825 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:38.825 11:58:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:38.825 11:58:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:38.825 11:58:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:38.825 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:38.825 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:39.391 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:39.391 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:39.391 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:39.391 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:39.391 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:39.391 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:39.391 11:58:13 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.391 11:58:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:39.391 11:58:13 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.391 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:39.391 11:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:39.957 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:39.957 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:39.957 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:39.957 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:39.957 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:39.957 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:39.957 11:58:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:39.957 11:58:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:39.957 11:58:14 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:39.957 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:39.957 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:40.550 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 
'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:40.550 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:40.550 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:40.550 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:40.550 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:40.550 11:58:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:40.550 11:58:14 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.550 11:58:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:40.550 11:58:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:40.550 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:40.550 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:40.809 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:40.809 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:40.809 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:40.809 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:40.809 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:40.809 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:40.809 11:58:15 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:40.809 11:58:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:41.067 11:58:15 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:41.067 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:41.067 11:58:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:41.633 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:41.633 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:41.633 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:41.633 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:41.633 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:41.633 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:41.633 11:58:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:41.633 11:58:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:41.633 11:58:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:41.633 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:41.633 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:42.198 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:42.198 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:42.198 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:42.198 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:42.198 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:42.198 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:42.198 11:58:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:42.198 11:58:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:42.198 11:58:16 sw_hotplug -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:42.198 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:42.198 11:58:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:42.775 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:42.775 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:42.775 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:42.775 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:42.775 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:42.775 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:42.775 11:58:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:42.775 11:58:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:42.775 11:58:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:42.775 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:42.775 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:43.349 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:43.349 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:43.349 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:43.349 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:43.349 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:43.349 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:43.349 11:58:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:43.349 11:58:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:43.349 11:58:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:43.349 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:43.349 11:58:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:43.914 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:43.914 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:43.914 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:43.914 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:43.914 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:43.914 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:43.914 11:58:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:43.914 11:58:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:43.914 11:58:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:43.914 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:43.914 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:44.172 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:44.172 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:44.172 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:44.172 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:44.172 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:44.172 11:58:18 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:44.172 11:58:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:44.172 11:58:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:44.430 11:58:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:44.430 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:44.430 11:58:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:44.997 11:58:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:44.997 11:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:44.997 11:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:44.997 11:58:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:44.997 11:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:44.997 11:58:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:44.997 11:58:19 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:44.997 11:58:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:44.997 11:58:19 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:44.997 11:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:44.997 11:58:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:45.564 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:45.564 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:45.564 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:45.564 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:45.564 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:45.564 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:45.564 11:58:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:45.564 11:58:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:45.564 11:58:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:45.564 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:45.564 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:46.131 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:46.131 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:46.131 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:46.131 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:46.131 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:46.131 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:46.131 11:58:20 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:46.131 11:58:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:46.131 11:58:20 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:46.131 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:46.131 11:58:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:46.698 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:46.698 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:46.698 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
bdev_bdfs 00:46:46.698 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:46.698 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:46.698 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:46.698 11:58:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:46.698 11:58:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:46.698 11:58:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:46.698 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:46.698 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:47.264 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:47.264 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:47.264 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:47.264 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:47.264 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:47.264 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:47.264 11:58:21 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:47.264 11:58:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:47.264 11:58:21 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:47.264 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:47.264 11:58:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:47.830 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:47.830 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:47.830 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:47.830 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:47.830 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:47.830 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:47.830 11:58:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:47.830 11:58:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:47.830 11:58:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:47.830 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:47.830 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:48.396 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:48.396 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:48.396 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:48.396 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:48.396 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:48.396 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:48.396 11:58:22 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:48.396 11:58:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:48.396 11:58:22 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:48.396 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:48.396 11:58:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:48.654 11:58:23 
sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:48.654 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:48.654 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:48.912 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:48.912 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:48.912 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:48.912 11:58:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:48.912 11:58:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:48.912 11:58:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:48.912 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:48.912 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:49.478 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:49.478 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:49.478 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:49.478 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:49.478 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:49.478 11:58:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:49.478 11:58:23 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:49.478 11:58:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:49.478 11:58:23 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:49.478 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:49.478 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:50.046 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:50.046 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:50.046 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:50.046 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:50.046 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:50.046 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:50.046 11:58:24 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:50.046 11:58:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:50.046 11:58:24 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:50.046 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:50.046 11:58:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:50.614 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:50.614 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:50.614 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:50.614 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:50.614 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:50.614 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:50.614 11:58:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:50.614 11:58:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:50.614 
11:58:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:50.614 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:50.614 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:51.181 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:51.181 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:51.181 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:51.181 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:51.181 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:51.181 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:51.181 11:58:25 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:51.181 11:58:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:51.181 11:58:25 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:51.181 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:51.181 11:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:51.749 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:51.749 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:51.749 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:51.749 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:51.749 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:51.749 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:51.749 11:58:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:51.749 11:58:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:51.749 11:58:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:51.749 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:51.749 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:52.318 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:52.318 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:52.318 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:52.318 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:52.318 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:52.318 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:52.318 11:58:26 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:52.318 11:58:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:52.318 11:58:26 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:52.318 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:52.318 11:58:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:52.886 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:52.886 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:52.886 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:52.886 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:52.886 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:52.886 11:58:27 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:52.886 11:58:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:52.886 11:58:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:52.886 11:58:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:52.886 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:52.886 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:53.145 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:53.145 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:53.145 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:53.403 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:53.404 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:53.404 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:53.404 11:58:27 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:53.404 11:58:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:53.404 11:58:27 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:53.404 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:53.404 11:58:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:53.971 11:58:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:53.971 11:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:53.971 11:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:53.971 11:58:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:53.971 11:58:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:53.971 11:58:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:53.971 11:58:28 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:53.971 11:58:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:53.971 11:58:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:53.971 11:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:53.971 11:58:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:54.539 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:54.539 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:54.539 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:54.539 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:54.540 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:54.540 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:54.540 11:58:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:54.540 11:58:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:54.540 11:58:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:54.540 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:54.540 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:55.107 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:55.107 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:55.107 11:58:29 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:55.107 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:55.107 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:55.107 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:55.107 11:58:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:55.107 11:58:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:55.107 11:58:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:55.107 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:55.107 11:58:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:55.673 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:55.673 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:55.673 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:55.673 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:55.673 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:55.673 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:55.673 11:58:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:55.673 11:58:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:55.673 11:58:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:55.673 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:55.673 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:56.241 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:56.241 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:56.241 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:56.241 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:56.241 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:56.241 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:56.241 11:58:30 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:56.241 11:58:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:56.241 11:58:30 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:56.241 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:56.241 11:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:56.809 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:56.809 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:56.809 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:56.809 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:56.809 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:56.809 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:56.809 11:58:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:56.809 11:58:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:56.809 11:58:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:56.809 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:56.809 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 
00:46:57.377 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:57.377 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:57.377 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:57.377 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:57.377 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:57.377 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:57.377 11:58:31 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:57.377 11:58:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:57.377 11:58:31 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:57.377 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:57.377 11:58:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:57.945 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:57.945 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:57.945 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:57.945 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:57.946 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:57.946 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:57.946 11:58:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:57.946 11:58:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:57.946 11:58:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:57.946 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:57.946 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:58.220 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:58.220 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:58.220 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:58.220 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:58.220 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:58.220 11:58:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:58.221 11:58:32 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:58.221 11:58:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:58.479 11:58:32 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:58.479 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:58.479 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:59.045 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:59.045 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:59.045 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:59.045 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:59.045 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:59.045 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:59.045 11:58:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:59.045 11:58:33 sw_hotplug -- common/autotest_common.sh@10 -- # 
set +x 00:46:59.045 11:58:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:59.045 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:59.045 11:58:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:46:59.623 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:46:59.623 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:46:59.623 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:46:59.623 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:46:59.623 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:46:59.623 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:46:59.623 11:58:34 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:46:59.623 11:58:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:46:59.623 11:58:34 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:46:59.623 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:46:59.623 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:00.189 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:00.189 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:00.189 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:00.189 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:00.189 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:00.189 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:00.189 11:58:34 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:00.189 11:58:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:00.189 11:58:34 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:00.189 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:00.189 11:58:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:00.447 [2024-07-13 11:58:35.141912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:47:00.447 [2024-07-13 11:58:35.145385] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:00.447 [2024-07-13 11:58:35.145760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:47:00.447 [2024-07-13 11:58:35.146023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:00.447 [2024-07-13 11:58:35.146269] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:00.447 [2024-07-13 11:58:35.146503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:47:00.447 [2024-07-13 11:58:35.146730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:00.447 [2024-07-13 11:58:35.146997] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:00.447 [2024-07-13 11:58:35.147116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:47:00.447 [2024-07-13 11:58:35.147331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:00.447 [2024-07-13 11:58:35.147431] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:00.447 [2024-07-13 11:58:35.147518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:47:00.447 [2024-07-13 11:58:35.147590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:00.705 11:58:35 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:00.705 11:58:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:00.705 11:58:35 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:00.705 11:58:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:47:07.261 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:47:07.261 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:47:07.261 11:58:41 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:47:07.261 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:07.261 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:07.261 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:07.261 11:58:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.262 11:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:07.262 11:58:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:07.262 11:58:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.262 11:58:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:07.262 11:58:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:07.262 11:58:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:07.520 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:07.520 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:07.520 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:07.520 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:07.520 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:07.520 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:07.520 11:58:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:07.520 11:58:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:07.520 11:58:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:07.520 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:07.520 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:08.087 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:08.087 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:08.087 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:08.087 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:08.087 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:08.087 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:08.087 11:58:42 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:08.087 11:58:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:08.087 11:58:42 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:47:08.087 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:08.087 11:58:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:08.653 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:08.653 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:08.653 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:08.653 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:08.653 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:08.653 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:08.653 11:58:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:08.653 11:58:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:08.653 11:58:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:08.653 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:08.653 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:09.219 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:09.219 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:09.219 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:09.219 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:09.219 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:09.219 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:09.219 11:58:43 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:09.219 11:58:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:09.219 11:58:43 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:09.219 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:09.219 11:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:09.786 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:09.786 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:09.786 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:09.786 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:09.786 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:09.786 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:09.786 11:58:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:09.786 11:58:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:09.786 11:58:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:09.786 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:09.786 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:10.352 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:10.352 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:10.352 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:10.352 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:10.352 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:10.352 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:47:10.352 11:58:44 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.352 11:58:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:10.352 11:58:44 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.352 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:10.352 11:58:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:10.919 11:58:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:10.919 11:58:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:10.919 11:58:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:10.919 11:58:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:10.919 11:58:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:10.919 11:58:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:10.919 11:58:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:10.919 11:58:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:10.919 11:58:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:10.919 11:58:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:10.919 11:58:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:11.488 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:11.488 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:11.488 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:11.488 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:11.488 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:11.488 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:11.488 11:58:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:11.488 11:58:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:11.488 11:58:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:11.488 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:11.488 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:12.093 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:12.093 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:12.093 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:12.093 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:12.093 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:12.093 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:12.093 11:58:46 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.093 11:58:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:12.093 11:58:46 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.093 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:12.093 11:58:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:12.661 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:12.661 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:12.661 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:12.661 11:58:47 sw_hotplug -- 
nvme/sw_hotplug.sh@13 -- # sort -u 00:47:12.661 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:12.661 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:12.661 11:58:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:12.661 11:58:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:12.661 11:58:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:12.661 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:12.661 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:13.229 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:13.229 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:13.229 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:13.229 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:13.229 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:13.229 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:13.229 11:58:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.229 11:58:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:13.229 11:58:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.229 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:13.229 11:58:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:13.797 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:13.797 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:13.797 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:13.797 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:13.797 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:13.797 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:13.797 11:58:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:13.797 11:58:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:13.797 11:58:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:13.797 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:13.797 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:14.366 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:14.366 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:14.366 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:14.366 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:14.366 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:14.366 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:14.366 11:58:48 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.366 11:58:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:14.366 11:58:48 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.366 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:14.366 11:58:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:14.624 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 
'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:14.624 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:14.624 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:14.883 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:14.883 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:14.883 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:14.883 11:58:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:14.883 11:58:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:14.883 11:58:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:14.883 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:14.883 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:15.450 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:15.450 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:15.450 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:15.450 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:15.450 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:15.450 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:15.450 11:58:49 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:15.450 11:58:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:15.450 11:58:49 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:15.450 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:15.450 11:58:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:16.015 11:58:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:16.015 11:58:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:16.015 11:58:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:16.015 11:58:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:16.015 11:58:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:16.015 11:58:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:16.015 11:58:50 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.015 11:58:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:16.015 11:58:50 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.015 11:58:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:16.015 11:58:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:16.580 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:16.580 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:16.580 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:16.580 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:16.580 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:16.580 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:16.580 11:58:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:16.580 11:58:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:16.580 11:58:51 sw_hotplug -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:16.580 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:16.580 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:17.147 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:17.147 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:17.147 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:17.147 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:17.147 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:17.147 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:17.147 11:58:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:17.147 11:58:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:17.147 11:58:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:17.147 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:17.147 11:58:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:17.713 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:17.713 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:17.713 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:17.713 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:17.713 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:17.713 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:17.713 11:58:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:17.713 11:58:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:17.713 11:58:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:17.713 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:17.713 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:18.279 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:18.279 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:18.279 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:18.279 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:18.279 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:18.279 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:18.279 11:58:52 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:18.279 11:58:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:18.279 11:58:52 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:18.279 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:18.279 11:58:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:18.845 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:18.845 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:18.845 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:18.845 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:18.845 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:18.845 11:58:53 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:18.845 11:58:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:18.845 11:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:18.845 11:58:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:18.845 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:18.845 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:19.410 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:19.410 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:19.410 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:19.410 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:19.410 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:19.410 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:19.410 11:58:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:19.410 11:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:19.410 11:58:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:19.410 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:19.410 11:58:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:19.976 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:19.976 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:19.976 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:19.976 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:19.976 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:19.976 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:19.976 11:58:54 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:19.976 11:58:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:19.976 11:58:54 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:19.976 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:19.976 11:58:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:20.542 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:20.542 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:20.542 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:20.542 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:20.542 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:20.542 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:20.542 11:58:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:20.542 11:58:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:20.542 11:58:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:20.542 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:20.542 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:21.108 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:21.108 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:21.108 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
bdev_bdfs 00:47:21.108 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:21.108 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:21.108 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:21.108 11:58:55 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:21.108 11:58:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:21.108 11:58:55 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:21.108 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:21.108 11:58:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:21.674 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:21.674 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:21.674 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:21.674 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:21.674 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:21.674 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:21.674 11:58:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:21.674 11:58:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:21.674 11:58:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:21.674 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:21.674 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:22.242 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:22.242 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:22.242 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:22.242 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:22.242 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:22.242 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:22.242 11:58:56 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:22.242 11:58:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:22.242 11:58:56 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:22.242 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:22.242 11:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:22.809 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:22.809 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:22.809 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:22.809 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:22.809 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:22.809 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:22.809 11:58:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:22.809 11:58:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:22.809 11:58:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:22.809 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:22.809 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:23.377 11:58:57 
sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:23.377 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:23.377 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:23.377 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:23.377 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:23.377 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:23.377 11:58:57 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:23.377 11:58:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:23.377 11:58:57 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:23.377 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:23.377 11:58:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:23.944 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:23.944 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:23.944 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:23.944 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:23.944 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:23.944 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:23.944 11:58:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:23.944 11:58:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:23.944 11:58:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:23.944 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:23.944 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:24.511 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:24.511 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:24.511 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:24.511 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:24.511 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:24.511 11:58:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:24.511 11:58:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:24.511 11:58:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:24.511 11:58:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:24.511 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:24.511 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:24.770 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:24.770 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:24.770 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:25.029 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:25.029 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:25.029 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:25.029 11:58:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:25.029 11:58:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:25.029 
11:58:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:25.029 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:25.029 11:58:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:25.596 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:25.596 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:25.596 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:25.596 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:25.596 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:25.596 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:25.596 11:59:00 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:25.596 11:59:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:25.596 11:59:00 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:25.596 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:25.596 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:26.163 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:26.163 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:26.163 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:26.163 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:26.163 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:26.163 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:26.163 11:59:00 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:26.163 11:59:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:26.163 11:59:00 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:26.163 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:26.163 11:59:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:26.730 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:26.730 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:26.730 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:26.730 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:26.730 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:26.730 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:26.730 11:59:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:26.730 11:59:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:26.730 11:59:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:26.730 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:26.730 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:27.302 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:27.302 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:27.302 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:27.302 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:27.302 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:27.302 11:59:01 
sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:27.302 11:59:01 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:27.302 11:59:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:27.302 11:59:01 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:27.302 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:27.302 11:59:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:27.867 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:27.867 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:27.867 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:27.867 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:27.867 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:27.867 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:27.867 11:59:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:27.867 11:59:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:27.867 11:59:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:27.867 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:27.867 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:28.432 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:28.432 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:28.432 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:28.432 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:28.432 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:28.432 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:28.432 11:59:02 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:28.432 11:59:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:28.432 11:59:02 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:28.432 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:28.432 11:59:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:28.997 11:59:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:28.997 11:59:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:28.997 11:59:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:28.997 11:59:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:28.997 11:59:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:28.997 11:59:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:28.997 11:59:03 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:28.997 11:59:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:28.997 11:59:03 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:28.997 11:59:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:28.997 11:59:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:29.562 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:29.562 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:29.562 11:59:04 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:29.562 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:29.562 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:29.562 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:29.562 11:59:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:29.562 11:59:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:29.562 11:59:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:29.562 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:29.562 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:30.128 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:30.128 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:30.128 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:30.128 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:30.128 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:30.128 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:30.128 11:59:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:30.128 11:59:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:30.128 11:59:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:30.128 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:30.128 11:59:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:30.694 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:30.694 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:30.694 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:30.694 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:30.694 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:30.694 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:30.694 11:59:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:30.694 11:59:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:30.694 11:59:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:30.694 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:30.694 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:31.260 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:31.260 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:31.260 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:31.260 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:31.260 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:31.260 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:31.260 11:59:05 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:31.260 11:59:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:31.260 11:59:05 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:31.260 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:31.260 11:59:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 
00:47:31.826 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:31.826 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:31.826 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:31.826 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:31.826 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:31.826 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:31.826 11:59:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:31.826 11:59:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:31.826 11:59:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:31.826 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:31.826 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:32.084 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:32.084 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:32.084 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:32.084 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:32.084 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:32.084 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:32.085 11:59:06 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.085 11:59:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:32.343 11:59:06 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.343 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:32.343 11:59:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:32.910 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:32.910 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:32.910 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:32.910 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:32.910 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:32.910 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:32.910 11:59:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:32.910 11:59:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:32.910 11:59:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:32.910 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:32.910 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:33.501 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:33.501 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:33.501 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:33.501 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:33.501 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:33.501 11:59:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:33.501 11:59:07 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:33.501 11:59:07 sw_hotplug -- common/autotest_common.sh@10 -- # 
set +x 00:47:33.501 11:59:07 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:33.501 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:33.501 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:34.068 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:34.068 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:34.069 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:34.069 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:34.069 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:34.069 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:34.069 11:59:08 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.069 11:59:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:34.069 11:59:08 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.069 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:34.069 11:59:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:34.327 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:34.327 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:34.327 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:34.327 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:34.327 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:34.587 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:34.587 11:59:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:34.587 11:59:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:34.587 11:59:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:34.587 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:34.587 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:35.157 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:35.157 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:35.157 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:35.157 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:35.157 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:35.157 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:35.157 11:59:09 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.157 11:59:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:35.157 11:59:09 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.157 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:35.157 11:59:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:35.724 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:35.724 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:35.724 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:35.724 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:35.724 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 
00:47:35.724 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:35.724 11:59:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:35.724 11:59:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:35.724 11:59:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:35.724 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:35.724 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:36.292 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:36.293 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:36.293 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:36.293 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:36.293 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:36.293 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:36.293 11:59:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.293 11:59:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:36.293 11:59:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.293 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:36.293 11:59:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:36.865 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:36.865 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:36.865 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:36.865 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:36.865 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:36.865 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:36.865 11:59:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:36.865 11:59:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:36.865 11:59:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:36.865 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:47:36.865 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:47:36.865 [2024-07-13 11:59:11.542519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:47:36.865 [2024-07-13 11:59:11.543900] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:36.865 [2024-07-13 11:59:11.544056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:47:36.865 [2024-07-13 11:59:11.544315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:36.865 [2024-07-13 11:59:11.544378] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:36.865 [2024-07-13 11:59:11.544606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:47:36.865 [2024-07-13 11:59:11.544649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:36.865 [2024-07-13 11:59:11.544685] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:36.865 [2024-07-13 11:59:11.544807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:47:36.865 [2024-07-13 11:59:11.544850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:36.865 [2024-07-13 11:59:11.544886] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:47:36.865 [2024-07-13 11:59:11.545060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:47:36.865 [2024-07-13 11:59:11.545189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:47:37.144 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 00:47:37.144 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:47:37.144 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:47:37.144 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:37.144 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:37.144 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:37.144 11:59:11 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:37.144 11:59:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:37.416 11:59:11 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:37.416 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:47:37.416 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:47:37.416 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:47:37.416 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:47:37.416 11:59:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:47:37.416 11:59:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:47:37.416 11:59:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:47:37.416 11:59:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:47:43.976 11:59:18 sw_hotplug -- 
nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@715 -- # time=108.67 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@716 -- # echo 108.67 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@718 -- # return 0 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=108.67 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 108.67 1 00:47:43.976 remove_attach_helper took 108.67s to complete (handling 1 nvme drive(s)) 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:47:43.976 11:59:18 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 175494 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@948 -- # '[' -z 175494 ']' 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@952 -- # kill -0 175494 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@953 -- # uname 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 175494 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:43.976 killing process with pid 175494 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@966 -- # echo 'killing process with pid 175494' 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@967 -- # kill 175494 00:47:43.976 11:59:18 sw_hotplug -- common/autotest_common.sh@972 -- # wait 175494 00:47:45.352 11:59:19 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:47:45.612 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:47:45.612 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:47:46.548 00:47:46.548 real 3m24.335s 00:47:46.548 user 3m11.328s 00:47:46.548 sys 0m16.273s 00:47:46.548 11:59:21 sw_hotplug -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:46.548 11:59:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:47:46.548 ************************************ 00:47:46.548 END TEST sw_hotplug 00:47:46.548 ************************************ 00:47:46.808 11:59:21 -- common/autotest_common.sh@1142 -- # return 0 00:47:46.808 11:59:21 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:47:46.808 11:59:21 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@260 -- # timing_exit lib 00:47:46.808 11:59:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:47:46.808 11:59:21 -- common/autotest_common.sh@10 -- # set +x 
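The dozens of "Still waiting for 0000:00:10.0 to be gone" blocks above all come from one short polling helper in nvme/sw_hotplug.sh. The sketch below is reconstructed from the xtrace output, not the verbatim script; the function name wait_for_removal is illustrative, and rpc_cmd is assumed to be the SPDK test-harness RPC wrapper seen throughout this log.

# Sketch of the polling pattern traced above (assumes the SPDK autotest environment provides rpc_cmd).
bdev_bdfs() {
    # List the PCI addresses (BDFs) of every NVMe bdev the target still reports.
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

wait_for_removal() {
    local bdfs
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        # Source of the repeated "Still waiting for ... to be gone" entries in the log.
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
}

Once the BDF drops out of bdev_get_bdevs, the test rebinds the device to uio_pci_generic (the echo 1 / echo uio_pci_generic / echo 0000:00:10.0 lines above, followed by sleep 6) and remove_attach_helper reports the elapsed time, 108.67 s in this run.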
00:47:46.808 11:59:21 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:47:46.808 11:59:21 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:47:46.808 11:59:21 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:47:46.808 11:59:21 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:47:46.808 11:59:21 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:47:46.808 11:59:21 -- spdk/autotest.sh@376 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:47:46.808 11:59:21 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:47:46.808 11:59:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:46.808 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:47:46.808 ************************************ 00:47:46.808 START TEST blockdev_raid5f 00:47:46.808 ************************************ 00:47:46.808 11:59:21 blockdev_raid5f -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:47:46.808 * Looking for test storage... 
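Every START TEST / END TEST banner in this log comes from a timing wrapper around the sub-test command. The sketch below only mirrors the observable pattern (banner, timed run, elapsed-time printf, banner); it is not the real run_test from test/common/autotest_common.sh, and the banner width and message wording are approximations.

run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    local start=$SECONDS
    "$@"                 # run the test command with its arguments
    local rc=$?
    printf '%s took %ss to complete\n' "$name" "$((SECONDS - start))"
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

# e.g. run_test_sketch blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f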
00:47:46.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@674 -- # uname -s 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@683 -- # crypto_device= 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@684 -- # dek= 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@685 -- # env_ctx= 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=178817 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 178817 00:47:46.808 11:59:21 blockdev_raid5f -- common/autotest_common.sh@829 -- # '[' -z 178817 ']' 00:47:46.808 11:59:21 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:47:46.808 11:59:21 blockdev_raid5f -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:46.808 11:59:21 blockdev_raid5f -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:46.808 11:59:21 blockdev_raid5f -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:46.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:46.808 11:59:21 blockdev_raid5f -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:46.808 11:59:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:46.808 [2024-07-13 11:59:21.551049] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
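The blockdev suite begins by launching spdk_tgt and waiting for its RPC socket to come up, which is what the start_spdk_tgt/waitforlisten trace above records. A condensed sketch of that start-and-wait step is shown below, using the binary and rpc.py paths printed in the log; rpc_get_methods is a standard SPDK RPC used here purely as a liveness probe, and the retry count and sleep interval are assumptions.

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock

"$SPDK_TGT" &
spdk_tgt_pid=$!
# The suite installs a kill-on-exit trap like this and clears it once the tests finish.
trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

# Poll the RPC socket until the target answers (up to ~50 s with these numbers).
for _ in $(seq 1 100); do
    if "$RPC_PY" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.5
done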
00:47:46.808 [2024-07-13 11:59:21.551290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178817 ] 00:47:47.067 [2024-07-13 11:59:21.720552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:47.325 [2024-07-13 11:59:21.911003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:47.892 11:59:22 blockdev_raid5f -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:47.892 11:59:22 blockdev_raid5f -- common/autotest_common.sh@862 -- # return 0 00:47:47.892 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:47:47.892 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:47:47.892 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@280 -- # rpc_cmd 00:47:47.892 11:59:22 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:47.892 11:59:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:47.892 Malloc0 00:47:48.151 Malloc1 00:47:48.151 Malloc2 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@740 -- # cat 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@749 -- # jq -r .name 00:47:48.151 11:59:22 blockdev_raid5f -- 
bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7469823f-843e-48c8-be59-846fbdfa2066"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7469823f-843e-48c8-be59-846fbdfa2066",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7469823f-843e-48c8-be59-846fbdfa2066",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1f99d468-f23f-41b7-a0be-304259df5508",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3d4333f2-4542-4e09-800c-edc6f950c5ff",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "18a211f4-1a90-4e58-a54c-120ed131e522",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:47:48.151 11:59:22 blockdev_raid5f -- bdev/blockdev.sh@754 -- # killprocess 178817 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@948 -- # '[' -z 178817 ']' 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@952 -- # kill -0 178817 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@953 -- # uname 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178817 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178817' 00:47:48.151 killing process with pid 178817 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@967 -- # kill 178817 00:47:48.151 11:59:22 blockdev_raid5f -- common/autotest_common.sh@972 -- # wait 178817 00:47:50.682 11:59:24 blockdev_raid5f -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:47:50.682 11:59:24 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:47:50.682 11:59:24 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:47:50.682 11:59:24 blockdev_raid5f -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:47:50.682 11:59:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:50.682 ************************************ 00:47:50.682 START TEST bdev_hello_world 00:47:50.682 ************************************ 00:47:50.682 11:59:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:47:50.682 [2024-07-13 11:59:24.911530] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:47:50.682 [2024-07-13 11:59:24.911781] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178875 ] 00:47:50.682 [2024-07-13 11:59:25.081370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:50.682 [2024-07-13 11:59:25.260556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:51.248 [2024-07-13 11:59:25.703472] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:47:51.248 [2024-07-13 11:59:25.703562] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:47:51.248 [2024-07-13 11:59:25.703619] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:47:51.248 [2024-07-13 11:59:25.704126] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:47:51.248 [2024-07-13 11:59:25.704286] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:47:51.248 [2024-07-13 11:59:25.704316] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:47:51.248 [2024-07-13 11:59:25.704409] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:47:51.248 00:47:51.248 [2024-07-13 11:59:25.704458] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:47:52.182 ************************************ 00:47:52.182 END TEST bdev_hello_world 00:47:52.182 ************************************ 00:47:52.182 00:47:52.182 real 0m1.967s 00:47:52.182 user 0m1.556s 00:47:52.182 sys 0m0.296s 00:47:52.182 11:59:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:52.182 11:59:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:47:52.182 11:59:26 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:47:52.182 11:59:26 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:47:52.182 11:59:26 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:47:52.182 11:59:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:52.182 11:59:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:52.182 ************************************ 00:47:52.182 START TEST bdev_bounds 00:47:52.182 ************************************ 00:47:52.182 Process bdevio pid: 178938 00:47:52.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
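The raid5f volume all of these sub-tests exercise was dumped as JSON further up: three malloc base bdevs of 65536 512-byte data blocks each, strip_size_kb 2, 131072 blocks total. The RPC sequence below would build an equivalent volume; it is reconstructed from that dump rather than copied from setup_raid5f_conf, and the 32 MiB malloc size is inferred (65536 blocks of 512 bytes).

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Three 32 MiB malloc bdevs with 512-byte blocks ...
"$RPC_PY" bdev_malloc_create -b Malloc0 32 512
"$RPC_PY" bdev_malloc_create -b Malloc1 32 512
"$RPC_PY" bdev_malloc_create -b Malloc2 32 512

# ... striped into a raid5f volume with a 2 KiB strip size.
"$RPC_PY" bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"

# Confirm the resulting geometry matches the JSON printed earlier.
"$RPC_PY" bdev_get_bdevs -b raid5f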
00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=178938 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 178938' 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 178938 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 178938 ']' 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:52.182 11:59:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:47:52.182 [2024-07-13 11:59:26.932898] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:47:52.182 [2024-07-13 11:59:26.933667] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178938 ] 00:47:52.440 [2024-07-13 11:59:27.111047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:47:52.698 [2024-07-13 11:59:27.291631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:47:52.698 [2024-07-13 11:59:27.291762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:52.698 [2024-07-13 11:59:27.291764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:47:53.264 11:59:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:53.264 11:59:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:47:53.264 11:59:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:47:53.264 I/O targets: 00:47:53.264 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:47:53.264 00:47:53.264 00:47:53.264 CUnit - A unit testing framework for C - Version 2.1-3 00:47:53.264 http://cunit.sourceforge.net/ 00:47:53.264 00:47:53.264 00:47:53.264 Suite: bdevio tests on: raid5f 00:47:53.264 Test: blockdev write read block ...passed 00:47:53.264 Test: blockdev write zeroes read block ...passed 00:47:53.264 Test: blockdev write zeroes read no split ...passed 00:47:53.264 Test: blockdev write zeroes read split ...passed 00:47:53.523 Test: blockdev write zeroes read split partial ...passed 00:47:53.523 Test: blockdev reset ...passed 00:47:53.523 Test: blockdev write read 8 blocks ...passed 00:47:53.523 Test: blockdev write read size > 128k ...passed 00:47:53.523 Test: blockdev write read invalid size ...passed 00:47:53.523 Test: blockdev write read offset 
+ nbytes == size of blockdev ...passed 00:47:53.523 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:47:53.523 Test: blockdev write read max offset ...passed 00:47:53.523 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:47:53.523 Test: blockdev writev readv 8 blocks ...passed 00:47:53.523 Test: blockdev writev readv 30 x 1block ...passed 00:47:53.523 Test: blockdev writev readv block ...passed 00:47:53.523 Test: blockdev writev readv size > 128k ...passed 00:47:53.523 Test: blockdev writev readv size > 128k in two iovs ...passed 00:47:53.523 Test: blockdev comparev and writev ...passed 00:47:53.523 Test: blockdev nvme passthru rw ...passed 00:47:53.523 Test: blockdev nvme passthru vendor specific ...passed 00:47:53.523 Test: blockdev nvme admin passthru ...passed 00:47:53.523 Test: blockdev copy ...passed 00:47:53.523 00:47:53.523 Run Summary: Type Total Ran Passed Failed Inactive 00:47:53.523 suites 1 1 n/a 0 0 00:47:53.523 tests 23 23 23 0 0 00:47:53.523 asserts 130 130 130 0 n/a 00:47:53.523 00:47:53.523 Elapsed time = 0.438 seconds 00:47:53.523 0 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 178938 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 178938 ']' 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 178938 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178938 00:47:53.523 killing process with pid 178938 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178938' 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@967 -- # kill 178938 00:47:53.523 11:59:28 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # wait 178938 00:47:54.899 11:59:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:47:54.899 00:47:54.899 real 0m2.382s 00:47:54.899 user 0m5.500s 00:47:54.899 sys 0m0.400s 00:47:54.899 11:59:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:47:54.899 ************************************ 00:47:54.899 END TEST bdev_bounds 00:47:54.899 ************************************ 00:47:54.899 11:59:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:47:54.899 11:59:29 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:47:54.899 11:59:29 blockdev_raid5f -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:47:54.899 11:59:29 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:47:54.899 11:59:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:47:54.899 11:59:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:47:54.899 ************************************ 00:47:54.899 START TEST bdev_nbd 00:47:54.899 ************************************ 00:47:54.899 11:59:29 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=($2) 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=(/dev/nbd+([0-9])) 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=179008 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 179008 /var/tmp/spdk-nbd.sock 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 179008 ']' 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:47:54.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:47:54.899 11:59:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:47:54.899 [2024-07-13 11:59:29.355532] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
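The bdev_nbd test traced below exports the raid5f bdev as a kernel block device over the NBD socket and pushes data through it with dd. A condensed, illustrative version of that round trip follows; the temp-file path and transfer sizes are placeholders, while the RPC names (nbd_start_disk, nbd_stop_disk) and the cmp-based verification mirror the trace.

RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

# Export the raid5f bdev as /dev/nbd0 through the NBD-capable app listening on $SOCK.
"$RPC_PY" -s "$SOCK" nbd_start_disk raid5f /dev/nbd0

# Write a 1 MiB random pattern through the kernel block device, read it back, compare.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

# Tear the export down again.
"$RPC_PY" -s "$SOCK" nbd_stop_disk /dev/nbd0
rm -f /tmp/nbdrandtest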
00:47:54.899 [2024-07-13 11:59:29.355883] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:47:54.899 [2024-07-13 11:59:29.500733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:55.157 [2024-07-13 11:59:29.668381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:47:55.723 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:47:55.723 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:47:55.724 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:55.982 1+0 records in 00:47:55.982 1+0 records out 00:47:55.982 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299403 s, 13.7 MB/s 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:47:55.982 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:47:56.241 { 00:47:56.241 "nbd_device": "/dev/nbd0", 00:47:56.241 "bdev_name": "raid5f" 00:47:56.241 } 00:47:56.241 ]' 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:47:56.241 { 00:47:56.241 "nbd_device": "/dev/nbd0", 00:47:56.241 "bdev_name": "raid5f" 00:47:56.241 } 00:47:56.241 ]' 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:56.241 11:59:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:56.499 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:47:56.758 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:47:57.016 /dev/nbd0 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # 
(( i = 1 )) 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:47:57.016 1+0 records in 00:47:57.016 1+0 records out 00:47:57.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178754 s, 22.9 MB/s 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:57.016 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:47:57.275 { 00:47:57.275 "nbd_device": "/dev/nbd0", 00:47:57.275 "bdev_name": "raid5f" 00:47:57.275 } 00:47:57.275 ]' 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:47:57.275 { 00:47:57.275 "nbd_device": "/dev/nbd0", 00:47:57.275 "bdev_name": "raid5f" 00:47:57.275 } 00:47:57.275 ]' 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:47:57.275 11:59:31 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:47:57.275 256+0 records in 00:47:57.275 256+0 records out 00:47:57.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00739993 s, 142 MB/s 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:47:57.275 11:59:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:47:57.275 256+0 records in 00:47:57.275 256+0 records out 00:47:57.275 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274292 s, 38.2 MB/s 00:47:57.275 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:47:57.275 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:47:57.275 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:47:57.275 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:47:57.275 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:57.275 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:47:57.275 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:47:57.275 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:47:57.275 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:57.534 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:47:57.792 11:59:32 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:47:57.792 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:57.792 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:57.792 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:57.792 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:57.792 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:47:57.792 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:57.792 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:47:57.792 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:47:58.049 malloc_lvol_verify 00:47:58.049 11:59:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:47:58.307 195a998c-445a-4281-bcb2-6f6bd56bd5c7 00:47:58.307 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:47:58.564 389bffe7-f0ac-4349-aaa9-3adb8841c345 00:47:58.564 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:47:58.822 /dev/nbd0 00:47:58.822 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:47:58.822 Creating filesystem with 1024 4k blocks and 1024 inodes 00:47:58.822 00:47:58.822 Allocating group 
tables: 0/1 done 00:47:58.822 mke2fs 1.45.5 (07-Jan-2020) 00:47:58.822 00:47:58.822 Filesystem too small for a journal 00:47:58.822 Writing inode tables: 0/1 done 00:47:58.822 Writing superblocks and filesystem accounting information: 0/1 done 00:47:58.822 00:47:58.822 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:47:58.822 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:47:58.822 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:47:58.822 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:47:58.823 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:47:58.823 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:47:58.823 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:47:58.823 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 179008 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 179008 ']' 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 179008 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:47:59.080 11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 179008 00:47:59.338 11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:47:59.338 killing process with pid 179008 00:47:59.338 11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:47:59.338 11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 179008' 00:47:59.338 11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@967 -- # kill 179008 00:47:59.338 
11:59:33 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # wait 179008 00:48:00.274 11:59:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:48:00.274 00:48:00.274 real 0m5.668s 00:48:00.274 user 0m8.038s 00:48:00.274 sys 0m1.025s 00:48:00.274 11:59:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:00.274 11:59:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:48:00.274 ************************************ 00:48:00.274 END TEST bdev_nbd 00:48:00.274 ************************************ 00:48:00.274 11:59:35 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:00.274 11:59:35 blockdev_raid5f -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:48:00.274 11:59:35 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:48:00.274 11:59:35 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:48:00.274 11:59:35 blockdev_raid5f -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:48:00.274 11:59:35 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:48:00.274 11:59:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:00.274 11:59:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:00.274 ************************************ 00:48:00.274 START TEST bdev_fio 00:48:00.274 ************************************ 00:48:00.274 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:48:00.274 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:48:00.274 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:48:00.274 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:48:00.274 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1301 -- # cat 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:48:00.533 ************************************ 00:48:00.533 START TEST bdev_fio_rw_verify 00:48:00.533 ************************************ 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=(libasan libclang_rt.asan) 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:48:00.533 11:59:35 
blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:48:00.533 11:59:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:48:00.533 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:48:00.533 fio-3.35 00:48:00.533 Starting 1 thread 00:48:12.733 00:48:12.733 job_raid5f: (groupid=0, jobs=1): err= 0: pid=179246: Sat Jul 13 11:59:46 2024 00:48:12.733 read: IOPS=11.5k, BW=44.9MiB/s (47.0MB/s)(449MiB/10001msec) 00:48:12.733 slat (usec): min=19, max=1806, avg=20.96, stdev= 6.46 00:48:12.733 clat (usec): min=12, max=902, avg=140.83, stdev=52.57 00:48:12.733 lat (usec): min=34, max=2035, avg=161.79, stdev=53.94 00:48:12.733 clat percentiles (usec): 00:48:12.733 | 50.000th=[ 147], 99.000th=[ 245], 99.900th=[ 437], 99.990th=[ 701], 00:48:12.733 | 99.999th=[ 840] 00:48:12.733 write: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(463MiB/9865msec); 0 zone resets 00:48:12.733 slat (usec): min=9, max=216, avg=17.68, stdev= 3.83 00:48:12.733 clat (usec): min=62, max=1366, avg=318.60, stdev=47.95 00:48:12.733 lat (usec): min=79, max=1471, avg=336.28, stdev=49.61 00:48:12.733 clat percentiles (usec): 00:48:12.733 | 50.000th=[ 322], 99.000th=[ 445], 99.900th=[ 799], 99.990th=[ 1237], 00:48:12.733 | 99.999th=[ 1369] 00:48:12.733 bw ( KiB/s): min=42328, max=50848, per=98.91%, avg=47493.05, stdev=2049.95, samples=19 00:48:12.733 iops : min=10582, max=12712, avg=11873.26, stdev=512.49, samples=19 00:48:12.733 lat (usec) : 20=0.01%, 50=0.01%, 100=12.02%, 250=39.88%, 500=47.80% 00:48:12.733 lat (usec) : 750=0.23%, 1000=0.04% 00:48:12.733 lat (msec) : 2=0.02% 00:48:12.733 cpu : usr=99.67%, sys=0.13%, ctx=599, majf=0, minf=8130 00:48:12.733 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:48:12.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:12.733 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:48:12.733 issued rwts: total=114848,118423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:48:12.733 latency : 
target=0, window=0, percentile=100.00%, depth=8 00:48:12.733 00:48:12.733 Run status group 0 (all jobs): 00:48:12.733 READ: bw=44.9MiB/s (47.0MB/s), 44.9MiB/s-44.9MiB/s (47.0MB/s-47.0MB/s), io=449MiB (470MB), run=10001-10001msec 00:48:12.733 WRITE: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=463MiB (485MB), run=9865-9865msec 00:48:12.992 ----------------------------------------------------- 00:48:12.992 Suppressions used: 00:48:12.992 count bytes template 00:48:12.992 1 7 /usr/src/fio/parse.c 00:48:12.992 21 2016 /usr/src/fio/iolog.c 00:48:12.992 2 596 libcrypto.so 00:48:12.992 ----------------------------------------------------- 00:48:12.992 00:48:12.992 00:48:12.992 real 0m12.476s 00:48:12.992 user 0m13.201s 00:48:12.992 sys 0m0.611s 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:48:12.992 ************************************ 00:48:12.992 END TEST bdev_fio_rw_verify 00:48:12.992 ************************************ 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "7469823f-843e-48c8-be59-846fbdfa2066"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "7469823f-843e-48c8-be59-846fbdfa2066",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "7469823f-843e-48c8-be59-846fbdfa2066",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "1f99d468-f23f-41b7-a0be-304259df5508",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "3d4333f2-4542-4e09-800c-edc6f950c5ff",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "18a211f4-1a90-4e58-a54c-120ed131e522",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:48:12.992 /home/vagrant/spdk_repo/spdk 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:48:12.992 00:48:12.992 real 0m12.658s 00:48:12.992 user 0m13.323s 00:48:12.992 sys 0m0.667s 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:12.992 11:59:47 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:48:12.992 ************************************ 00:48:12.992 END TEST bdev_fio 00:48:12.992 ************************************ 00:48:12.992 11:59:47 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:12.992 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:48:12.992 11:59:47 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:48:12.992 11:59:47 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:48:12.992 11:59:47 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:12.993 11:59:47 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:12.993 ************************************ 00:48:12.993 START TEST bdev_verify 00:48:12.993 ************************************ 00:48:12.993 11:59:47 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:48:13.251 [2024-07-13 11:59:47.810693] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:48:13.251 [2024-07-13 11:59:47.810926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179452 ] 00:48:13.251 [2024-07-13 11:59:48.001488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:13.818 [2024-07-13 11:59:48.264729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:13.819 [2024-07-13 11:59:48.264737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:14.076 Running I/O for 5 seconds... 00:48:19.415 00:48:19.415 Latency(us) 00:48:19.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:19.415 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:48:19.415 Verification LBA range: start 0x0 length 0x2000 00:48:19.415 raid5f : 5.01 5707.40 22.29 0.00 0.00 34208.90 222.49 28597.53 00:48:19.415 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:48:19.415 Verification LBA range: start 0x2000 length 0x2000 00:48:19.415 raid5f : 5.01 5748.77 22.46 0.00 0.00 33710.84 210.39 28597.53 00:48:19.415 =================================================================================================================== 00:48:19.415 Total : 11456.17 44.75 0.00 0.00 33958.89 210.39 28597.53 00:48:20.351 00:48:20.351 real 0m7.275s 00:48:20.351 user 0m13.188s 00:48:20.351 sys 0m0.315s 00:48:20.351 11:59:55 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:20.351 ************************************ 00:48:20.351 11:59:55 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:48:20.351 END TEST bdev_verify 00:48:20.351 ************************************ 00:48:20.351 11:59:55 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:20.351 11:59:55 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:48:20.351 11:59:55 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:48:20.351 11:59:55 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:20.351 11:59:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:20.351 ************************************ 00:48:20.351 START TEST bdev_verify_big_io 00:48:20.351 ************************************ 00:48:20.351 11:59:55 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:48:20.610 [2024-07-13 11:59:55.141081] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:48:20.610 [2024-07-13 11:59:55.141319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179549 ] 00:48:20.610 [2024-07-13 11:59:55.313469] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:48:20.869 [2024-07-13 11:59:55.508468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:48:20.869 [2024-07-13 11:59:55.508483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:21.437 Running I/O for 5 seconds... 00:48:26.711 00:48:26.711 Latency(us) 00:48:26.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:26.711 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:48:26.711 Verification LBA range: start 0x0 length 0x200 00:48:26.711 raid5f : 5.22 424.95 26.56 0.00 0.00 7556621.95 353.75 316479.30 00:48:26.711 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:48:26.711 Verification LBA range: start 0x200 length 0x200 00:48:26.711 raid5f : 5.15 418.78 26.17 0.00 0.00 7544361.89 178.73 314572.80 00:48:26.711 =================================================================================================================== 00:48:26.711 Total : 843.73 52.73 0.00 0.00 7550578.77 178.73 316479.30 00:48:28.084 00:48:28.084 real 0m7.407s 00:48:28.084 user 0m13.486s 00:48:28.084 sys 0m0.381s 00:48:28.084 12:00:02 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:28.084 12:00:02 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:48:28.084 ************************************ 00:48:28.084 END TEST bdev_verify_big_io 00:48:28.084 ************************************ 00:48:28.084 12:00:02 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:28.084 12:00:02 blockdev_raid5f -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:28.084 12:00:02 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:48:28.084 12:00:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:28.084 12:00:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:28.084 ************************************ 00:48:28.084 START TEST bdev_write_zeroes 00:48:28.084 ************************************ 00:48:28.084 12:00:02 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:28.084 [2024-07-13 12:00:02.593943] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:48:28.084 [2024-07-13 12:00:02.594392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179670 ] 00:48:28.084 [2024-07-13 12:00:02.762008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:28.341 [2024-07-13 12:00:02.965815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:28.997 Running I/O for 1 seconds... 00:48:29.937 00:48:29.937 Latency(us) 00:48:29.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:48:29.937 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:48:29.937 raid5f : 1.01 26382.86 103.06 0.00 0.00 4835.86 1429.88 6255.71 00:48:29.937 =================================================================================================================== 00:48:29.937 Total : 26382.86 103.06 0.00 0.00 4835.86 1429.88 6255.71 00:48:31.317 00:48:31.317 real 0m3.180s 00:48:31.317 user 0m2.752s 00:48:31.317 sys 0m0.309s 00:48:31.317 12:00:05 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:31.317 12:00:05 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:48:31.317 ************************************ 00:48:31.317 END TEST bdev_write_zeroes 00:48:31.317 ************************************ 00:48:31.317 12:00:05 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 0 00:48:31.318 12:00:05 blockdev_raid5f -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:31.318 12:00:05 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:48:31.318 12:00:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:31.318 12:00:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:31.318 ************************************ 00:48:31.318 START TEST bdev_json_nonenclosed 00:48:31.318 ************************************ 00:48:31.318 12:00:05 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:31.318 [2024-07-13 12:00:05.838100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:48:31.318 [2024-07-13 12:00:05.838466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179727 ] 00:48:31.318 [2024-07-13 12:00:06.006069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:31.575 [2024-07-13 12:00:06.233251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:31.575 [2024-07-13 12:00:06.233376] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:48:31.575 [2024-07-13 12:00:06.233434] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:31.575 [2024-07-13 12:00:06.233468] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:31.833 00:48:31.833 real 0m0.807s 00:48:31.833 user 0m0.563s 00:48:31.833 sys 0m0.144s 00:48:31.833 12:00:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:48:31.833 12:00:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:31.833 12:00:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:48:31.833 ************************************ 00:48:31.833 END TEST bdev_json_nonenclosed 00:48:31.833 ************************************ 00:48:32.091 12:00:06 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:48:32.091 12:00:06 blockdev_raid5f -- bdev/blockdev.sh@782 -- # true 00:48:32.091 12:00:06 blockdev_raid5f -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:32.091 12:00:06 blockdev_raid5f -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:48:32.091 12:00:06 blockdev_raid5f -- common/autotest_common.sh@1105 -- # xtrace_disable 00:48:32.091 12:00:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:32.091 ************************************ 00:48:32.091 START TEST bdev_json_nonarray 00:48:32.091 ************************************ 00:48:32.091 12:00:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:48:32.091 [2024-07-13 12:00:06.706582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:48:32.091 [2024-07-13 12:00:06.707017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179759 ] 00:48:32.349 [2024-07-13 12:00:06.877309] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:48:32.349 [2024-07-13 12:00:07.075026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:48:32.349 [2024-07-13 12:00:07.075173] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:48:32.349 [2024-07-13 12:00:07.075233] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:48:32.349 [2024-07-13 12:00:07.075260] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:48:32.915 00:48:32.915 real 0m0.786s 00:48:32.915 user 0m0.546s 00:48:32.915 sys 0m0.140s 00:48:32.915 12:00:07 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:48:32.915 ************************************ 00:48:32.915 END TEST bdev_json_nonarray 00:48:32.915 ************************************ 00:48:32.915 12:00:07 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:32.915 12:00:07 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:48:32.915 12:00:07 blockdev_raid5f -- common/autotest_common.sh@1142 -- # return 234 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@785 -- # true 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@811 -- # cleanup 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:48:32.915 12:00:07 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:48:32.915 00:48:32.915 real 0m46.089s 00:48:32.915 user 1m2.726s 00:48:32.915 sys 0m4.418s 00:48:32.915 12:00:07 blockdev_raid5f -- common/autotest_common.sh@1124 -- # xtrace_disable 00:48:32.915 12:00:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:48:32.915 ************************************ 00:48:32.915 END TEST blockdev_raid5f 00:48:32.915 ************************************ 00:48:32.915 12:00:07 -- common/autotest_common.sh@1142 -- # return 0 00:48:32.915 12:00:07 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:48:32.915 12:00:07 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:48:32.915 12:00:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:48:32.915 12:00:07 -- common/autotest_common.sh@10 -- # set +x 00:48:32.915 12:00:07 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:48:32.915 12:00:07 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:48:32.915 12:00:07 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:48:32.915 12:00:07 -- common/autotest_common.sh@10 -- # set +x 00:48:34.815 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:34.815 Waiting for block devices as requested 00:48:34.815 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:48:35.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:48:35.073 Cleaning 00:48:35.073 Removing: /var/run/dpdk/spdk0/config 00:48:35.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:48:35.073 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:48:35.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:48:35.073 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:48:35.073 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:48:35.073 Removing: /var/run/dpdk/spdk0/hugepage_info 00:48:35.073 Removing: /dev/shm/spdk_tgt_trace.pid110879 00:48:35.073 Removing: /var/run/dpdk/spdk0 00:48:35.073 Removing: /var/run/dpdk/spdk_pid110623 00:48:35.073 Removing: /var/run/dpdk/spdk_pid110879 00:48:35.073 Removing: /var/run/dpdk/spdk_pid111135 00:48:35.073 Removing: /var/run/dpdk/spdk_pid111260 00:48:35.073 Removing: /var/run/dpdk/spdk_pid111312 00:48:35.073 Removing: /var/run/dpdk/spdk_pid111466 00:48:35.073 Removing: /var/run/dpdk/spdk_pid111489 00:48:35.073 Removing: /var/run/dpdk/spdk_pid111652 00:48:35.073 Removing: /var/run/dpdk/spdk_pid111923 00:48:35.073 Removing: /var/run/dpdk/spdk_pid112117 00:48:35.073 Removing: /var/run/dpdk/spdk_pid112232 00:48:35.073 Removing: /var/run/dpdk/spdk_pid112355 00:48:35.073 Removing: /var/run/dpdk/spdk_pid112483 00:48:35.073 Removing: /var/run/dpdk/spdk_pid112590 00:48:35.073 Removing: /var/run/dpdk/spdk_pid112661 00:48:35.073 Removing: /var/run/dpdk/spdk_pid112699 00:48:35.332 Removing: /var/run/dpdk/spdk_pid112777 00:48:35.332 Removing: /var/run/dpdk/spdk_pid112900 00:48:35.332 Removing: /var/run/dpdk/spdk_pid113461 00:48:35.332 Removing: /var/run/dpdk/spdk_pid113565 00:48:35.332 Removing: /var/run/dpdk/spdk_pid113636 00:48:35.332 Removing: /var/run/dpdk/spdk_pid113657 00:48:35.332 Removing: /var/run/dpdk/spdk_pid113824 00:48:35.332 Removing: /var/run/dpdk/spdk_pid113845 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114014 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114035 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114113 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114136 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114205 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114229 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114439 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114484 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114532 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114612 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114722 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114761 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114858 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114909 00:48:35.332 Removing: /var/run/dpdk/spdk_pid114967 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115038 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115094 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115152 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115203 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115281 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115332 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115388 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115444 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115513 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115571 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115622 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115686 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115757 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115808 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115869 00:48:35.332 Removing: /var/run/dpdk/spdk_pid115936 00:48:35.332 Removing: /var/run/dpdk/spdk_pid116006 00:48:35.332 Removing: /var/run/dpdk/spdk_pid116057 00:48:35.332 Removing: /var/run/dpdk/spdk_pid116151 00:48:35.332 Removing: /var/run/dpdk/spdk_pid116278 00:48:35.332 Removing: /var/run/dpdk/spdk_pid116479 00:48:35.332 Removing: 
/var/run/dpdk/spdk_pid116564 00:48:35.332 Removing: /var/run/dpdk/spdk_pid116651 00:48:35.332 Removing: /var/run/dpdk/spdk_pid117989 00:48:35.332 Removing: /var/run/dpdk/spdk_pid118226 00:48:35.332 Removing: /var/run/dpdk/spdk_pid118446 00:48:35.332 Removing: /var/run/dpdk/spdk_pid118592 00:48:35.332 Removing: /var/run/dpdk/spdk_pid118752 00:48:35.332 Removing: /var/run/dpdk/spdk_pid118838 00:48:35.332 Removing: /var/run/dpdk/spdk_pid118869 00:48:35.332 Removing: /var/run/dpdk/spdk_pid118907 00:48:35.332 Removing: /var/run/dpdk/spdk_pid119428 00:48:35.332 Removing: /var/run/dpdk/spdk_pid119522 00:48:35.332 Removing: /var/run/dpdk/spdk_pid119650 00:48:35.332 Removing: /var/run/dpdk/spdk_pid119708 00:48:35.332 Removing: /var/run/dpdk/spdk_pid121084 00:48:35.332 Removing: /var/run/dpdk/spdk_pid121471 00:48:35.332 Removing: /var/run/dpdk/spdk_pid121683 00:48:35.332 Removing: /var/run/dpdk/spdk_pid122682 00:48:35.332 Removing: /var/run/dpdk/spdk_pid123076 00:48:35.332 Removing: /var/run/dpdk/spdk_pid123285 00:48:35.332 Removing: /var/run/dpdk/spdk_pid124277 00:48:35.332 Removing: /var/run/dpdk/spdk_pid124822 00:48:35.332 Removing: /var/run/dpdk/spdk_pid125032 00:48:35.332 Removing: /var/run/dpdk/spdk_pid127283 00:48:35.332 Removing: /var/run/dpdk/spdk_pid127804 00:48:35.332 Removing: /var/run/dpdk/spdk_pid128008 00:48:35.332 Removing: /var/run/dpdk/spdk_pid130284 00:48:35.332 Removing: /var/run/dpdk/spdk_pid130798 00:48:35.332 Removing: /var/run/dpdk/spdk_pid130994 00:48:35.332 Removing: /var/run/dpdk/spdk_pid133249 00:48:35.332 Removing: /var/run/dpdk/spdk_pid134024 00:48:35.332 Removing: /var/run/dpdk/spdk_pid134246 00:48:35.332 Removing: /var/run/dpdk/spdk_pid136763 00:48:35.332 Removing: /var/run/dpdk/spdk_pid137345 00:48:35.332 Removing: /var/run/dpdk/spdk_pid137572 00:48:35.332 Removing: /var/run/dpdk/spdk_pid140073 00:48:35.332 Removing: /var/run/dpdk/spdk_pid140654 00:48:35.332 Removing: /var/run/dpdk/spdk_pid140879 00:48:35.332 Removing: /var/run/dpdk/spdk_pid143395 00:48:35.332 Removing: /var/run/dpdk/spdk_pid144311 00:48:35.332 Removing: /var/run/dpdk/spdk_pid144537 00:48:35.332 Removing: /var/run/dpdk/spdk_pid144766 00:48:35.332 Removing: /var/run/dpdk/spdk_pid145352 00:48:35.332 Removing: /var/run/dpdk/spdk_pid146366 00:48:35.332 Removing: /var/run/dpdk/spdk_pid146872 00:48:35.590 Removing: /var/run/dpdk/spdk_pid147786 00:48:35.590 Removing: /var/run/dpdk/spdk_pid148407 00:48:35.590 Removing: /var/run/dpdk/spdk_pid149430 00:48:35.590 Removing: /var/run/dpdk/spdk_pid149990 00:48:35.590 Removing: /var/run/dpdk/spdk_pid152981 00:48:35.590 Removing: /var/run/dpdk/spdk_pid153750 00:48:35.590 Removing: /var/run/dpdk/spdk_pid154418 00:48:35.590 Removing: /var/run/dpdk/spdk_pid157661 00:48:35.590 Removing: /var/run/dpdk/spdk_pid158540 00:48:35.590 Removing: /var/run/dpdk/spdk_pid159215 00:48:35.590 Removing: /var/run/dpdk/spdk_pid160662 00:48:35.590 Removing: /var/run/dpdk/spdk_pid161219 00:48:35.590 Removing: /var/run/dpdk/spdk_pid162552 00:48:35.590 Removing: /var/run/dpdk/spdk_pid163103 00:48:35.590 Removing: /var/run/dpdk/spdk_pid164439 00:48:35.590 Removing: /var/run/dpdk/spdk_pid164992 00:48:35.590 Removing: /var/run/dpdk/spdk_pid165859 00:48:35.590 Removing: /var/run/dpdk/spdk_pid165933 00:48:35.590 Removing: /var/run/dpdk/spdk_pid165984 00:48:35.590 Removing: /var/run/dpdk/spdk_pid166063 00:48:35.590 Removing: /var/run/dpdk/spdk_pid166194 00:48:35.590 Removing: /var/run/dpdk/spdk_pid166360 00:48:35.590 Removing: /var/run/dpdk/spdk_pid166580 00:48:35.590 Removing: 
/var/run/dpdk/spdk_pid166895 00:48:35.590 Removing: /var/run/dpdk/spdk_pid166912 00:48:35.590 Removing: /var/run/dpdk/spdk_pid166966 00:48:35.590 Removing: /var/run/dpdk/spdk_pid166992 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167020 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167071 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167095 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167123 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167151 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167182 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167225 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167253 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167284 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167305 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167340 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167370 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167407 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167435 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167462 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167494 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167544 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167581 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167623 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167709 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167757 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167777 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167822 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167867 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167889 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167946 00:48:35.590 Removing: /var/run/dpdk/spdk_pid167972 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168021 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168064 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168082 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168106 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168130 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168147 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168175 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168212 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168257 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168298 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168326 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168377 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168407 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168439 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168502 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168529 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168565 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168593 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168632 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168653 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168677 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168702 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168726 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168743 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168860 00:48:35.590 Removing: /var/run/dpdk/spdk_pid168947 00:48:35.590 Removing: /var/run/dpdk/spdk_pid169118 00:48:35.590 Removing: /var/run/dpdk/spdk_pid169147 00:48:35.590 Removing: /var/run/dpdk/spdk_pid169201 00:48:35.590 Removing: /var/run/dpdk/spdk_pid169281 00:48:35.590 Removing: /var/run/dpdk/spdk_pid169319 00:48:35.590 Removing: /var/run/dpdk/spdk_pid169348 00:48:35.590 Removing: /var/run/dpdk/spdk_pid169377 00:48:35.590 Removing: /var/run/dpdk/spdk_pid169437 00:48:35.849 Removing: /var/run/dpdk/spdk_pid169466 00:48:35.849 Removing: /var/run/dpdk/spdk_pid169550 00:48:35.849 Removing: /var/run/dpdk/spdk_pid169610 00:48:35.849 Removing: 
/var/run/dpdk/spdk_pid169668 00:48:35.849 Removing: /var/run/dpdk/spdk_pid169954 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170087 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170128 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170218 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170311 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170368 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170627 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170742 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170864 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170922 00:48:35.849 Removing: /var/run/dpdk/spdk_pid170954 00:48:35.849 Removing: /var/run/dpdk/spdk_pid171041 00:48:35.849 Removing: /var/run/dpdk/spdk_pid171586 00:48:35.849 Removing: /var/run/dpdk/spdk_pid171624 00:48:35.849 Removing: /var/run/dpdk/spdk_pid171963 00:48:35.849 Removing: /var/run/dpdk/spdk_pid172088 00:48:35.849 Removing: /var/run/dpdk/spdk_pid172209 00:48:35.849 Removing: /var/run/dpdk/spdk_pid172259 00:48:35.849 Removing: /var/run/dpdk/spdk_pid172298 00:48:35.849 Removing: /var/run/dpdk/spdk_pid172330 00:48:35.849 Removing: /var/run/dpdk/spdk_pid173731 00:48:35.849 Removing: /var/run/dpdk/spdk_pid173893 00:48:35.849 Removing: /var/run/dpdk/spdk_pid173897 00:48:35.849 Removing: /var/run/dpdk/spdk_pid173914 00:48:35.849 Removing: /var/run/dpdk/spdk_pid174412 00:48:35.849 Removing: /var/run/dpdk/spdk_pid174540 00:48:35.849 Removing: /var/run/dpdk/spdk_pid175494 00:48:35.849 Removing: /var/run/dpdk/spdk_pid178817 00:48:35.849 Removing: /var/run/dpdk/spdk_pid178875 00:48:35.849 Removing: /var/run/dpdk/spdk_pid178938 00:48:35.849 Removing: /var/run/dpdk/spdk_pid179232 00:48:35.849 Removing: /var/run/dpdk/spdk_pid179452 00:48:35.849 Removing: /var/run/dpdk/spdk_pid179549 00:48:35.849 Removing: /var/run/dpdk/spdk_pid179670 00:48:35.849 Removing: /var/run/dpdk/spdk_pid179727 00:48:35.849 Removing: /var/run/dpdk/spdk_pid179759 00:48:35.849 Clean 00:48:35.849 12:00:10 -- common/autotest_common.sh@1451 -- # return 0 00:48:35.849 12:00:10 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:48:35.849 12:00:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:35.849 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:48:36.106 12:00:10 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:48:36.106 12:00:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:48:36.106 12:00:10 -- common/autotest_common.sh@10 -- # set +x 00:48:36.106 12:00:10 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:48:36.106 12:00:10 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:48:36.106 12:00:10 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:48:36.106 12:00:10 -- spdk/autotest.sh@391 -- # hash lcov 00:48:36.106 12:00:10 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:48:36.106 12:00:10 -- spdk/autotest.sh@393 -- # hostname 00:48:36.106 12:00:10 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1712646987-2220 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:48:36.363 geninfo: WARNING: invalid characters removed from testname! 
00:49:23.018 12:00:52 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:23.019 12:00:57 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:26.327 12:01:00 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:28.873 12:01:03 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:32.154 12:01:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:34.680 12:01:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:49:37.964 12:01:12 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:49:37.964 12:01:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:49:37.964 12:01:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:49:37.964 12:01:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:49:37.964 12:01:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:49:37.964 12:01:12 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:37.964 12:01:12 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:37.964 12:01:12 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:37.964 12:01:12 -- paths/export.sh@5 -- $ export PATH 00:49:37.964 12:01:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:49:37.964 12:01:12 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:49:37.964 12:01:12 -- common/autobuild_common.sh@444 -- $ date +%s 00:49:37.964 12:01:12 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720872072.XXXXXX 00:49:37.964 12:01:12 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720872072.nBPJA0 00:49:37.964 12:01:12 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:49:37.964 12:01:12 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:49:37.964 12:01:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:49:37.964 12:01:12 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:49:37.964 12:01:12 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:49:37.964 12:01:12 -- common/autobuild_common.sh@460 -- $ get_config_params 00:49:37.964 12:01:12 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:49:37.964 12:01:12 -- common/autotest_common.sh@10 -- $ set +x 00:49:37.964 12:01:12 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:49:37.964 12:01:12 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:49:37.964 12:01:12 -- pm/common@17 -- $ local monitor 00:49:37.964 12:01:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:37.964 12:01:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:37.964 12:01:12 -- pm/common@25 -- $ sleep 1 00:49:37.964 12:01:12 -- pm/common@21 -- $ date +%s 00:49:37.964 12:01:12 -- pm/common@21 -- $ date +%s 00:49:37.964 12:01:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720872072 00:49:37.964 12:01:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720872072 00:49:37.964 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720872072_collect-vmstat.pm.log 00:49:37.964 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720872072_collect-cpu-load.pm.log 00:49:38.529 12:01:13 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:49:38.529 12:01:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 
00:49:38.529 12:01:13 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:49:38.529 12:01:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:49:38.529 12:01:13 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:49:38.529 12:01:13 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:49:38.529 12:01:13 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:49:38.529 12:01:13 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:49:38.529 12:01:13 -- common/autotest_common.sh@10 -- $ set +x 00:49:38.529 12:01:13 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:49:38.529 12:01:13 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:49:38.529 12:01:13 -- spdk/autopackage.sh@40 -- $ get_config_params 00:49:38.529 12:01:13 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:49:38.529 12:01:13 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:49:38.529 12:01:13 -- common/autotest_common.sh@10 -- $ set +x 00:49:38.529 12:01:13 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:49:38.529 12:01:13 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto --disable-unit-tests 00:49:38.529 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:49:38.529 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:49:39.094 Using 'verbs' RDMA provider 00:49:51.856 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:50:04.056 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:50:04.314 Creating mk/config.mk...done. 00:50:04.314 Creating mk/cc.flags.mk...done. 00:50:04.314 Type 'make' to build. 00:50:04.314 12:01:39 -- spdk/autopackage.sh@43 -- $ make -j10 00:50:04.572 make[1]: Nothing to be done for 'all'. 
00:50:09.838 The Meson build system 00:50:09.838 Version: 1.4.0 00:50:09.838 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:50:09.838 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:50:09.838 Build type: native build 00:50:09.838 Program cat found: YES (/usr/bin/cat) 00:50:09.838 Project name: DPDK 00:50:09.838 Project version: 24.03.0 00:50:09.838 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0") 00:50:09.838 C linker for the host machine: cc ld.bfd 2.34 00:50:09.838 Host machine cpu family: x86_64 00:50:09.838 Host machine cpu: x86_64 00:50:09.838 Message: ## Building in Developer Mode ## 00:50:09.838 Program pkg-config found: YES (/usr/bin/pkg-config) 00:50:09.838 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:50:09.838 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:50:09.838 Program python3 found: YES (/usr/bin/python3) 00:50:09.838 Program cat found: YES (/usr/bin/cat) 00:50:09.838 Compiler for C supports arguments -march=native: YES 00:50:09.838 Checking for size of "void *" : 8 00:50:09.838 Checking for size of "void *" : 8 (cached) 00:50:09.838 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:50:09.838 Library m found: YES 00:50:09.838 Library numa found: YES 00:50:09.838 Has header "numaif.h" : YES 00:50:09.838 Library fdt found: NO 00:50:09.838 Library execinfo found: NO 00:50:09.838 Has header "execinfo.h" : YES 00:50:09.838 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.1 00:50:09.838 Run-time dependency libarchive found: NO (tried pkgconfig) 00:50:09.838 Run-time dependency libbsd found: NO (tried pkgconfig) 00:50:09.838 Run-time dependency jansson found: NO (tried pkgconfig) 00:50:09.838 Run-time dependency openssl found: YES 1.1.1f 00:50:09.838 Run-time dependency libpcap found: NO (tried pkgconfig) 00:50:09.838 Library pcap found: NO 00:50:09.838 Compiler for C supports arguments -Wcast-qual: YES 00:50:09.838 Compiler for C supports arguments -Wdeprecated: YES 00:50:09.838 Compiler for C supports arguments -Wformat: YES 00:50:09.838 Compiler for C supports arguments -Wformat-nonliteral: YES 00:50:09.838 Compiler for C supports arguments -Wformat-security: YES 00:50:09.838 Compiler for C supports arguments -Wmissing-declarations: YES 00:50:09.838 Compiler for C supports arguments -Wmissing-prototypes: YES 00:50:09.839 Compiler for C supports arguments -Wnested-externs: YES 00:50:09.839 Compiler for C supports arguments -Wold-style-definition: YES 00:50:09.839 Compiler for C supports arguments -Wpointer-arith: YES 00:50:09.839 Compiler for C supports arguments -Wsign-compare: YES 00:50:09.839 Compiler for C supports arguments -Wstrict-prototypes: YES 00:50:09.839 Compiler for C supports arguments -Wundef: YES 00:50:09.839 Compiler for C supports arguments -Wwrite-strings: YES 00:50:09.839 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:50:09.839 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:50:09.839 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:50:09.839 Program objdump found: YES (/usr/bin/objdump) 00:50:09.839 Compiler for C supports arguments -mavx512f: YES 00:50:09.839 Checking if "AVX512 checking" compiles: YES 00:50:09.839 Fetching value of define "__SSE4_2__" : 1 00:50:09.839 Fetching value of define "__AES__" : 1 00:50:09.839 Fetching value of define "__AVX__" : 1 00:50:09.839 Fetching value of 
define "__AVX2__" : 1 00:50:09.839 Fetching value of define "__AVX512BW__" : (undefined) 00:50:09.839 Fetching value of define "__AVX512CD__" : (undefined) 00:50:09.839 Fetching value of define "__AVX512DQ__" : (undefined) 00:50:09.839 Fetching value of define "__AVX512F__" : (undefined) 00:50:09.839 Fetching value of define "__AVX512VL__" : (undefined) 00:50:09.839 Fetching value of define "__PCLMUL__" : 1 00:50:09.839 Fetching value of define "__RDRND__" : 1 00:50:09.839 Fetching value of define "__RDSEED__" : 1 00:50:09.839 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:50:09.839 Fetching value of define "__znver1__" : (undefined) 00:50:09.839 Fetching value of define "__znver2__" : (undefined) 00:50:09.839 Fetching value of define "__znver3__" : (undefined) 00:50:09.839 Fetching value of define "__znver4__" : (undefined) 00:50:09.839 Compiler for C supports arguments -ffat-lto-objects: YES 00:50:09.839 Library asan found: YES 00:50:09.839 Compiler for C supports arguments -Wno-format-truncation: YES 00:50:09.839 Message: lib/log: Defining dependency "log" 00:50:09.839 Message: lib/kvargs: Defining dependency "kvargs" 00:50:09.839 Message: lib/telemetry: Defining dependency "telemetry" 00:50:09.839 Library rt found: YES 00:50:09.839 Checking for function "getentropy" : NO 00:50:09.839 Message: lib/eal: Defining dependency "eal" 00:50:09.839 Message: lib/ring: Defining dependency "ring" 00:50:09.839 Message: lib/rcu: Defining dependency "rcu" 00:50:09.839 Message: lib/mempool: Defining dependency "mempool" 00:50:09.839 Message: lib/mbuf: Defining dependency "mbuf" 00:50:09.839 Fetching value of define "__PCLMUL__" : 1 (cached) 00:50:09.839 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:50:09.839 Compiler for C supports arguments -mpclmul: YES 00:50:09.839 Compiler for C supports arguments -maes: YES 00:50:09.839 Compiler for C supports arguments -mavx512f: YES (cached) 00:50:09.839 Compiler for C supports arguments -mavx512bw: YES 00:50:09.839 Compiler for C supports arguments -mavx512dq: YES 00:50:09.839 Compiler for C supports arguments -mavx512vl: YES 00:50:09.839 Compiler for C supports arguments -mvpclmulqdq: YES 00:50:09.839 Compiler for C supports arguments -mavx2: YES 00:50:09.839 Compiler for C supports arguments -mavx: YES 00:50:09.839 Message: lib/net: Defining dependency "net" 00:50:09.839 Message: lib/meter: Defining dependency "meter" 00:50:09.839 Message: lib/ethdev: Defining dependency "ethdev" 00:50:09.839 Message: lib/pci: Defining dependency "pci" 00:50:09.839 Message: lib/cmdline: Defining dependency "cmdline" 00:50:09.839 Message: lib/hash: Defining dependency "hash" 00:50:09.839 Message: lib/timer: Defining dependency "timer" 00:50:09.839 Message: lib/compressdev: Defining dependency "compressdev" 00:50:09.839 Message: lib/cryptodev: Defining dependency "cryptodev" 00:50:09.839 Message: lib/dmadev: Defining dependency "dmadev" 00:50:09.839 Compiler for C supports arguments -Wno-cast-qual: YES 00:50:09.839 Message: lib/power: Defining dependency "power" 00:50:09.839 Message: lib/reorder: Defining dependency "reorder" 00:50:09.839 Message: lib/security: Defining dependency "security" 00:50:09.839 Has header "linux/userfaultfd.h" : YES 00:50:09.839 Has header "linux/vduse.h" : NO 00:50:09.839 Message: lib/vhost: Defining dependency "vhost" 00:50:09.839 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:50:09.839 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:50:09.839 Message: drivers/bus/vdev: Defining 
dependency "bus_vdev" 00:50:09.839 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:50:09.839 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:50:09.839 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:50:09.839 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:50:09.839 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:50:09.839 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:50:09.839 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:50:09.839 Program doxygen found: YES (/usr/bin/doxygen) 00:50:09.839 Configuring doxy-api-html.conf using configuration 00:50:09.839 Configuring doxy-api-man.conf using configuration 00:50:09.839 Program mandb found: YES (/usr/bin/mandb) 00:50:09.839 Program sphinx-build found: NO 00:50:09.839 Configuring rte_build_config.h using configuration 00:50:09.839 Message: 00:50:09.839 ================= 00:50:09.839 Applications Enabled 00:50:09.839 ================= 00:50:09.839 00:50:09.839 apps: 00:50:09.839 00:50:09.839 00:50:09.839 Message: 00:50:09.839 ================= 00:50:09.839 Libraries Enabled 00:50:09.839 ================= 00:50:09.839 00:50:09.839 libs: 00:50:09.839 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:50:09.839 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:50:09.839 cryptodev, dmadev, power, reorder, security, vhost, 00:50:09.839 00:50:09.839 Message: 00:50:09.839 =============== 00:50:09.839 Drivers Enabled 00:50:09.839 =============== 00:50:09.839 00:50:09.839 common: 00:50:09.839 00:50:09.839 bus: 00:50:09.839 pci, vdev, 00:50:09.839 mempool: 00:50:09.839 ring, 00:50:09.839 dma: 00:50:09.839 00:50:09.839 net: 00:50:09.839 00:50:09.839 crypto: 00:50:09.839 00:50:09.839 compress: 00:50:09.839 00:50:09.839 vdpa: 00:50:09.839 00:50:09.839 00:50:09.839 Message: 00:50:09.839 ================= 00:50:09.839 Content Skipped 00:50:09.839 ================= 00:50:09.839 00:50:09.839 apps: 00:50:09.839 dumpcap: explicitly disabled via build config 00:50:09.839 graph: explicitly disabled via build config 00:50:09.839 pdump: explicitly disabled via build config 00:50:09.839 proc-info: explicitly disabled via build config 00:50:09.839 test-acl: explicitly disabled via build config 00:50:09.839 test-bbdev: explicitly disabled via build config 00:50:09.839 test-cmdline: explicitly disabled via build config 00:50:09.839 test-compress-perf: explicitly disabled via build config 00:50:09.839 test-crypto-perf: explicitly disabled via build config 00:50:09.839 test-dma-perf: explicitly disabled via build config 00:50:09.839 test-eventdev: explicitly disabled via build config 00:50:09.839 test-fib: explicitly disabled via build config 00:50:09.839 test-flow-perf: explicitly disabled via build config 00:50:09.839 test-gpudev: explicitly disabled via build config 00:50:09.839 test-mldev: explicitly disabled via build config 00:50:09.839 test-pipeline: explicitly disabled via build config 00:50:09.839 test-pmd: explicitly disabled via build config 00:50:09.839 test-regex: explicitly disabled via build config 00:50:09.839 test-sad: explicitly disabled via build config 00:50:09.839 test-security-perf: explicitly disabled via build config 00:50:09.839 00:50:09.839 libs: 00:50:09.839 argparse: explicitly disabled via build config 00:50:09.839 metrics: explicitly disabled via build config 00:50:09.839 acl: explicitly disabled via build config 00:50:09.839 bbdev: 
explicitly disabled via build config 00:50:09.839 bitratestats: explicitly disabled via build config 00:50:09.839 bpf: explicitly disabled via build config 00:50:09.839 cfgfile: explicitly disabled via build config 00:50:09.839 distributor: explicitly disabled via build config 00:50:09.839 efd: explicitly disabled via build config 00:50:09.839 eventdev: explicitly disabled via build config 00:50:09.839 dispatcher: explicitly disabled via build config 00:50:09.839 gpudev: explicitly disabled via build config 00:50:09.839 gro: explicitly disabled via build config 00:50:09.839 gso: explicitly disabled via build config 00:50:09.839 ip_frag: explicitly disabled via build config 00:50:09.839 jobstats: explicitly disabled via build config 00:50:09.839 latencystats: explicitly disabled via build config 00:50:09.839 lpm: explicitly disabled via build config 00:50:09.839 member: explicitly disabled via build config 00:50:09.839 pcapng: explicitly disabled via build config 00:50:09.839 rawdev: explicitly disabled via build config 00:50:09.839 regexdev: explicitly disabled via build config 00:50:09.839 mldev: explicitly disabled via build config 00:50:09.839 rib: explicitly disabled via build config 00:50:09.839 sched: explicitly disabled via build config 00:50:09.839 stack: explicitly disabled via build config 00:50:09.839 ipsec: explicitly disabled via build config 00:50:09.839 pdcp: explicitly disabled via build config 00:50:09.839 fib: explicitly disabled via build config 00:50:09.839 port: explicitly disabled via build config 00:50:09.839 pdump: explicitly disabled via build config 00:50:09.839 table: explicitly disabled via build config 00:50:09.839 pipeline: explicitly disabled via build config 00:50:09.839 graph: explicitly disabled via build config 00:50:09.839 node: explicitly disabled via build config 00:50:09.839 00:50:09.839 drivers: 00:50:09.839 common/cpt: not in enabled drivers build config 00:50:09.839 common/dpaax: not in enabled drivers build config 00:50:09.839 common/iavf: not in enabled drivers build config 00:50:09.839 common/idpf: not in enabled drivers build config 00:50:09.839 common/ionic: not in enabled drivers build config 00:50:09.839 common/mvep: not in enabled drivers build config 00:50:09.839 common/octeontx: not in enabled drivers build config 00:50:09.839 bus/auxiliary: not in enabled drivers build config 00:50:09.839 bus/cdx: not in enabled drivers build config 00:50:09.839 bus/dpaa: not in enabled drivers build config 00:50:09.839 bus/fslmc: not in enabled drivers build config 00:50:09.839 bus/ifpga: not in enabled drivers build config 00:50:09.839 bus/platform: not in enabled drivers build config 00:50:09.839 bus/uacce: not in enabled drivers build config 00:50:09.839 bus/vmbus: not in enabled drivers build config 00:50:09.839 common/cnxk: not in enabled drivers build config 00:50:09.839 common/mlx5: not in enabled drivers build config 00:50:09.839 common/nfp: not in enabled drivers build config 00:50:09.839 common/nitrox: not in enabled drivers build config 00:50:09.839 common/qat: not in enabled drivers build config 00:50:09.839 common/sfc_efx: not in enabled drivers build config 00:50:09.839 mempool/bucket: not in enabled drivers build config 00:50:09.839 mempool/cnxk: not in enabled drivers build config 00:50:09.839 mempool/dpaa: not in enabled drivers build config 00:50:09.839 mempool/dpaa2: not in enabled drivers build config 00:50:09.839 mempool/octeontx: not in enabled drivers build config 00:50:09.839 mempool/stack: not in enabled drivers build config 
00:50:09.839 dma/cnxk: not in enabled drivers build config 00:50:09.839 dma/dpaa: not in enabled drivers build config 00:50:09.839 dma/dpaa2: not in enabled drivers build config 00:50:09.839 dma/hisilicon: not in enabled drivers build config 00:50:09.839 dma/idxd: not in enabled drivers build config 00:50:09.839 dma/ioat: not in enabled drivers build config 00:50:09.839 dma/skeleton: not in enabled drivers build config 00:50:09.839 net/af_packet: not in enabled drivers build config 00:50:09.839 net/af_xdp: not in enabled drivers build config 00:50:09.839 net/ark: not in enabled drivers build config 00:50:09.839 net/atlantic: not in enabled drivers build config 00:50:09.839 net/avp: not in enabled drivers build config 00:50:09.839 net/axgbe: not in enabled drivers build config 00:50:09.839 net/bnx2x: not in enabled drivers build config 00:50:09.839 net/bnxt: not in enabled drivers build config 00:50:09.839 net/bonding: not in enabled drivers build config 00:50:09.839 net/cnxk: not in enabled drivers build config 00:50:09.839 net/cpfl: not in enabled drivers build config 00:50:09.839 net/cxgbe: not in enabled drivers build config 00:50:09.839 net/dpaa: not in enabled drivers build config 00:50:09.839 net/dpaa2: not in enabled drivers build config 00:50:09.839 net/e1000: not in enabled drivers build config 00:50:09.839 net/ena: not in enabled drivers build config 00:50:09.839 net/enetc: not in enabled drivers build config 00:50:09.839 net/enetfec: not in enabled drivers build config 00:50:09.839 net/enic: not in enabled drivers build config 00:50:09.839 net/failsafe: not in enabled drivers build config 00:50:09.839 net/fm10k: not in enabled drivers build config 00:50:09.839 net/gve: not in enabled drivers build config 00:50:09.839 net/hinic: not in enabled drivers build config 00:50:09.839 net/hns3: not in enabled drivers build config 00:50:09.839 net/i40e: not in enabled drivers build config 00:50:09.839 net/iavf: not in enabled drivers build config 00:50:09.839 net/ice: not in enabled drivers build config 00:50:09.839 net/idpf: not in enabled drivers build config 00:50:09.839 net/igc: not in enabled drivers build config 00:50:09.839 net/ionic: not in enabled drivers build config 00:50:09.839 net/ipn3ke: not in enabled drivers build config 00:50:09.839 net/ixgbe: not in enabled drivers build config 00:50:09.839 net/mana: not in enabled drivers build config 00:50:09.839 net/memif: not in enabled drivers build config 00:50:09.839 net/mlx4: not in enabled drivers build config 00:50:09.839 net/mlx5: not in enabled drivers build config 00:50:09.839 net/mvneta: not in enabled drivers build config 00:50:09.839 net/mvpp2: not in enabled drivers build config 00:50:09.839 net/netvsc: not in enabled drivers build config 00:50:09.839 net/nfb: not in enabled drivers build config 00:50:09.839 net/nfp: not in enabled drivers build config 00:50:09.839 net/ngbe: not in enabled drivers build config 00:50:09.839 net/null: not in enabled drivers build config 00:50:09.839 net/octeontx: not in enabled drivers build config 00:50:09.839 net/octeon_ep: not in enabled drivers build config 00:50:09.839 net/pcap: not in enabled drivers build config 00:50:09.839 net/pfe: not in enabled drivers build config 00:50:09.839 net/qede: not in enabled drivers build config 00:50:09.839 net/ring: not in enabled drivers build config 00:50:09.839 net/sfc: not in enabled drivers build config 00:50:09.839 net/softnic: not in enabled drivers build config 00:50:09.839 net/tap: not in enabled drivers build config 00:50:09.839 
net/thunderx: not in enabled drivers build config 00:50:09.839 net/txgbe: not in enabled drivers build config 00:50:09.839 net/vdev_netvsc: not in enabled drivers build config 00:50:09.839 net/vhost: not in enabled drivers build config 00:50:09.839 net/virtio: not in enabled drivers build config 00:50:09.839 net/vmxnet3: not in enabled drivers build config 00:50:09.839 raw/*: missing internal dependency, "rawdev" 00:50:09.839 crypto/armv8: not in enabled drivers build config 00:50:09.839 crypto/bcmfs: not in enabled drivers build config 00:50:09.839 crypto/caam_jr: not in enabled drivers build config 00:50:09.839 crypto/ccp: not in enabled drivers build config 00:50:09.839 crypto/cnxk: not in enabled drivers build config 00:50:09.839 crypto/dpaa_sec: not in enabled drivers build config 00:50:09.839 crypto/dpaa2_sec: not in enabled drivers build config 00:50:09.839 crypto/ipsec_mb: not in enabled drivers build config 00:50:09.839 crypto/mlx5: not in enabled drivers build config 00:50:09.839 crypto/mvsam: not in enabled drivers build config 00:50:09.839 crypto/nitrox: not in enabled drivers build config 00:50:09.839 crypto/null: not in enabled drivers build config 00:50:09.839 crypto/octeontx: not in enabled drivers build config 00:50:09.839 crypto/openssl: not in enabled drivers build config 00:50:09.839 crypto/scheduler: not in enabled drivers build config 00:50:09.839 crypto/uadk: not in enabled drivers build config 00:50:09.839 crypto/virtio: not in enabled drivers build config 00:50:09.839 compress/isal: not in enabled drivers build config 00:50:09.839 compress/mlx5: not in enabled drivers build config 00:50:09.840 compress/nitrox: not in enabled drivers build config 00:50:09.840 compress/octeontx: not in enabled drivers build config 00:50:09.840 compress/zlib: not in enabled drivers build config 00:50:09.840 regex/*: missing internal dependency, "regexdev" 00:50:09.840 ml/*: missing internal dependency, "mldev" 00:50:09.840 vdpa/ifc: not in enabled drivers build config 00:50:09.840 vdpa/mlx5: not in enabled drivers build config 00:50:09.840 vdpa/nfp: not in enabled drivers build config 00:50:09.840 vdpa/sfc: not in enabled drivers build config 00:50:09.840 event/*: missing internal dependency, "eventdev" 00:50:09.840 baseband/*: missing internal dependency, "bbdev" 00:50:09.840 gpu/*: missing internal dependency, "gpudev" 00:50:09.840 00:50:09.840 00:50:10.406 Build targets in project: 85 00:50:10.406 00:50:10.406 DPDK 24.03.0 00:50:10.406 00:50:10.406 User defined options 00:50:10.406 default_library : static 00:50:10.406 libdir : lib 00:50:10.406 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:50:10.406 b_lto : true 00:50:10.406 b_sanitize : address 00:50:10.406 c_args : -fPIC -Werror 00:50:10.406 c_link_args : 00:50:10.406 cpu_instruction_set: native 00:50:10.406 disable_apps : graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev 00:50:10.406 disable_libs : gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,argparse,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table 00:50:10.406 enable_docs : false 00:50:10.406 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:50:10.407 enable_kmods : false 00:50:10.407 max_lcores : 128 00:50:10.407 tests : false 00:50:10.407 
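Note on the configure summary above: the "User defined options" block maps directly onto a meson command line. The following is a minimal sketch of an equivalent invocation, reconstructed only from the values logged here; the real command is generated by SPDK's DPDK build wrapper and is not echoed in this log, so treat the exact form (argument order, source/build directory arguments) as illustrative rather than the wrapper's literal call:

    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp /home/vagrant/spdk_repo/spdk/dpdk \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build --libdir=lib \
        -Ddefault_library=static -Db_lto=true -Db_sanitize=address \
        -Dc_args='-fPIC -Werror' -Dcpu_instruction_set=native \
        -Ddisable_apps=graph,dumpcap,test,test-gpudev,test-dma-perf,test-cmdline,test-compress-perf,pdump,test-fib,test-mldev,test-regex,proc-info,test-crypto-perf,test-pipeline,test-security-perf,test-acl,test-sad,test-pmd,test-flow-perf,test-bbdev,test-eventdev \
        -Ddisable_libs=gro,eventdev,lpm,efd,node,acl,bitratestats,port,graph,pipeline,pdcp,gpudev,ipsec,jobstats,dispatcher,mldev,pdump,gso,metrics,latencystats,bbdev,argparse,rawdev,stack,member,cfgfile,sched,pcapng,bpf,ip_frag,distributor,fib,regexdev,rib,table \
        -Denable_docs=false -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10

All option names and values above are copied from the summary; only the command framing is reconstructed.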
00:50:10.407 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:50:10.974 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:50:10.974 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:50:10.974 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:50:10.974 [3/267] Linking static target lib/librte_kvargs.a 00:50:10.974 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:50:10.974 [5/267] Linking static target lib/librte_log.a 00:50:10.974 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:50:11.233 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:50:11.233 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:50:11.233 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:50:11.233 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:50:11.233 [11/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:50:11.233 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:50:11.233 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:50:11.233 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:50:11.493 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:50:11.493 [16/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:50:11.493 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:50:11.493 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:50:11.752 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:50:11.752 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:50:11.752 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:50:11.752 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:50:11.752 [23/267] Linking target lib/librte_log.so.24.1 00:50:12.011 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:50:12.011 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:50:12.011 [26/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:50:12.011 [27/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:50:12.011 [28/267] Linking static target lib/librte_telemetry.a 00:50:12.011 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:50:12.011 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:50:12.011 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:50:12.011 [32/267] Linking target lib/librte_kvargs.so.24.1 00:50:12.011 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:50:12.270 [34/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:50:12.270 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:50:12.270 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:50:12.270 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:50:12.270 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:50:12.270 
[39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:50:12.270 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:50:12.270 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:50:12.529 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:50:12.529 [43/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:50:12.529 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:50:12.788 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:50:12.788 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:50:12.788 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:50:12.788 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:50:12.788 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:50:13.047 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:50:13.047 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:50:13.047 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:50:13.047 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:50:13.047 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:50:13.306 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:50:13.306 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:50:13.306 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:50:13.306 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:50:13.306 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:50:13.306 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:50:13.306 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:50:13.306 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:50:13.565 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:50:13.565 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:50:13.565 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:50:13.565 [66/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:50:13.565 [67/267] Linking target lib/librte_telemetry.so.24.1 00:50:13.824 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:50:13.824 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:50:13.824 [70/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:50:13.824 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:50:13.824 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:50:13.824 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:50:13.824 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:50:13.824 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:50:13.824 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:50:14.083 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:50:14.083 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:50:14.083 
[79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:50:14.342 [80/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:50:14.342 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:50:14.342 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:50:14.342 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:50:14.342 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:50:14.342 [85/267] Linking static target lib/librte_ring.a 00:50:14.601 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:50:14.601 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:50:14.601 [88/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:50:14.601 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:50:14.601 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:50:14.601 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:50:14.859 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:50:14.859 [93/267] Linking static target lib/librte_eal.a 00:50:14.859 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:50:14.859 [95/267] Linking static target lib/librte_mempool.a 00:50:15.117 [96/267] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:50:15.117 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:50:15.117 [98/267] Linking static target lib/net/libnet_crc_avx512_lib.a 00:50:15.117 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:50:15.117 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:50:15.117 [101/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:50:15.117 [102/267] Linking static target lib/librte_rcu.a 00:50:15.117 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:50:15.375 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:50:15.375 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:50:15.375 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:50:15.375 [107/267] Linking static target lib/librte_meter.a 00:50:15.375 [108/267] Linking static target lib/librte_net.a 00:50:15.375 [109/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:50:15.632 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:50:15.632 [111/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:50:15.632 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:50:15.632 [113/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:50:15.632 [114/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:50:15.891 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:50:15.891 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:50:16.148 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:50:16.148 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:50:16.405 [119/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:50:16.405 [120/267] Linking static target 
lib/librte_mbuf.a 00:50:16.405 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:50:16.662 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:50:16.662 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:50:16.662 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:50:16.920 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:50:16.920 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:50:16.920 [127/267] Linking static target lib/librte_pci.a 00:50:16.920 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:50:16.920 [129/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:50:16.920 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:50:16.920 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:50:16.920 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:50:17.178 [133/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:50:17.178 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:50:17.178 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:50:17.178 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:50:17.178 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:50:17.178 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:50:17.178 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:50:17.178 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:50:17.178 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:50:17.436 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:50:17.436 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:50:17.436 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:50:17.436 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:50:17.436 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:50:17.436 [147/267] Linking static target lib/librte_cmdline.a 00:50:17.695 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:50:17.695 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:50:17.954 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:50:17.954 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:50:17.954 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:50:17.954 [153/267] Linking static target lib/librte_timer.a 00:50:18.213 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:50:18.213 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:50:18.213 [156/267] Linking static target lib/librte_compressdev.a 00:50:18.213 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:50:18.471 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:50:18.471 [159/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:50:18.471 [160/267] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:50:18.471 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:50:18.471 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:50:18.729 [163/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:50:18.729 [164/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:18.729 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:50:18.729 [166/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:50:18.987 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:50:19.244 [168/267] Linking static target lib/librte_dmadev.a 00:50:19.244 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:50:19.244 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:50:19.244 [171/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:50:19.244 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:50:19.502 [173/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:50:19.502 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:50:19.502 [175/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:19.502 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:50:19.759 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:50:20.018 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:50:20.018 [179/267] Linking static target lib/librte_power.a 00:50:20.018 [180/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:50:20.018 [181/267] Linking static target lib/librte_reorder.a 00:50:20.018 [182/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:50:20.018 [183/267] Linking static target lib/librte_security.a 00:50:20.276 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:50:20.276 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:50:20.276 [186/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:50:20.534 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:50:20.534 [188/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:50:20.534 [189/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:50:20.797 [190/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:50:20.797 [191/267] Linking static target lib/librte_cryptodev.a 00:50:21.076 [192/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:50:21.076 [193/267] Linking static target lib/librte_ethdev.a 00:50:21.076 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:50:21.354 [195/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:50:21.354 [196/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:50:21.354 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:50:21.354 [198/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:50:21.626 [199/267] Linking static target lib/librte_hash.a 00:50:21.626 
[200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:50:21.885 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:50:22.144 [202/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:50:22.144 [203/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:50:22.144 [204/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:22.144 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:50:22.144 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:50:22.712 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:50:22.712 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:50:22.712 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:50:22.712 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:50:22.712 [211/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:50:22.712 [212/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:50:22.712 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:50:22.712 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:50:22.712 [215/267] Linking static target drivers/librte_bus_vdev.a 00:50:22.712 [216/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:50:22.971 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:50:22.971 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:50:22.971 [219/267] Linking static target drivers/librte_bus_pci.a 00:50:22.971 [220/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:50:22.971 [221/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:50:22.971 [222/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:23.229 [223/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:50:23.229 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:50:23.229 [225/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:50:23.229 [226/267] Linking static target drivers/librte_mempool_ring.a 00:50:23.229 [227/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:50:27.420 [228/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:50:32.683 [229/267] Linking target lib/librte_eal.so.24.1 00:50:32.683 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:50:32.683 [231/267] Linking target lib/librte_meter.so.24.1 00:50:32.683 [232/267] Linking target lib/librte_pci.so.24.1 00:50:32.683 [233/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:50:32.683 [234/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:50:32.683 [235/267] Linking target lib/librte_ring.so.24.1 00:50:32.683 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:50:32.683 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:50:32.683 [238/267] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:50:32.683 [239/267] Linking target lib/librte_timer.so.24.1 00:50:32.683 [240/267] Linking target lib/librte_dmadev.so.24.1 00:50:32.683 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:50:32.941 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:50:33.507 [243/267] Linking target lib/librte_mempool.so.24.1 00:50:33.507 [244/267] Linking target lib/librte_rcu.so.24.1 00:50:33.507 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:50:33.507 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:50:33.765 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:50:34.023 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:50:35.397 [249/267] Linking target lib/librte_mbuf.so.24.1 00:50:35.397 [250/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:50:35.656 [251/267] Linking target lib/librte_reorder.so.24.1 00:50:35.914 [252/267] Linking target lib/librte_compressdev.so.24.1 00:50:36.173 [253/267] Linking target lib/librte_net.so.24.1 00:50:36.431 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:50:37.367 [255/267] Linking target lib/librte_cmdline.so.24.1 00:50:37.626 [256/267] Linking target lib/librte_cryptodev.so.24.1 00:50:37.885 [257/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:50:38.143 [258/267] Linking target lib/librte_security.so.24.1 00:50:40.671 [259/267] Linking target lib/librte_hash.so.24.1 00:50:40.671 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:50:48.796 [261/267] Linking target lib/librte_ethdev.so.24.1 00:50:48.796 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:50:50.170 [263/267] Linking target lib/librte_power.so.24.1 00:50:54.379 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:50:54.379 [265/267] Linking static target lib/librte_vhost.a 00:50:56.283 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:51:42.998 [267/267] Linking target lib/librte_vhost.so.24.1 00:51:42.998 INFO: autodetecting backend as ninja 00:51:42.998 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:51:43.566 CC lib/ut_mock/mock.o 00:51:43.566 CC lib/ut/ut.o 00:51:43.566 CC lib/log/log.o 00:51:43.566 CC lib/log/log_flags.o 00:51:43.566 CC lib/log/log_deprecated.o 00:51:43.824 LIB libspdk_ut_mock.a 00:51:43.824 LIB libspdk_ut.a 00:51:43.824 LIB libspdk_log.a 00:51:43.824 CC lib/ioat/ioat.o 00:51:43.824 CC lib/util/base64.o 00:51:43.824 CC lib/dma/dma.o 00:51:43.824 CC lib/util/cpuset.o 00:51:43.824 CC lib/util/bit_array.o 00:51:43.824 CC lib/util/crc32.o 00:51:43.824 CC lib/util/crc16.o 00:51:43.824 CC lib/util/crc32c.o 00:51:43.824 CXX lib/trace_parser/trace.o 00:51:44.083 CC lib/vfio_user/host/vfio_user_pci.o 00:51:44.083 CC lib/vfio_user/host/vfio_user.o 00:51:44.083 CC lib/util/crc32_ieee.o 00:51:44.083 CC lib/util/crc64.o 00:51:44.083 LIB libspdk_dma.a 00:51:44.083 CC lib/util/dif.o 00:51:44.083 CC lib/util/fd.o 00:51:44.083 CC lib/util/file.o 00:51:44.083 CC lib/util/hexlify.o 00:51:44.083 LIB libspdk_ioat.a 00:51:44.083 CC lib/util/iov.o 00:51:44.083 CC lib/util/math.o 00:51:44.083 CC 
lib/util/pipe.o 00:51:44.342 CC lib/util/strerror_tls.o 00:51:44.342 LIB libspdk_vfio_user.a 00:51:44.342 CC lib/util/string.o 00:51:44.342 CC lib/util/uuid.o 00:51:44.342 CC lib/util/fd_group.o 00:51:44.342 CC lib/util/xor.o 00:51:44.342 CC lib/util/zipf.o 00:51:44.601 LIB libspdk_util.a 00:51:44.860 LIB libspdk_trace_parser.a 00:51:44.860 CC lib/env_dpdk/memory.o 00:51:44.860 CC lib/env_dpdk/env.o 00:51:44.860 CC lib/env_dpdk/pci.o 00:51:44.860 CC lib/env_dpdk/init.o 00:51:44.860 CC lib/idxd/idxd.o 00:51:44.860 CC lib/json/json_parse.o 00:51:44.860 CC lib/conf/conf.o 00:51:44.860 CC lib/vmd/vmd.o 00:51:44.860 CC lib/rdma_provider/common.o 00:51:44.860 CC lib/rdma_utils/rdma_utils.o 00:51:45.119 CC lib/rdma_provider/rdma_provider_verbs.o 00:51:45.120 LIB libspdk_rdma_utils.a 00:51:45.120 LIB libspdk_conf.a 00:51:45.120 CC lib/json/json_util.o 00:51:45.120 CC lib/json/json_write.o 00:51:45.120 CC lib/vmd/led.o 00:51:45.120 CC lib/env_dpdk/threads.o 00:51:45.120 CC lib/env_dpdk/pci_ioat.o 00:51:45.120 LIB libspdk_rdma_provider.a 00:51:45.120 CC lib/env_dpdk/pci_virtio.o 00:51:45.120 CC lib/env_dpdk/pci_vmd.o 00:51:45.120 CC lib/idxd/idxd_user.o 00:51:45.120 CC lib/env_dpdk/pci_idxd.o 00:51:45.120 LIB libspdk_vmd.a 00:51:45.120 CC lib/env_dpdk/pci_event.o 00:51:45.120 CC lib/env_dpdk/sigbus_handler.o 00:51:45.378 CC lib/env_dpdk/pci_dpdk.o 00:51:45.378 LIB libspdk_json.a 00:51:45.378 CC lib/env_dpdk/pci_dpdk_2207.o 00:51:45.378 CC lib/env_dpdk/pci_dpdk_2211.o 00:51:45.378 LIB libspdk_idxd.a 00:51:45.378 CC lib/jsonrpc/jsonrpc_server.o 00:51:45.378 CC lib/jsonrpc/jsonrpc_client.o 00:51:45.378 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:51:45.378 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:51:45.636 LIB libspdk_jsonrpc.a 00:51:45.636 LIB libspdk_env_dpdk.a 00:51:45.636 CC lib/rpc/rpc.o 00:51:45.895 LIB libspdk_rpc.a 00:51:45.895 CC lib/trace/trace.o 00:51:45.895 CC lib/trace/trace_flags.o 00:51:45.895 CC lib/trace/trace_rpc.o 00:51:45.895 CC lib/notify/notify.o 00:51:45.895 CC lib/notify/notify_rpc.o 00:51:45.895 CC lib/keyring/keyring.o 00:51:45.895 CC lib/keyring/keyring_rpc.o 00:51:46.154 LIB libspdk_notify.a 00:51:46.154 LIB libspdk_trace.a 00:51:46.154 LIB libspdk_keyring.a 00:51:46.412 CC lib/sock/sock.o 00:51:46.412 CC lib/sock/sock_rpc.o 00:51:46.412 CC lib/thread/iobuf.o 00:51:46.412 CC lib/thread/thread.o 00:51:46.671 LIB libspdk_sock.a 00:51:46.671 CC lib/nvme/nvme_ctrlr_cmd.o 00:51:46.671 CC lib/nvme/nvme_ctrlr.o 00:51:46.671 CC lib/nvme/nvme_fabric.o 00:51:46.671 CC lib/nvme/nvme_ns_cmd.o 00:51:46.671 CC lib/nvme/nvme_qpair.o 00:51:46.671 CC lib/nvme/nvme_pcie.o 00:51:46.671 CC lib/nvme/nvme_ns.o 00:51:46.671 CC lib/nvme/nvme_pcie_common.o 00:51:46.671 CC lib/nvme/nvme.o 00:51:46.930 LIB libspdk_thread.a 00:51:46.930 CC lib/nvme/nvme_quirks.o 00:51:47.189 CC lib/nvme/nvme_transport.o 00:51:47.189 CC lib/nvme/nvme_discovery.o 00:51:47.448 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:51:47.448 CC lib/accel/accel.o 00:51:47.448 CC lib/accel/accel_rpc.o 00:51:47.448 CC lib/virtio/virtio.o 00:51:47.448 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:51:47.448 CC lib/init/json_config.o 00:51:47.448 CC lib/blob/blobstore.o 00:51:47.707 CC lib/init/subsystem.o 00:51:47.707 CC lib/nvme/nvme_tcp.o 00:51:47.707 CC lib/virtio/virtio_vhost_user.o 00:51:47.707 CC lib/accel/accel_sw.o 00:51:47.707 CC lib/blob/request.o 00:51:47.707 CC lib/init/subsystem_rpc.o 00:51:47.707 CC lib/init/rpc.o 00:51:47.966 CC lib/blob/zeroes.o 00:51:47.966 CC lib/virtio/virtio_vfio_user.o 00:51:47.966 CC lib/blob/blob_bs_dev.o 00:51:47.966 CC 
lib/virtio/virtio_pci.o 00:51:47.966 LIB libspdk_init.a 00:51:47.966 CC lib/nvme/nvme_opal.o 00:51:47.966 CC lib/nvme/nvme_io_msg.o 00:51:47.966 CC lib/nvme/nvme_poll_group.o 00:51:47.966 CC lib/nvme/nvme_zns.o 00:51:47.966 CC lib/nvme/nvme_stubs.o 00:51:47.966 LIB libspdk_virtio.a 00:51:47.966 LIB libspdk_accel.a 00:51:48.236 CC lib/nvme/nvme_auth.o 00:51:48.236 CC lib/nvme/nvme_cuse.o 00:51:48.236 CC lib/event/app.o 00:51:48.237 CC lib/event/reactor.o 00:51:48.502 CC lib/event/log_rpc.o 00:51:48.502 CC lib/event/app_rpc.o 00:51:48.502 CC lib/event/scheduler_static.o 00:51:48.502 CC lib/nvme/nvme_rdma.o 00:51:48.502 LIB libspdk_event.a 00:51:48.502 CC lib/bdev/bdev_rpc.o 00:51:48.502 CC lib/bdev/bdev.o 00:51:48.502 CC lib/bdev/bdev_zone.o 00:51:48.502 CC lib/bdev/part.o 00:51:48.502 CC lib/bdev/scsi_nvme.o 00:51:49.068 LIB libspdk_blob.a 00:51:49.068 LIB libspdk_nvme.a 00:51:49.326 CC lib/blobfs/blobfs.o 00:51:49.326 CC lib/blobfs/tree.o 00:51:49.326 CC lib/lvol/lvol.o 00:51:49.583 LIB libspdk_blobfs.a 00:51:49.841 LIB libspdk_lvol.a 00:51:49.841 LIB libspdk_bdev.a 00:51:50.100 CC lib/scsi/lun.o 00:51:50.100 CC lib/scsi/dev.o 00:51:50.100 CC lib/scsi/scsi.o 00:51:50.100 CC lib/scsi/scsi_bdev.o 00:51:50.100 CC lib/scsi/port.o 00:51:50.100 CC lib/scsi/scsi_pr.o 00:51:50.100 CC lib/scsi/scsi_rpc.o 00:51:50.100 CC lib/nbd/nbd.o 00:51:50.100 CC lib/nvmf/ctrlr.o 00:51:50.100 CC lib/ftl/ftl_core.o 00:51:50.100 CC lib/ftl/ftl_init.o 00:51:50.100 CC lib/nbd/nbd_rpc.o 00:51:50.100 CC lib/scsi/task.o 00:51:50.100 CC lib/ftl/ftl_layout.o 00:51:50.100 CC lib/ftl/ftl_debug.o 00:51:50.358 CC lib/ftl/ftl_io.o 00:51:50.358 CC lib/nvmf/ctrlr_discovery.o 00:51:50.358 CC lib/nvmf/ctrlr_bdev.o 00:51:50.358 CC lib/nvmf/subsystem.o 00:51:50.358 CC lib/nvmf/nvmf.o 00:51:50.358 CC lib/ftl/ftl_sb.o 00:51:50.358 LIB libspdk_nbd.a 00:51:50.358 LIB libspdk_scsi.a 00:51:50.358 CC lib/ftl/ftl_l2p.o 00:51:50.358 CC lib/ftl/ftl_l2p_flat.o 00:51:50.358 CC lib/ftl/ftl_nv_cache.o 00:51:50.358 CC lib/nvmf/nvmf_rpc.o 00:51:50.617 CC lib/nvmf/transport.o 00:51:50.617 CC lib/nvmf/tcp.o 00:51:50.617 CC lib/nvmf/stubs.o 00:51:50.617 CC lib/iscsi/conn.o 00:51:50.617 CC lib/vhost/vhost.o 00:51:50.617 CC lib/vhost/vhost_rpc.o 00:51:50.617 CC lib/nvmf/mdns_server.o 00:51:50.875 CC lib/ftl/ftl_band.o 00:51:50.875 CC lib/ftl/ftl_band_ops.o 00:51:50.875 CC lib/ftl/ftl_writer.o 00:51:50.875 CC lib/ftl/ftl_rq.o 00:51:50.875 CC lib/iscsi/init_grp.o 00:51:50.875 CC lib/iscsi/iscsi.o 00:51:50.875 CC lib/iscsi/md5.o 00:51:51.133 CC lib/iscsi/param.o 00:51:51.133 CC lib/iscsi/portal_grp.o 00:51:51.133 CC lib/iscsi/tgt_node.o 00:51:51.133 CC lib/ftl/ftl_reloc.o 00:51:51.133 CC lib/iscsi/iscsi_subsystem.o 00:51:51.133 CC lib/vhost/vhost_scsi.o 00:51:51.133 CC lib/vhost/vhost_blk.o 00:51:51.133 CC lib/ftl/ftl_l2p_cache.o 00:51:51.133 CC lib/ftl/ftl_p2l.o 00:51:51.390 CC lib/ftl/mngt/ftl_mngt.o 00:51:51.390 CC lib/nvmf/rdma.o 00:51:51.390 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:51:51.390 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:51:51.390 CC lib/ftl/mngt/ftl_mngt_startup.o 00:51:51.390 CC lib/iscsi/iscsi_rpc.o 00:51:51.390 CC lib/iscsi/task.o 00:51:51.390 CC lib/ftl/mngt/ftl_mngt_md.o 00:51:51.390 CC lib/vhost/rte_vhost_user.o 00:51:51.390 CC lib/ftl/mngt/ftl_mngt_misc.o 00:51:51.649 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:51:51.649 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:51:51.649 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:51:51.649 CC lib/ftl/mngt/ftl_mngt_band.o 00:51:51.649 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:51:51.649 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:51:51.649 
LIB libspdk_iscsi.a 00:51:51.908 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:51:51.908 CC lib/ftl/utils/ftl_conf.o 00:51:51.908 CC lib/ftl/utils/ftl_md.o 00:51:51.908 CC lib/ftl/utils/ftl_mempool.o 00:51:51.908 CC lib/ftl/utils/ftl_bitmap.o 00:51:51.908 CC lib/ftl/utils/ftl_property.o 00:51:51.908 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:51:51.908 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:51:51.908 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:51:51.908 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:51:51.908 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:51:52.167 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:51:52.167 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:51:52.167 CC lib/ftl/upgrade/ftl_sb_v3.o 00:51:52.167 CC lib/ftl/upgrade/ftl_sb_v5.o 00:51:52.167 CC lib/ftl/nvc/ftl_nvc_dev.o 00:51:52.167 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:51:52.167 LIB libspdk_vhost.a 00:51:52.167 CC lib/ftl/base/ftl_base_dev.o 00:51:52.167 CC lib/ftl/base/ftl_base_bdev.o 00:51:52.167 LIB libspdk_nvmf.a 00:51:52.426 LIB libspdk_ftl.a 00:51:52.684 CC module/env_dpdk/env_dpdk_rpc.o 00:51:52.684 CC module/accel/dsa/accel_dsa.o 00:51:52.684 CC module/accel/iaa/accel_iaa.o 00:51:52.684 CC module/keyring/file/keyring.o 00:51:52.684 CC module/keyring/linux/keyring.o 00:51:52.684 CC module/accel/error/accel_error.o 00:51:52.684 CC module/scheduler/dynamic/scheduler_dynamic.o 00:51:52.684 CC module/sock/posix/posix.o 00:51:52.684 CC module/blob/bdev/blob_bdev.o 00:51:52.684 CC module/accel/ioat/accel_ioat.o 00:51:52.684 LIB libspdk_env_dpdk_rpc.a 00:51:52.684 CC module/keyring/linux/keyring_rpc.o 00:51:52.684 CC module/keyring/file/keyring_rpc.o 00:51:52.684 CC module/accel/iaa/accel_iaa_rpc.o 00:51:52.684 CC module/accel/error/accel_error_rpc.o 00:51:52.943 LIB libspdk_scheduler_dynamic.a 00:51:52.943 CC module/accel/ioat/accel_ioat_rpc.o 00:51:52.943 CC module/accel/dsa/accel_dsa_rpc.o 00:51:52.943 LIB libspdk_keyring_linux.a 00:51:52.943 LIB libspdk_blob_bdev.a 00:51:52.943 LIB libspdk_keyring_file.a 00:51:52.944 LIB libspdk_accel_iaa.a 00:51:52.944 LIB libspdk_accel_error.a 00:51:52.944 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:51:52.944 LIB libspdk_accel_ioat.a 00:51:52.944 LIB libspdk_accel_dsa.a 00:51:52.944 CC module/scheduler/gscheduler/gscheduler.o 00:51:52.944 CC module/bdev/error/vbdev_error.o 00:51:52.944 CC module/bdev/delay/vbdev_delay.o 00:51:52.944 LIB libspdk_scheduler_dpdk_governor.a 00:51:52.944 CC module/blobfs/bdev/blobfs_bdev.o 00:51:52.944 CC module/bdev/lvol/vbdev_lvol.o 00:51:52.944 CC module/bdev/malloc/bdev_malloc.o 00:51:52.944 CC module/bdev/null/bdev_null.o 00:51:52.944 CC module/bdev/null/bdev_null_rpc.o 00:51:53.203 CC module/bdev/gpt/gpt.o 00:51:53.203 LIB libspdk_sock_posix.a 00:51:53.203 LIB libspdk_scheduler_gscheduler.a 00:51:53.203 CC module/bdev/gpt/vbdev_gpt.o 00:51:53.203 CC module/bdev/malloc/bdev_malloc_rpc.o 00:51:53.203 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:51:53.203 CC module/bdev/error/vbdev_error_rpc.o 00:51:53.203 CC module/bdev/delay/vbdev_delay_rpc.o 00:51:53.203 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:51:53.203 LIB libspdk_bdev_null.a 00:51:53.203 LIB libspdk_bdev_gpt.a 00:51:53.203 LIB libspdk_bdev_malloc.a 00:51:53.462 LIB libspdk_bdev_delay.a 00:51:53.462 LIB libspdk_bdev_error.a 00:51:53.462 LIB libspdk_blobfs_bdev.a 00:51:53.462 CC module/bdev/passthru/vbdev_passthru.o 00:51:53.462 CC module/bdev/nvme/bdev_nvme.o 00:51:53.462 CC module/bdev/nvme/bdev_nvme_rpc.o 00:51:53.462 CC module/bdev/raid/bdev_raid.o 00:51:53.462 CC module/bdev/raid/bdev_raid_rpc.o 00:51:53.462 CC 
module/bdev/split/vbdev_split.o 00:51:53.462 LIB libspdk_bdev_lvol.a 00:51:53.462 CC module/bdev/zone_block/vbdev_zone_block.o 00:51:53.462 CC module/bdev/aio/bdev_aio.o 00:51:53.462 CC module/bdev/ftl/bdev_ftl.o 00:51:53.462 CC module/bdev/split/vbdev_split_rpc.o 00:51:53.462 CC module/bdev/nvme/nvme_rpc.o 00:51:53.720 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:51:53.720 CC module/bdev/nvme/bdev_mdns_client.o 00:51:53.720 LIB libspdk_bdev_split.a 00:51:53.720 CC module/bdev/ftl/bdev_ftl_rpc.o 00:51:53.720 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:51:53.720 CC module/bdev/raid/bdev_raid_sb.o 00:51:53.720 CC module/bdev/aio/bdev_aio_rpc.o 00:51:53.720 CC module/bdev/raid/raid0.o 00:51:53.720 LIB libspdk_bdev_passthru.a 00:51:53.720 CC module/bdev/raid/raid1.o 00:51:53.720 CC module/bdev/raid/concat.o 00:51:53.720 CC module/bdev/raid/raid5f.o 00:51:53.720 LIB libspdk_bdev_zone_block.a 00:51:53.979 LIB libspdk_bdev_ftl.a 00:51:53.979 LIB libspdk_bdev_aio.a 00:51:53.979 CC module/bdev/nvme/vbdev_opal.o 00:51:53.979 CC module/bdev/nvme/vbdev_opal_rpc.o 00:51:53.979 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:51:53.979 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:51:53.979 CC module/bdev/iscsi/bdev_iscsi.o 00:51:53.979 CC module/bdev/virtio/bdev_virtio_scsi.o 00:51:53.979 CC module/bdev/virtio/bdev_virtio_blk.o 00:51:53.979 CC module/bdev/virtio/bdev_virtio_rpc.o 00:51:53.979 LIB libspdk_bdev_raid.a 00:51:54.237 LIB libspdk_bdev_iscsi.a 00:51:54.237 LIB libspdk_bdev_virtio.a 00:51:54.495 LIB libspdk_bdev_nvme.a 00:51:54.753 CC module/event/subsystems/vmd/vmd_rpc.o 00:51:54.753 CC module/event/subsystems/vmd/vmd.o 00:51:54.753 CC module/event/subsystems/scheduler/scheduler.o 00:51:54.753 CC module/event/subsystems/iobuf/iobuf.o 00:51:54.753 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:51:54.753 CC module/event/subsystems/keyring/keyring.o 00:51:54.753 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:51:54.753 CC module/event/subsystems/sock/sock.o 00:51:55.011 LIB libspdk_event_vhost_blk.a 00:51:55.011 LIB libspdk_event_scheduler.a 00:51:55.011 LIB libspdk_event_vmd.a 00:51:55.011 LIB libspdk_event_keyring.a 00:51:55.011 LIB libspdk_event_iobuf.a 00:51:55.011 LIB libspdk_event_sock.a 00:51:55.011 CC module/event/subsystems/accel/accel.o 00:51:55.269 LIB libspdk_event_accel.a 00:51:55.269 CC module/event/subsystems/bdev/bdev.o 00:51:55.526 LIB libspdk_event_bdev.a 00:51:55.784 CC module/event/subsystems/scsi/scsi.o 00:51:55.784 CC module/event/subsystems/nbd/nbd.o 00:51:55.784 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:51:55.784 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:51:55.784 LIB libspdk_event_nbd.a 00:51:55.784 LIB libspdk_event_scsi.a 00:51:55.784 LIB libspdk_event_nvmf.a 00:51:56.042 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:51:56.042 CC module/event/subsystems/iscsi/iscsi.o 00:51:56.042 LIB libspdk_event_vhost_scsi.a 00:51:56.042 LIB libspdk_event_iscsi.a 00:51:56.300 CC test/rpc_client/rpc_client_test.o 00:51:56.300 CXX app/trace/trace.o 00:51:56.300 CC app/trace_record/trace_record.o 00:51:56.300 TEST_HEADER include/spdk/config.h 00:51:56.300 CXX test/cpp_headers/ioat.o 00:51:56.300 CC app/nvmf_tgt/nvmf_main.o 00:51:56.300 CC test/thread/poller_perf/poller_perf.o 00:51:56.558 CC test/dma/test_dma/test_dma.o 00:51:56.558 CC examples/util/zipf/zipf.o 00:51:56.558 LINK rpc_client_test 00:51:56.558 CC test/env/mem_callbacks/mem_callbacks.o 00:51:56.558 CC test/app/bdev_svc/bdev_svc.o 00:51:56.558 CXX test/cpp_headers/blobfs.o 00:51:56.558 LINK poller_perf 
00:51:56.558 LINK spdk_trace_record 00:51:56.558 LINK nvmf_tgt 00:51:56.558 LINK zipf 00:51:56.558 LINK bdev_svc 00:51:56.816 CXX test/cpp_headers/notify.o 00:51:56.816 LINK test_dma 00:51:56.816 LINK spdk_trace 00:51:56.816 CXX test/cpp_headers/pipe.o 00:51:56.816 LINK mem_callbacks 00:51:57.075 CXX test/cpp_headers/accel.o 00:51:57.333 CXX test/cpp_headers/file.o 00:51:57.900 CXX test/cpp_headers/version.o 00:51:57.900 CXX test/cpp_headers/trace_parser.o 00:51:58.835 CXX test/cpp_headers/opal_spec.o 00:51:59.792 CXX test/cpp_headers/uuid.o 00:52:00.358 CXX test/cpp_headers/likely.o 00:52:01.291 CXX test/cpp_headers/dif.o 00:52:01.857 CXX test/cpp_headers/keyring_module.o 00:52:02.790 CXX test/cpp_headers/memory.o 00:52:03.357 CXX test/cpp_headers/vfio_user_pci.o 00:52:04.293 CXX test/cpp_headers/dma.o 00:52:04.860 CXX test/cpp_headers/nbd.o 00:52:04.860 CXX test/cpp_headers/conf.o 00:52:05.428 CXX test/cpp_headers/env_dpdk.o 00:52:05.996 CXX test/cpp_headers/nvmf_spec.o 00:52:05.996 CC test/thread/lock/spdk_lock.o 00:52:06.564 CXX test/cpp_headers/iscsi_spec.o 00:52:07.132 CXX test/cpp_headers/mmio.o 00:52:07.699 CXX test/cpp_headers/json.o 00:52:07.958 CXX test/cpp_headers/opal.o 00:52:08.533 CXX test/cpp_headers/bdev.o 00:52:09.131 CXX test/cpp_headers/keyring.o 00:52:09.131 LINK spdk_lock 00:52:09.131 CC app/iscsi_tgt/iscsi_tgt.o 00:52:09.131 CC examples/ioat/perf/perf.o 00:52:09.698 CC examples/vmd/lsvmd/lsvmd.o 00:52:09.698 CXX test/cpp_headers/base64.o 00:52:09.698 LINK iscsi_tgt 00:52:09.957 LINK ioat_perf 00:52:09.957 LINK lsvmd 00:52:10.525 CXX test/cpp_headers/blobfs_bdev.o 00:52:11.462 CXX test/cpp_headers/nvme_ocssd.o 00:52:12.399 CXX test/cpp_headers/fd.o 00:52:12.966 CXX test/cpp_headers/barrier.o 00:52:12.966 CC test/env/vtophys/vtophys.o 00:52:13.535 CXX test/cpp_headers/scsi_spec.o 00:52:13.794 LINK vtophys 00:52:14.730 CXX test/cpp_headers/zipf.o 00:52:15.667 CXX test/cpp_headers/nvmf.o 00:52:16.602 CXX test/cpp_headers/queue.o 00:52:16.861 CXX test/cpp_headers/xor.o 00:52:17.798 CXX test/cpp_headers/cpuset.o 00:52:18.364 CXX test/cpp_headers/thread.o 00:52:19.297 CC examples/vmd/led/led.o 00:52:19.297 CXX test/cpp_headers/bdev_zone.o 00:52:20.230 LINK led 00:52:20.487 CXX test/cpp_headers/fd_group.o 00:52:21.863 CXX test/cpp_headers/tree.o 00:52:21.863 CXX test/cpp_headers/blob_bdev.o 00:52:23.239 CXX test/cpp_headers/crc64.o 00:52:24.615 CXX test/cpp_headers/assert.o 00:52:25.993 CXX test/cpp_headers/nvme_spec.o 00:52:27.371 CXX test/cpp_headers/endian.o 00:52:28.318 CXX test/cpp_headers/pci_ids.o 00:52:29.691 CXX test/cpp_headers/log.o 00:52:31.068 CXX test/cpp_headers/nvme_ocssd_spec.o 00:52:32.970 CXX test/cpp_headers/ftl.o 00:52:34.346 CXX test/cpp_headers/config.o 00:52:34.604 CXX test/cpp_headers/vhost.o 00:52:35.981 CXX test/cpp_headers/bdev_module.o 00:52:37.355 CXX test/cpp_headers/nvme_intel.o 00:52:38.730 CXX test/cpp_headers/idxd_spec.o 00:52:39.664 CXX test/cpp_headers/crc16.o 00:52:41.069 CC examples/ioat/verify/verify.o 00:52:41.069 CXX test/cpp_headers/nvme.o 00:52:42.007 CXX test/cpp_headers/stdinc.o 00:52:42.266 LINK verify 00:52:43.202 CXX test/cpp_headers/scsi.o 00:52:44.578 CXX test/cpp_headers/nvmf_fc_spec.o 00:52:45.955 CXX test/cpp_headers/idxd.o 00:52:46.889 CXX test/cpp_headers/hexlify.o 00:52:46.889 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:52:48.263 LINK env_dpdk_post_init 00:52:48.263 CXX test/cpp_headers/reduce.o 00:52:49.196 CXX test/cpp_headers/crc32.o 00:52:50.571 CXX test/cpp_headers/init.o 00:52:51.507 CXX 
test/cpp_headers/nvmf_transport.o 00:52:52.442 CC test/env/memory/memory_ut.o 00:52:53.009 CXX test/cpp_headers/nvme_zns.o 00:52:54.388 CXX test/cpp_headers/vfio_user_spec.o 00:52:55.325 CXX test/cpp_headers/util.o 00:52:56.262 CXX test/cpp_headers/jsonrpc.o 00:52:56.830 LINK memory_ut 00:52:57.089 CXX test/cpp_headers/env.o 00:52:58.026 CC test/env/pci/pci_ut.o 00:52:58.026 CXX test/cpp_headers/nvmf_cmd.o 00:52:59.947 CXX test/cpp_headers/lvol.o 00:52:59.947 LINK pci_ut 00:53:00.883 CXX test/cpp_headers/histogram_data.o 00:53:02.260 CXX test/cpp_headers/event.o 00:53:03.638 CXX test/cpp_headers/trace.o 00:53:04.574 CXX test/cpp_headers/ioat_spec.o 00:53:05.949 CXX test/cpp_headers/string.o 00:53:06.884 CXX test/cpp_headers/ublk.o 00:53:08.259 CXX test/cpp_headers/bit_array.o 00:53:08.825 CXX test/cpp_headers/scheduler.o 00:53:09.760 CXX test/cpp_headers/blob.o 00:53:10.019 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:53:10.954 CXX test/cpp_headers/gpt_spec.o 00:53:11.213 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:53:12.148 CXX test/cpp_headers/sock.o 00:53:12.148 LINK nvme_fuzz 00:53:13.139 CXX test/cpp_headers/vmd.o 00:53:14.115 CXX test/cpp_headers/rpc.o 00:53:15.052 CXX test/cpp_headers/accel_module.o 00:53:16.431 CXX test/cpp_headers/bit_pool.o 00:53:16.999 CC examples/idxd/perf/perf.o 00:53:16.999 LINK iscsi_fuzz 00:53:17.937 CC examples/interrupt_tgt/interrupt_tgt.o 00:53:18.196 LINK idxd_perf 00:53:18.762 LINK interrupt_tgt 00:53:26.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:53:26.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:53:29.430 LINK vhost_fuzz 00:53:39.402 CC test/event/event_perf/event_perf.o 00:53:39.661 LINK event_perf 00:54:01.593 CC test/event/reactor/reactor.o 00:54:01.593 LINK reactor 00:54:23.525 CC test/event/reactor_perf/reactor_perf.o 00:54:23.525 CC test/event/app_repeat/app_repeat.o 00:54:23.525 LINK app_repeat 00:54:23.525 LINK reactor_perf 00:54:25.429 CC test/event/scheduler/scheduler.o 00:54:26.366 LINK scheduler 00:54:26.624 CC test/accel/dif/dif.o 00:54:27.191 CC test/app/histogram_perf/histogram_perf.o 00:54:27.758 LINK histogram_perf 00:54:28.692 LINK dif 00:54:32.976 CC examples/thread/thread/thread_ex.o 00:54:33.914 LINK thread 00:54:38.105 CC examples/sock/hello_world/hello_sock.o 00:54:39.040 LINK hello_sock 00:54:39.975 CC app/spdk_tgt/spdk_tgt.o 00:54:41.351 LINK spdk_tgt 00:54:53.555 CC test/app/jsoncat/jsoncat.o 00:54:53.555 CC test/blobfs/mkfs/mkfs.o 00:54:53.555 LINK jsoncat 00:54:53.555 LINK mkfs 00:54:55.460 CC test/lvol/esnap/esnap.o 00:54:56.028 CC test/nvme/aer/aer.o 00:54:57.407 LINK aer 00:55:15.490 LINK esnap 00:55:16.425 CC test/app/stub/stub.o 00:55:16.684 CC test/nvme/reset/reset.o 00:55:17.251 LINK stub 00:55:18.187 CC test/nvme/sgl/sgl.o 00:55:18.187 LINK reset 00:55:19.560 LINK sgl 00:56:06.252 CC test/nvme/e2edp/nvme_dp.o 00:56:06.252 LINK nvme_dp 00:56:06.252 CC test/nvme/overhead/overhead.o 00:56:07.672 LINK overhead 00:56:34.211 CC test/nvme/err_injection/err_injection.o 00:56:34.211 CC test/nvme/startup/startup.o 00:56:34.779 LINK err_injection 00:56:35.349 LINK startup 00:56:35.608 CC test/nvme/reserve/reserve.o 00:56:37.513 LINK reserve 00:57:09.599 CC test/nvme/simple_copy/simple_copy.o 00:57:09.858 LINK simple_copy 00:57:12.395 CC examples/nvme/hello_world/hello_world.o 00:57:13.773 LINK hello_world 00:57:14.710 CC test/nvme/connect_stress/connect_stress.o 00:57:15.647 LINK connect_stress 00:57:16.217 CC test/nvme/boot_partition/boot_partition.o 00:57:17.153 LINK boot_partition 00:57:22.423 CC 
test/nvme/compliance/nvme_compliance.o 00:57:24.328 LINK nvme_compliance 00:57:34.301 CC examples/accel/perf/accel_perf.o 00:57:35.679 LINK accel_perf 00:57:37.118 CC examples/blob/hello_world/hello_blob.o 00:57:37.118 CC examples/blob/cli/blobcli.o 00:57:38.055 LINK hello_blob 00:57:39.441 LINK blobcli 00:57:44.714 CC examples/nvme/reconnect/reconnect.o 00:57:46.092 LINK reconnect 00:58:08.025 CC examples/nvme/nvme_manage/nvme_manage.o 00:58:08.589 LINK nvme_manage 00:58:09.525 CC test/nvme/fused_ordering/fused_ordering.o 00:58:10.092 CC examples/nvme/arbitration/arbitration.o 00:58:10.351 LINK fused_ordering 00:58:11.287 LINK arbitration 00:58:11.546 CC examples/nvme/hotplug/hotplug.o 00:58:12.481 LINK hotplug 00:58:12.481 CC app/spdk_lspci/spdk_lspci.o 00:58:13.048 LINK spdk_lspci 00:58:15.576 CC examples/nvme/cmb_copy/cmb_copy.o 00:58:16.143 LINK cmb_copy 00:58:28.346 CC examples/nvme/abort/abort.o 00:58:29.721 LINK abort 00:58:47.810 CC examples/bdev/hello_world/hello_bdev.o 00:58:48.085 LINK hello_bdev 00:59:10.039 CC test/nvme/doorbell_aers/doorbell_aers.o 00:59:10.973 LINK doorbell_aers 00:59:12.349 CC test/nvme/fdp/fdp.o 00:59:13.286 CC app/spdk_nvme_perf/perf.o 00:59:13.286 LINK fdp 00:59:14.666 CC app/spdk_nvme_identify/identify.o 00:59:15.234 CC test/nvme/cuse/cuse.o 00:59:15.492 LINK spdk_nvme_perf 00:59:16.868 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:59:17.127 LINK spdk_nvme_identify 00:59:17.695 LINK pmr_persistence 00:59:19.598 LINK cuse 00:59:29.599 CC app/spdk_nvme_discover/discovery_aer.o 00:59:30.165 LINK spdk_nvme_discover 00:59:34.354 CC app/spdk_top/spdk_top.o 00:59:38.552 LINK spdk_top 01:00:10.636 CC examples/bdev/bdevperf/bdevperf.o 01:00:11.202 CC app/vhost/vhost.o 01:00:11.460 CC app/spdk_dd/spdk_dd.o 01:00:12.028 LINK vhost 01:00:12.618 LINK spdk_dd 01:00:13.211 LINK bdevperf 01:00:13.469 CC test/bdev/bdevio/bdevio.o 01:00:14.405 CC app/fio/nvme/fio_plugin.o 01:00:14.405 LINK bdevio 01:00:14.664 LINK spdk_nvme 01:00:17.199 CC app/fio/bdev/fio_plugin.o 01:00:17.767 LINK spdk_bdev 01:02:24.234 CC examples/nvmf/nvmf/nvmf.o 01:02:24.234 LINK nvmf 01:02:42.316 12:14:14 -- spdk/autopackage.sh@44 -- $ make -j10 clean 01:02:42.316 make[1]: Nothing to be done for 'clean'. 
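Note on the test-build phase above: the long run of "CXX test/cpp_headers/<name>.o" lines is SPDK's header C++-compatibility check, where each public header is compiled on its own as a C++ translation unit. In effect, each of those objects amounts to something like the following sketch (the generated source file and the exact compiler flags are not shown in this log; g++, the include path, and the header chosen here are illustrative assumptions):

    # Compile one public SPDK header as a standalone C++ translation unit,
    # which is roughly what each CXX test/cpp_headers/<name>.o step does.
    cd /home/vagrant/spdk_repo/spdk
    echo '#include "spdk/nvme.h"' | g++ -x c++ -std=c++11 -Iinclude -c -o /tmp/nvme_hdr.o -

A header that drags in C-only constructs would fail this compile, which is the point of the check.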
01:02:44.852 12:14:19 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 01:02:44.852 12:14:19 -- common/autotest_common.sh@728 -- $ xtrace_disable 01:02:44.852 12:14:19 -- common/autotest_common.sh@10 -- $ set +x 01:02:44.852 12:14:19 -- spdk/autopackage.sh@48 -- $ timing_finish 01:02:44.852 12:14:19 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 01:02:44.852 12:14:19 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 01:02:44.852 12:14:19 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 01:02:44.852 12:14:19 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 01:02:44.852 12:14:19 -- pm/common@29 -- $ signal_monitor_resources TERM 01:02:44.852 12:14:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 01:02:44.852 12:14:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:02:44.852 12:14:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 01:02:44.852 12:14:19 -- pm/common@44 -- $ pid=181426 01:02:44.852 12:14:19 -- pm/common@50 -- $ kill -TERM 181426 01:02:44.852 12:14:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 01:02:44.852 12:14:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 01:02:44.852 12:14:19 -- pm/common@44 -- $ pid=181427 01:02:44.852 12:14:19 -- pm/common@50 -- $ kill -TERM 181427 01:02:44.852 + [[ -n 2384 ]] 01:02:44.852 + sudo kill 2384 01:02:44.852 sudo: /etc/sudoers.d/99-spdk-rlimits:1 unknown defaults entry "rlimit_core" 01:02:45.119 [Pipeline] } 01:02:45.138 [Pipeline] // timeout 01:02:45.143 [Pipeline] } 01:02:45.159 [Pipeline] // stage 01:02:45.164 [Pipeline] } 01:02:45.181 [Pipeline] // catchError 01:02:45.195 [Pipeline] stage 01:02:45.206 [Pipeline] { (Stop VM) 01:02:45.248 [Pipeline] sh 01:02:45.520 + vagrant halt 01:02:48.802 ==> default: Halting domain... 01:02:58.784 [Pipeline] sh 01:02:59.058 + vagrant destroy -f 01:03:01.643 ==> default: Removing domain... 01:03:02.233 [Pipeline] sh 01:03:02.507 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest/output 01:03:02.515 [Pipeline] } 01:03:02.533 [Pipeline] // stage 01:03:02.537 [Pipeline] } 01:03:02.557 [Pipeline] // dir 01:03:02.562 [Pipeline] } 01:03:02.578 [Pipeline] // wrap 01:03:02.583 [Pipeline] } 01:03:02.597 [Pipeline] // catchError 01:03:02.605 [Pipeline] stage 01:03:02.607 [Pipeline] { (Epilogue) 01:03:02.621 [Pipeline] sh 01:03:02.900 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 01:03:20.993 [Pipeline] catchError 01:03:20.995 [Pipeline] { 01:03:21.008 [Pipeline] sh 01:03:21.286 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 01:03:21.545 Artifacts sizes are good 01:03:21.554 [Pipeline] } 01:03:21.571 [Pipeline] // catchError 01:03:21.583 [Pipeline] archiveArtifacts 01:03:21.590 Archiving artifacts 01:03:21.941 [Pipeline] cleanWs 01:03:21.951 [WS-CLEANUP] Deleting project workspace... 01:03:21.952 [WS-CLEANUP] Deferred wipeout is used... 01:03:21.957 [WS-CLEANUP] done 01:03:21.959 [Pipeline] } 01:03:21.977 [Pipeline] // stage 01:03:21.984 [Pipeline] } 01:03:22.003 [Pipeline] // node 01:03:22.009 [Pipeline] End of Pipeline 01:03:22.046 Finished: SUCCESS